OpenCV foreground detection is slow

Posted 2024-09-17 10:19:24


I am trying to implement the codebook foreground detection algorithm outlined here in the book Learning OpenCV.

The algorithm only describes a codebook-based approach for each pixel of the picture. So I took the simplest approach that came to mind - to have an array of codebooks, one for each pixel, much like the matrix structure underlying IplImage. The length of the array is equal to the number of pixels in the image.

I wrote the following two loops to learn the background and segment the foreground. It uses my limited understanding of the matrix structure inside the src image, and uses pointer arithmetic to traverse the pixels.

  void foreground(IplImage* src, IplImage* dst, codeBook* c, int* minMod, int* maxMod){

      int height = src->height;
      int width  = src->width;

      uchar* srcCurrent  = (uchar*) src->imageData;
      uchar* srcRowHead  = srcCurrent;
      int    srcChannels = src->nChannels;
      int    srcRowWidth = src->widthStep;  // row stride in bytes (may include padding)

      uchar* dstCurrent  = (uchar*) dst->imageData;
      uchar* dstRowHead  = dstCurrent;
      // dst has 1 channel
      int    dstRowWidth = dst->widthStep;

      for(int row = 0; row < height; row++){
          for(int column = 0; column < width; column++){
              (*dstCurrent) = find_foreground(srcCurrent, (*c), srcChannels, minMod, maxMod);
              dstCurrent++;
              c++;                         // advance to the next pixel's codebook
              srcCurrent += srcChannels;
          }
          // jump to the next row via the stride, not width * channels
          srcCurrent = srcRowHead + srcRowWidth;
          srcRowHead = srcCurrent;
          dstCurrent = dstRowHead + dstRowWidth;
          dstRowHead = dstCurrent;
      }
  }

  void background(IplImage* src, codeBook* c, unsigned* learnBounds){

      int height = src->height;
      int width  = src->width;

      uchar* srcCurrent  = (uchar*) src->imageData;
      uchar* srcRowHead  = srcCurrent;
      int    srcChannels = src->nChannels;
      int    srcRowWidth = src->widthStep;

      for(int row = 0; row < height; row++){
          for(int column = 0; column < width; column++){
              // row-major index into the codebook array
              // (note: c[row*column] would be a bug - it repeats and skips entries)
              update_codebook(srcCurrent, c[row * width + column], learnBounds, srcChannels);
              srcCurrent += srcChannels;
          }
          srcCurrent = srcRowHead + srcRowWidth;
          srcRowHead = srcCurrent;
      }
  }

The program works, but is very sluggish. Is there something obvious that is slowing it down, or is the problem inherent in this simple implementation? Is there anything I can do to speed it up? The entries in each codebook are kept in no particular order, so processing a pixel takes time linear in the number of code elements. Double the background samples and the program runs roughly twice as slow per pixel, which is then magnified by the number of pixels. But as the implementation stands, I don't see any clear, logical way to sort the code element entries.

I am aware that there is an example implementation of the same algorithm in the OpenCV samples. However, that structure seems much more complex. I am looking to understand the reasoning behind this method; I know I can just modify the sample for real-life applications.

Thanks


Comments (1)

栀梦 2024-09-24 10:19:24


Operating on every pixel in an image is going to be slow, regardless of how you implement it.
