I am trying to implement the codebook foreground detection algorithm outlined in the book Learning OpenCV.
The algorithm only describes a codebook-based approach for each pixel of the picture. So I took the simplest approach that came to mind: an array of codebooks, one per pixel, much like the matrix structure underlying IplImage. The length of the array equals the number of pixels in the image.
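For context, the per-pixel structure I have in mind looks roughly like this. The field names here are my own guesses, loosely modeled on the book's description, not the book's exact code:

struct codeElement {
    uchar learnHigh[3];  // per-channel upper learning threshold
    uchar learnLow[3];   // per-channel lower learning threshold
    uchar max[3];        // highest value accepted per channel
    uchar min[3];        // lowest value accepted per channel
    int   tLastUpdate;   // last frame this entry matched
    int   stale;         // longest run of frames without a match
};

struct codeBook {
    codeElement** entries;  // unordered array of code elements
    int numEntries;
    int t;                  // frames processed so far
};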
I wrote the following two loops to learn the background and segment the foreground. They rely on my limited understanding of the matrix structure inside the src image and use pointer arithmetic to traverse the pixels.
void foreground(IplImage* src, IplImage* dst, codeBook* c, int* minMod, int* maxMod){
    int height = src->height;
    int width = src->width;
    uchar* srcCurrent = (uchar*) src->imageData;  // walks the source pixels
    uchar* srcRowHead = srcCurrent;               // start of the current source row
    int srcChannels = src->nChannels;
    int srcRowWidth = src->widthStep;             // row stride in bytes, may include padding
    uchar* dstCurrent = (uchar*) dst->imageData;
    uchar* dstRowHead = dstCurrent;
    // dst has 1 channel
    int dstRowWidth = dst->widthStep;
    for(int row = 0; row < height; row++){
        for(int column = 0; column < width; column++){
            // c advances in lockstep with the pixels: one codebook per pixel
            (*dstCurrent) = find_foreground(srcCurrent, (*c), srcChannels, minMod, maxMod);
            dstCurrent++;
            c++;
            srcCurrent += srcChannels;
        }
        // jump both pointers past any row padding to the next row
        srcCurrent = srcRowHead + srcRowWidth;
        srcRowHead = srcCurrent;
        dstCurrent = dstRowHead + dstRowWidth;
        dstRowHead = dstCurrent;
    }
}
void background(IplImage* src, codeBook* c, unsigned* learnBounds){
    int height = src->height;
    int width = src->width;
    uchar* srcCurrent = (uchar*) src->imageData;
    uchar* srcRowHead = srcCurrent;
    int srcChannels = src->nChannels;
    int srcRowWidth = src->widthStep;
    for(int row = 0; row < height; row++){
        for(int column = 0; column < width; column++){
            // index this pixel's codebook: row * width + column
            update_codebook(srcCurrent, c[row * width + column], learnBounds, srcChannels);
            srcCurrent += srcChannels;
        }
        srcCurrent = srcRowHead + srcRowWidth;  // skip row padding
        srcRowHead = srcCurrent;
    }
}
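For completeness, this is roughly how I drive the two functions. This is a minimal sketch; the capture setup, frame counts, and threshold values are illustrative placeholders, not tuned numbers:

#include <opencv/cv.h>
#include <opencv/highgui.h>

int main(){
    CvCapture* cap = cvCreateCameraCapture(0);
    IplImage* frame = cvQueryFrame(cap);
    // one codebook per pixel, value-initialized to empty
    codeBook* books = new codeBook[frame->width * frame->height]();
    unsigned learnBounds[3] = {10, 10, 10};  // placeholder learning bounds
    int minMod[3] = {20, 20, 20};            // placeholder segmentation slack
    int maxMod[3] = {20, 20, 20};

    for(int i = 0; i < 100; i++){            // learning phase
        frame = cvQueryFrame(cap);
        background(frame, books, learnBounds);
    }

    IplImage* mask = cvCreateImage(cvGetSize(frame), IPL_DEPTH_8U, 1);
    cvNamedWindow("foreground");
    while((frame = cvQueryFrame(cap)) != NULL){  // segmentation phase
        foreground(frame, mask, books, minMod, maxMod);
        cvShowImage("foreground", mask);
        if(cvWaitKey(10) == 27) break;       // Esc quits
    }
    return 0;
}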
The program works, but is very sluggish. Is there something obvious that is slowing it down, or is this inherent in such a simple implementation? Is there anything I can do to speed it up? Each codebook is kept in no particular order, so processing a pixel takes time linear in the number of code elements: double the number of background samples and the per-pixel work roughly doubles, which is then multiplied by the number of pixels. As the implementation stands, though, I don't see any clear, logical way to sort the code element entries.
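The closest thing I can come up with is not a sort but a move-to-front heuristic: a background pixel usually re-matches the same entry frame after frame, so swapping the matched entry to the head should make the linear scan terminate almost immediately in the common case. A rough sketch of how find_foreground could do this, assuming the codeBook layout sketched above (again, my own naming, not the book's code):

uchar find_foreground(uchar* p, codeBook& cb, int channels,
                      int* minMod, int* maxMod){
    for(int i = 0; i < cb.numEntries; i++){
        codeElement* e = cb.entries[i];
        bool match = true;
        for(int ch = 0; ch < channels; ch++){
            // entry bounds widened by the per-channel modulation thresholds
            if(p[ch] < e->min[ch] - minMod[ch] ||
               p[ch] > e->max[ch] + maxMod[ch]){
                match = false;
                break;
            }
        }
        if(match){
            if(i > 0){                          // move-to-front: swap the
                cb.entries[i] = cb.entries[0];  // matched entry to the head so
                cb.entries[0] = e;              // the next frame finds it first
            }
            return 0;                           // matches the background model
        }
    }
    return 255;                                 // no entry matched: foreground
}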
I am aware that there is an example implementation of the same algorithm in the OpenCV samples. However, that structure seems much more complex. I am looking to understand the reasoning behind this method more than anything; I know I can just modify the sample for real-life applications.
Thanks
Comments (1)
Operating on every pixel in an image is going to be slow, regardless of how you implement it.
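That said, if you cannot reduce the per-pixel cost, you can reduce the number of pixels. One common trick is to run the detector on a downsampled frame and scale the mask back up. A rough sketch using the question's own functions; the wrapper name and buffers are placeholders, and the codebook array must be sized for the small image:

// Detect on a half-resolution copy: 1/4 the pixels, 1/4 the codebooks.
void foreground_downsampled(IplImage* src, IplImage* mask, codeBook* books,
                            int* minMod, int* maxMod){
    IplImage* small = cvCreateImage(cvSize(src->width / 2, src->height / 2),
                                    src->depth, src->nChannels);
    IplImage* smallMask = cvCreateImage(cvGetSize(small), IPL_DEPTH_8U, 1);
    cvResize(src, small, CV_INTER_LINEAR);
    foreground(small, smallMask, books, minMod, maxMod);
    cvResize(smallMask, mask, CV_INTER_NN);  // nearest-neighbor keeps the mask binary
    cvReleaseImage(&small);
    cvReleaseImage(&smallMask);
}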