Optimizing an image-subtraction segmentation algorithm
For a project in OpenCV I would like to segment moving objects as well as possible, with minimal noise of course.
For this I would like to use an image-subtraction algorithm.
I already have a running program, but so far I haven't found a way to get good enough results.
I already have the following (grayscale) images given:
IplImage* grayScale;
IplImage* lastFrame;
IplImage* secondLastFrame;
IplImage* thirdLastFrame;
So far I have tried to subtract the current frame from the last frame with cvSub();
or with cvAbsDiff();
to get the moving parts.
But unfortunately I still get a lot of noise there (e.g. due to slightly moving trees when it's windy), and if the moving object is quite big and has a homogeneous color (say, a person in a white or black shirt), the subtraction only detects the changes on the left and right side of the person, not on the body itself, so one object is sometimes detected as two objects...
cvAbsDiff(this->lastFrame, grayScale, output);          // per-pixel |lastFrame - current|
cvThreshold(output, output, 10, 250, CV_THRESH_BINARY); // binarize the difference image
cvErode(output, output, NULL, 2);                       // shrink away small noise specks
cvDilate(output, output, NULL, 2);                      // grow the surviving blobs back
To get rid of this noise I tried eroding and dilating the images with cvErode()
and cvDilate(),
but this is quite slow, and if the moving objects on screen are small, the erosion removes too much of the object, so after dilating I don't always get a good result, or I get split-up objects.
After this I run cvFindContours()
to get the contours, check their size, and if they fit, draw a rectangle around the moving objects. But the results are poor, because an object is often split into several rectangles due to the bad segmentation.
A friend now told me I might try using more than two consecutive frames for the subtraction, since this might already reduce the noise... but I don't really know what he meant by that, or how I should add/subtract the frames to get an image that is almost noise-free and shows big enough object blobs.
Can anybody help me with that? How can I use more than two frames to get an image with as little noise as possible but with big enough blobs for the moving objects? I would be thankful for any tips...
ADDITIONS:
I have uploaded a current video right here: http://temp.tinytall.de/ Maybe somebody wants to try it there...
This is a frame from it: The left image shows my results from cvFindContours() and the right one is the segmented image on which I then try to find the contours...
So for large objects it works fine if they are moving fast enough... e.g. the bicycle... but for walking people it doesn't always get a good result... Any ideas?
3 Answers
Given three adjacent frames A, B, C you can get two frame differences X and Y. By combining X and Y (through, e.g., thresholding and then a logical AND operation) you can reduce the effect of the noise. An unwanted side effect is that the motion-detected area will be slightly smaller than ideal (the AND operation shrinks the area). Since image-sequence motion estimation has been well researched for decades, you may want to read about more sophisticated methods of motion detection, e.g. working with motion vector fields. Google Scholar is your friend in that case.
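A minimal sketch of this double-difference idea on raw 8-bit grayscale buffers (plain C, no OpenCV; function name, threshold, and frame data are just illustrative):

```c
#include <stdlib.h>

/* Double-difference: X = |B-A| > thresh, Y = |C-B| > thresh, mask = X AND Y.
 * Only pixels that changed in BOTH consecutive differences survive, which
 * suppresses one-frame noise spikes (sensor noise, flickering leaves). */
void double_difference(const unsigned char *a, const unsigned char *b,
                       const unsigned char *c, unsigned char *mask,
                       int n, int thresh)
{
    for (int i = 0; i < n; i++) {
        int x = abs((int)b[i] - (int)a[i]) > thresh; /* difference A -> B */
        int y = abs((int)c[i] - (int)b[i]) > thresh; /* difference B -> C */
        mask[i] = (x && y) ? 255 : 0;                /* logical AND */
    }
}
```

With the C API you already use, this corresponds to two cvAbsDiff() + cvThreshold() passes combined with cvAnd(). The resulting mask marks motion around the middle frame B.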
It seems like you have a fixed background.
One possible solution is to let the computer learn the background, e.g. by taking an average over time. Then calculate the difference between the average image and the current one. Differences are likely to originate from moving objects.
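A sketch of such a running-average background model on raw 8-bit buffers (plain C; the `alpha` and `thresh` values are illustrative choices, not prescribed ones). In the old C API this maps to cvRunningAvg() on a float accumulator followed by cvAbsDiff() and cvThreshold().

```c
/* Update a float background model with exponential averaging:
 *   bg = (1 - alpha) * bg + alpha * frame
 * then mark pixels where |frame - bg| exceeds thresh as foreground.
 * Small alpha = slow adaptation: brief motion stands out, while slow
 * lighting changes get absorbed into the background. */
void update_and_segment(float *bg, const unsigned char *frame,
                        unsigned char *fg, int n, float alpha, float thresh)
{
    for (int i = 0; i < n; i++) {
        bg[i] = (1.0f - alpha) * bg[i] + alpha * (float)frame[i];
        float d = (float)frame[i] - bg[i];
        fg[i] = (d > thresh || d < -thresh) ? 255 : 0;
    }
}
```

Initialize the background from the first frame (or an average of the first few), then call this once per frame. Swaying trees eventually average into the background, which addresses the wind noise mentioned in the question.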
Well, it is a very tricky subject. Motion estimation is quite complex, so try to find good literature and avoid inventing your own algorithms :)
My suggestions are: