OpenCV distance transform outputting an image exactly like the input image
I am doing some detection work using OpenCV, and I need to use the distance transform, except that the distance transform function in OpenCV gives me an image that is exactly the same as the image I use as source. Does anyone know what I am doing wrong? Here is the relevant portion of my code:
cvSetData(depthImage, m_rgbWk, depthImage->widthStep);
//gotten openCV image in "depthImage"
IplImage *single_channel_depthImage = cvCreateImage(cvSize(320, 240), 8, 1);
cvSplit(depthImage, single_channel_depthImage, NULL, NULL, NULL);
//smoothing
IplImage *smoothed_image = cvCreateImage(cvSize(320, 240), 8, 1);
cvSmooth(single_channel_depthImage, smoothed_image, CV_MEDIAN, 9, 9, 0, 0);
//do canny edge detector
IplImage *edges_image = cvCreateImage(cvSize(320, 240), 8, 1);
cvCanny(smoothed_image, edges_image, 100, 200);
//invert values
IplImage *inverted_edges_image = cvCreateImage(cvSize(320, 240), 8, 1);
cvNot(edges_image, inverted_edges_image);
//calculate the distance transform
IplImage *distance_image = cvCreateImage(cvSize(320, 240), IPL_DEPTH_32F, 1);
cvZero(distance_image);
cvDistTransform(inverted_edges_image, distance_image, CV_DIST_L2, CV_DIST_MASK_PRECISE, NULL, NULL);
In a nutshell, I grab the image from the Kinect, turn it into a one-channel image, smooth it, run the Canny edge detector, invert the values, and then compute the distance transform. But the transformed image looks exactly the same as the input image. What's wrong?
Thanks!
Answers (4)
I believe the key here is that they look the same. Here is a small program I wrote to show the difference:
In the non-normalized image, you see this:
which doesn't really look like it changed anything. But the distance steps are very small compared to the overall range of values [0, 255], and because imshow converts the image from 32-bit float to 8 bits for display, we can't see the differences. So let's normalize it... now we get this:
The values themselves should be correct, but when displayed you will need to normalize the image to see the difference.
EDIT:
Here is a small 10x10 sample from the upper-left corner of the dist matrix showing that the values are in fact different:
I just figured this one out.
The OpenCV distanceTransform function computes the distance to the nearest zero pixel, so it expects your edges image to be negated (edges as zeros, background non-zero).
All you need to do is negate your edges image:
You can print these values using this code before the normalize function:
Mat formats:
normalize(Mat_dist, Mat_norm, 0, 255, NORM_MINMAX, CV_8U);
If you want to visualize the result, you need to scale the normalization to 0 ... 255 and not to 0 ... 1, or everything will seem black. Using imshow() on an image scaled to 0 ... 1 will work, but may cause problems in the next processing steps. At least it did in my case.