2D point set matching
What is the best way to match the scan (taken photo) point set to the template point set (blue, green, red, pink circles in the images)?
I am using OpenCV/C++. Maybe some kind of ICP algorithm? I would like to wrap the scan image to the template image!
template point set:
scan point set:
4 Answers
If the object is reasonably rigid and aligned, simple auto-correlation would do the trick.
If not, I would use RANSAC to estimate the transformation between the subject and the template (it seems that you have the feature points). Please provide some details on the problem.
Edit:
RANSAC (Random Sample Consensus) could be used in your case. Think of the unnecessary points in your template as noise (false features detected by a feature detector) - they are the outliers. RANSAC can handle outliers because it randomly chooses a small subset of feature points (the minimal number needed to initiate your model), initiates the model, and calculates how well that model matches the given data (how many of the other points in the template correspond to your other points). If you pick a wrong subset, this value will be low and you will drop the model. If you pick the right subset, it will be high and you can then refine your match with an LMS algorithm.
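As an illustration only (not the answerer's code), here is a minimal OpenCV/C++ sketch of that idea: given putative scan-to-template correspondences, a RANSAC-based estimator picks the consistent subset and rejects the outlier pairs. The point coordinates below are made-up placeholders.

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <iostream>
#include <vector>

int main()
{
    // Hypothetical putative correspondences: scanPts[i] is assumed to match templPts[i].
    std::vector<cv::Point2f> scanPts  = { {10, 10}, {200, 15}, {205, 180}, {12, 175}, {90, 90} };
    std::vector<cv::Point2f> templPts = { {20, 25}, {210, 30}, {215, 195}, {22, 190}, {400, 400} }; // last pair is a gross outlier

    std::vector<uchar> inlierMask;
    // RANSAC repeatedly fits a minimal model to a random subset of pairs and keeps
    // the model that the most correspondences agree with; the bad pair is dropped.
    cv::Mat T = cv::estimateAffinePartial2D(scanPts, templPts, inlierMask,
                                            cv::RANSAC, 3.0 /* reprojection threshold, px */);

    std::cout << "Estimated 2x3 transform:\n" << T << "\n";
    for (std::size_t i = 0; i < inlierMask.size(); ++i)
        std::cout << "pair " << i << ": " << (inlierMask[i] ? "inlier" : "outlier") << "\n";
    return 0;
}
```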
Do you have to match the red rectangles? The original image contains four black rectangles in the corners that seem to be made for matching. I can reliably find them with 4 lines of Mathematica code:
This takes max(R,G,B) for each pixel, i.e. it filters out the red and yellow print (more or less). The result looks like this:
Then I just use a LoG filter to find the dark spots and look for local maxima in the result image.
Result:
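The answer's Mathematica code and result images are not reproduced here. Below is a hedged OpenCV/C++ sketch of the same pipeline - max(R,G,B) per pixel, a LoG-style filter, then local maxima - with the input file name, blur sigma, and threshold as placeholder assumptions that would need tuning.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat bgr = cv::imread("scan.jpg");   // hypothetical input file
    if (bgr.empty()) return 1;

    // max(R,G,B) per pixel: the dark corner marks stay dark while the
    // red/yellow print is (more or less) washed out. OpenCV stores B,G,R.
    std::vector<cv::Mat> ch;
    cv::split(bgr, ch);
    cv::Mat maxRG, maxRGB;
    cv::max(ch[2], ch[1], maxRG);    // max(R, G)
    cv::max(maxRG, ch[0], maxRGB);   // max(max(R, G), B)

    // LoG approximation: Gaussian blur followed by a Laplacian. Dark blobs on a
    // bright background give strong positive responses at their centres.
    cv::Mat blurred, logResp;
    cv::GaussianBlur(maxRGB, blurred, cv::Size(0, 0), 5.0);
    cv::Laplacian(blurred, logResp, CV_32F, 5);

    // Local maxima: keep pixels that equal the maximum of their neighbourhood
    // and exceed a crude threshold (the value 200 is a guess).
    cv::Mat dilated;
    cv::dilate(logResp, dilated, cv::Mat(), cv::Point(-1, -1), 7);
    cv::Mat isPeak = (logResp == dilated);
    cv::Mat strong = (logResp > 200.0f);
    cv::Mat peaks  = isPeak & strong;

    std::vector<cv::Point> marks;
    cv::findNonZero(peaks, marks);
    for (const auto& p : marks) std::cout << p << "\n";   // candidate corner marks
    return 0;
}
```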
Have you looked at OpenCV's descriptor_extractor_matcher.cpp sample? This sample uses RANSAC to detect the homography between the two input images. I assume when you say wrap you actually mean warp? If you would like to warp the image with the homography matrix you detect, have a look at the warpPerspective function. Finally, here are some good tutorials using the different feature detectors in OpenCV.
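As a small hedged sketch (not part of the original answer) of that last step: once you have matched point pairs, cv::findHomography with RANSAC gives the homography and cv::warpPerspective applies it. The file names and coordinates below are placeholders.

```cpp
#include <opencv2/calib3d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

int main()
{
    cv::Mat scan  = cv::imread("scan.jpg");        // hypothetical file names
    cv::Mat templ = cv::imread("template.png");
    if (scan.empty() || templ.empty()) return 1;

    // Matched point pairs obtained elsewhere: scanPts[i] corresponds to templPts[i].
    std::vector<cv::Point2f> scanPts  = { {34, 40}, {980, 42}, {985, 1350}, {30, 1345} };
    std::vector<cv::Point2f> templPts = { {50, 50}, {950, 50}, {950, 1300}, {50, 1300} };

    // Robustly estimate the homography mapping scan coordinates to template coordinates.
    cv::Mat H = cv::findHomography(scanPts, templPts, cv::RANSAC, 3.0);

    // Warp the scan into the template's coordinate frame.
    cv::Mat warped;
    cv::warpPerspective(scan, warped, H, templ.size());
    cv::imwrite("scan_warped_to_template.png", warped);
    return 0;
}
```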
EDIT :
You may not have SURF features, but you certainly have feature points with different classes. Feature-based matching is generally split into two phases: feature detection (which you have already done), and extraction, which you need for matching. So, you might try converting your features into KeyPoints and then doing the feature extraction and matching. Here is a little code snippet of how you might go about this:
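The answer's original snippet is not reproduced here; the following is a hedged OpenCV sketch of the idea, using ORB descriptors instead of SURF (which requires opencv_contrib). The image names, circle centres, and the KeyPoint size/response values are placeholder assumptions.

```cpp
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main()
{
    cv::Mat templGray = cv::imread("template.png", cv::IMREAD_GRAYSCALE);  // hypothetical files
    cv::Mat scanGray  = cv::imread("scan.jpg",     cv::IMREAD_GRAYSCALE);
    if (templGray.empty() || scanGray.empty()) return 1;

    // Your own detected circle centres, wrapped as KeyPoints. The size (31 px)
    // and response are guesses; tune them so the extractor keeps the points.
    std::vector<cv::Point2f> templCentres = { {50, 50}, {950, 50}, {950, 1300} };  // placeholders
    std::vector<cv::Point2f> scanCentres  = { {34, 40}, {980, 42}, {985, 1350} };

    auto toKeyPoints = [](const std::vector<cv::Point2f>& pts) {
        std::vector<cv::KeyPoint> kps;
        for (const auto& p : pts)
            kps.emplace_back(p, 31.f /*size*/, -1.f /*angle*/, 1000.f /*response*/);
        return kps;
    };
    std::vector<cv::KeyPoint> templKps = toKeyPoints(templCentres);
    std::vector<cv::KeyPoint> scanKps  = toKeyPoints(scanCentres);

    // Compute descriptors at the supplied keypoints, then brute-force match them.
    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    cv::Mat templDesc, scanDesc;
    orb->compute(templGray, templKps, templDesc);
    orb->compute(scanGray,  scanKps,  scanDesc);
    if (templDesc.empty() || scanDesc.empty()) return 1;  // keypoints near borders may be dropped

    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(scanDesc, templDesc, matches);
    // matches[i] pairs scanKps[matches[i].queryIdx] with templKps[matches[i].trainIdx];
    // feed those pairs to cv::findHomography(..., cv::RANSAC) as above.
    return 0;
}
```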
You may need to tune the response strength such that it doesn't get thresholded out by the extraction phase. But, hopefully that's illustrative of what you might try to do.
Follow these steps:
https://stackoverflow.com/a/18091472/457687