2D point set matching

Posted on 2024-12-20 04:54:23


What is the best way to match the scan (taken photo) point sets to the template point set (blue,green,red,pink circles in the images)?
I am using opencv/c++. Maybe some kind of the ICP algorithm? I would like to wrap the scan image to the template image!

template point set:
template image

scan point set:
scan image


一直在等你来 2024-12-27 04:54:23


If the object is reasonably rigid and aligned, simple auto-correlation would do the trick.
If not, I would use RANSAC to estimate the transformation between the subject and the template (it seems that you have the feature points). Please provide some details on the problem.

Edit:
RANSAC (Random Sample Consensus) could be used in your case. Think of the unnecessary points in your template as noise (false features detected by a feature detector) - they are the outliers. RANSAC can handle outliers because it randomly chooses a small subset of feature points (the minimal number that can instantiate your model), instantiates the model, and calculates how well the model matches the given data (how many of the other points in the template correspond to your other points). If you choose a wrong subset, this value will be low and you will drop the model. If you choose a right subset, it will be high and you can then refine your match with an LMS algorithm.
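The RANSAC loop described above can be sketched without any library, assuming the simplest possible model (a pure 2D translation, so a single correspondence instantiates it). A real pipeline would fit a homography or affine model instead; this is only meant to make the sample/score/keep-best structure concrete:

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

struct Pt { double x, y; };

// Minimal RANSAC sketch: the model is a pure translation (dx, dy), so one
// randomly chosen correspondence is enough to instantiate a hypothesis.
// Each hypothesis is scored by counting inliers; the best one is kept.
Pt ransacTranslation(const std::vector<Pt>& src, const std::vector<Pt>& dst,
                     int iterations, double inlierTol)
{
    Pt best{0, 0};
    int bestInliers = -1;
    for (int it = 0; it < iterations; ++it) {
        size_t i = std::rand() % src.size();       // random minimal sample
        double dx = dst[i].x - src[i].x;
        double dy = dst[i].y - src[i].y;
        int inliers = 0;
        for (size_t j = 0; j < src.size(); ++j) {  // score the hypothesis
            double ex = src[j].x + dx - dst[j].x;
            double ey = src[j].y + dy - dst[j].y;
            if (std::sqrt(ex * ex + ey * ey) < inlierTol)
                ++inliers;
        }
        if (inliers > bestInliers) {               // keep the best model
            bestInliers = inliers;
            best = {dx, dy};
        }
    }
    return best;
}
```

A final least-squares refit over the winning inlier set (the LMS step mentioned above) would follow; it is omitted here for brevity.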

撩发小公举 2024-12-27 04:54:23


Do you have to match the red rectangles? The original image contains four black rectangles in the corners that seem to be made for matching. I can reliably find them with 4 lines of Mathematica code:

lotto = [source image]
lottoBW = Image[Map[Max, ImageData[lotto], {2}]]

This takes max(R,G,B) for each pixel, i.e. it filters out the red and yellow print (more or less). The result looks like this:

bw filter result

Then I just use a LoG filter to find the dark spots and look for local maxima in the result image:

lottoBWG = ImageAdjust[LaplacianGaussianFilter[lottoBW, 20]]
MaxDetect[lottoBWG, 0.5]

Result:
result image
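The max(R,G,B) preprocessing in the Mathematica one-liner above is easy to reproduce elsewhere. A small C++ sketch over a plain interleaved RGB buffer (a hypothetical helper, not the answerer's code; with OpenCV you could equally split channels and take cv::max):

```cpp
#include <algorithm>
#include <vector>

// Collapse an interleaved 8-bit RGB buffer to one channel by taking
// max(R, G, B) per pixel. Saturated red/yellow print keeps a bright red
// channel, so it is suppressed (mapped toward white), while the dark
// corner marks stay dark.
std::vector<unsigned char> maxChannel(const std::vector<unsigned char>& rgb)
{
    std::vector<unsigned char> out(rgb.size() / 3);
    for (size_t i = 0; i < out.size(); ++i)
        out[i] = std::max({rgb[3 * i], rgb[3 * i + 1], rgb[3 * i + 2]});
    return out;
}
```

The LoG filtering and local-maximum detection would then run on this single-channel image.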

唔猫 2024-12-27 04:54:23


Have you looked at OpenCV's descriptor_extractor_matcher.cpp sample? This sample uses RANSAC to detect the homography between the two input images. I assume when you say wrap you actually mean warp? If you would like to warp the image with the homography matrix you detect, have a look at the warpPerspective function. Finally, here are some good tutorials using the different feature detectors in OpenCV.
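For intuition about what warpPerspective does with the detected homography: it applies a 3x3 matrix to every pixel position in homogeneous coordinates. Mapping a single point looks like this (plain C++ sketch, no OpenCV dependency; in OpenCV the same per-point mapping is cv::perspectiveTransform):

```cpp
struct P { double x, y; };

// Apply a 3x3 homography H (row-major) to a 2D point: lift to homogeneous
// coordinates (x, y, 1), multiply by H, then divide by the last component.
P applyHomography(const double H[9], P p)
{
    double u = H[0] * p.x + H[1] * p.y + H[2];
    double v = H[3] * p.x + H[4] * p.y + H[5];
    double w = H[6] * p.x + H[7] * p.y + H[8];
    return {u / w, v / w};
}
```

cv::warpPerspective performs this mapping (using the inverse matrix) for every output pixel and interpolates.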

EDIT :
You may not have SURF features, but you certainly have feature points with different classes. Feature based matching is generally split into two phases: feature detection (which you have already done), and extraction which you need for matching. So, you might try converting your features into a KeyPoint and then doing the feature extraction and matching. Here is a little code snippet of how you might go about this:

const int RED_TYPE = 1;
const int GREEN_TYPE = 2;
const int BLUE_TYPE = 3;
const int PURPLE_TYPE = 4;

struct BenFeature
{
    Point2f pt;
    int classId;
};

vector<BenFeature> benFeatures;

// Detect the features as you normally would in addition setting the class ID

vector<KeyPoint> keypoints;
for (size_t i = 0; i < benFeatures.size(); i++)
{
    BenFeature bf = benFeatures[i];
    KeyPoint kp(bf.pt,
                10.0f, // feature neighborhood diameter (you'll probably need to tune it)
                -1.0f, // (angle) -1 == not applicable
                500.0f, // feature response strength (set to the same unless you have a metric describing strength)
                1, // octave level (ditto as above)
                bf.classId // RED, GREEN, BLUE, or PURPLE
                );
    keypoints.push_back(kp);
}

// now proceed with extraction and matching...

You may need to tune the response strength such that it doesn't get thresholded out by the extraction phase. But, hopefully that's illustrative of what you might try to do.

深居我梦 2024-12-27 04:54:23


Follow these steps:

  1. Match points or features in the two images; this will determine your warping;
  2. Determine what transformation you are looking for in your warping. The most general would be a homography (see cv::findHomography()) and the least general would be a simple translation (use cv::matchTemplate()). The intermediate case is translation along x, y plus rotation. For this I wrote a fast function that is better than a homography since it uses fewer degrees of freedom while still optimizing the right metric (squared differences in coordinates):
    https://stackoverflow.com/a/18091472/457687
  3. If you think your matches have a lot of outliers use RANSAC on top of your step 1. You basically need to randomly select a minimal set of points required for finding parameters, solve, determine inliers, solve again using all inliers, and then iterate trying to improve your current solution (increase the number of inliers, reduce error, or both). See Wikipedia for RANSAC algorithm: http://en.wikipedia.org/wiki/Ransac
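The intermediate case in step 2 (translation plus rotation, least squares on coordinates) has a closed-form solution, the 2D Procrustes/Kabsch alignment. A minimal sketch, assuming the correspondences from step 1 are already paired index-to-index:

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Least-squares rigid alignment (rotation + translation, no scale) of
// paired 2D point sets: center both sets on their centroids, recover the
// rotation angle from the cross/dot sums, then recover the translation.
// Minimizes the sum of squared coordinate differences of R*a + t vs b.
void rigidFit(const std::vector<Pt>& a, const std::vector<Pt>& b,
              double& theta, double& tx, double& ty)
{
    size_t n = a.size();
    double cax = 0, cay = 0, cbx = 0, cby = 0;
    for (size_t i = 0; i < n; ++i) {           // centroids
        cax += a[i].x; cay += a[i].y;
        cbx += b[i].x; cby += b[i].y;
    }
    cax /= n; cay /= n; cbx /= n; cby /= n;
    double s = 0, c = 0;                       // cross terms for the angle
    for (size_t i = 0; i < n; ++i) {
        double ax = a[i].x - cax, ay = a[i].y - cay;
        double bx = b[i].x - cbx, by = b[i].y - cby;
        c += ax * bx + ay * by;
        s += ax * by - ay * bx;
    }
    theta = std::atan2(s, c);
    tx = cbx - (std::cos(theta) * cax - std::sin(theta) * cay);
    ty = cby - (std::sin(theta) * cax + std::cos(theta) * cay);
}
```

With noisy matches, run this inside the RANSAC loop from step 3 rather than once over all pairs.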