OpenCV 2.2 SURF feature matching problem

Posted 2024-11-10 03:50:49

Matching with nothing in the top right corner

I have modified the OpenCV demo application "matching_to_many_images.cpp" to match a query image (left) against frames from the webcam (right). What has gone wrong with the top right corner of the first image?

We think this is related to another problem we have. We begin with an empty database and only add unique features (features that do not match any feature already in our database), but after adding only three features, we get a match for every new feature....

We are using:
SurfFeatureDetector surfFeatureDetector(400,3,4);
SurfDescriptorExtractor surfDescriptorExtractor;
FlannBasedMatcher flannDescriptorMatcher;

Complete code can be found at: http://www.copypastecode.com/71973/
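
For context, here is a minimal sketch (not the poster's actual code, which is only at the paste link above) of how these three classes are typically chained in an OpenCV 2.x detect/describe/match step; the matchPair wrapper and the image variable names are illustrative:

#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>

void matchPair(const cv::Mat& queryImage, const cv::Mat& frame)
{
    cv::SurfFeatureDetector detector(400, 3, 4);   // hessianThreshold, octaves, octaveLayers
    cv::SurfDescriptorExtractor extractor;
    cv::FlannBasedMatcher matcher;

    // Detect keypoints in both images.
    std::vector<cv::KeyPoint> queryKeypoints, frameKeypoints;
    detector.detect(queryImage, queryKeypoints);
    detector.detect(frame, frameKeypoints);

    // Compute SURF descriptors for the detected keypoints.
    cv::Mat queryDescriptors, frameDescriptors;
    extractor.compute(queryImage, queryKeypoints, queryDescriptors);
    extractor.compute(frame, frameKeypoints, frameDescriptors);

    // Match query descriptors against the frame descriptors (approximate NN via FLANN).
    std::vector<cv::DMatch> matches;
    matcher.match(queryDescriptors, frameDescriptors, matches);

    // Draw the query image (left) next to the frame (right) with match lines.
    cv::Mat result;
    cv::drawMatches(queryImage, queryKeypoints, frame, frameKeypoints, matches, result);
    cv::imshow("matches", result);
    cv::waitKey(1);   // lets the window refresh inside a capture loop
}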

Comments (3)

尐籹人 2024-11-17 03:50:49

I think this has to do with the border keypoints. The detector detects keypoints near the image border, but for the SURF descriptor to return consistent values it needs pixel data in a block around each keypoint, which is not available for points close to the border. You can use the following snippet to remove border points after keypoints are detected but before descriptors are computed. I suggest using a borderSize of 20 or more.

#include <algorithm>
#include <vector>
#include <boost/cstdint.hpp>
#include <opencv2/core/core.hpp>

// Drop keypoints that lie within borderSize pixels of the image edge;
// RoiPredicatePic (defined below) flags the points outside the inner ROI.
void removeBorderKeypoints( std::vector<cv::KeyPoint>& keypoints, const cv::Size imageSize, const boost::int32_t borderSize )
{
    if( borderSize > 0 )
    {
        keypoints.erase( std::remove_if( keypoints.begin(), keypoints.end(),
                                         RoiPredicatePic( (float)borderSize, (float)borderSize,
                                                          (float)(imageSize.width - borderSize),
                                                          (float)(imageSize.height - borderSize) ) ),
                         keypoints.end() );
    }
}

Where RoiPredicatePic is implemented as:

// Predicate returning true for keypoints outside the inner rectangle [minX, maxX) x [minY, maxY).
struct RoiPredicatePic
{
    RoiPredicatePic(float _minX, float _minY, float _maxX, float _maxY)
    : minX(_minX), minY(_minY), maxX(_maxX), maxY(_maxY)
    {}

    bool operator()( const cv::KeyPoint& keyPt) const
    {
        cv::Point2f pt = keyPt.pt;
        return (pt.x < minX) || (pt.x >= maxX) || (pt.y < minY) || (pt.y >= maxY);
    }

    float minX, minY, maxX, maxY;
};
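
For clarity, here is a sketch of where such a call would sit in the pipeline, using the detector and extractor names from the question; the variable image stands in for either the query image or the webcam frame:

// Detect, then discard border keypoints, then compute descriptors.
std::vector<cv::KeyPoint> keypoints;
surfFeatureDetector.detect(image, keypoints);

removeBorderKeypoints(keypoints, image.size(), 20);   // borderSize of 20 or more, as suggested above

cv::Mat descriptors;
surfDescriptorExtractor.compute(image, keypoints, descriptors);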

Also, approximate nearest neighbor indexing is not the best way to match features between pairs of images. I would suggest you try other, simpler matchers.
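
For example, a sketch of swapping the FLANN-based matcher for an exact brute-force matcher; in OpenCV 2.2 this is the templated BruteForceMatcher class (later 2.x releases also expose cv::BFMatcher), and the descriptor variables are assumed to come from the extractor as in the sketch under the question:

// Exact nearest-neighbour matching with Euclidean (L2) distance instead of FLANN's approximate index.
cv::BruteForceMatcher< cv::L2<float> > matcher;

std::vector<cv::DMatch> matches;
matcher.match(queryDescriptors, frameDescriptors, matches);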

若言繁花未落 2024-11-17 03:50:49

Your approach works flawlessly, but it shows wrong results because the drawMatches function is called incorrectly.

Your incorrect call was something like this:

drawMatches(image2, image2Keypoints, image1, image1Keypoints, matches, result);

The correct call should be:

drawMatches(image1, image1Keypoints, image2, image2Keypoints, matches, result);

旧伤慢歌 2024-11-17 03:50:49

I faced the same problem. Surprisingly, the solution has nothing to do with border points or the KNN matcher. It just needs a different matching strategy to filter out "good matches" from the plethora of matches.

Use a 2-NN search with the following condition:

if distance(1st match) < 0.6 * distance(2nd match), then the 1st match is a "good match".

Filter out all the matches that do not satisfy the above condition and call drawMatches for only the "good matches". Voila!
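
A sketch of that ratio test using the FLANN matcher's knnMatch call, with the matcher name taken from the question and the image/keypoint names from the previous answer; the descriptor variables are illustrative:

// Ask for the two nearest neighbours of every query descriptor.
std::vector<std::vector<cv::DMatch> > knnMatches;
flannDescriptorMatcher.knnMatch(queryDescriptors, trainDescriptors, knnMatches, 2);

// Keep a match only if it is clearly better than the runner-up (Lowe-style ratio test).
std::vector<cv::DMatch> goodMatches;
for (size_t i = 0; i < knnMatches.size(); ++i)
{
    if (knnMatches[i].size() == 2 &&
        knnMatches[i][0].distance < 0.6f * knnMatches[i][1].distance)
    {
        goodMatches.push_back(knnMatches[i][0]);
    }
}

cv::drawMatches(image1, image1Keypoints, image2, image2Keypoints, goodMatches, result);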
