Matching with OpenCV descriptors and findFundamentalMat
I posted earlier with a problem regarding the same program but received no answers. I've since corrected the issue I was experiencing at that point, only to face a new problem.
Basically I am auto correcting stereo image pairs for rotation and translation using an uncalibrated approach. I use feature detection algorithms such as SURF to find points in two images, a left and right stereo image pair, and then using SURF again I match the points between the two images. I then need to use these matched points to find the fundamental matrix which I can use to correct the images.
My issue is this. My matching points are stored in a single vector of descriptor matches, which is then filtered for outliers. findFundamentalMat takes as input two separate arrays of matching points. I don't know how to convert from my vector to my two separate arrays.
cout << "< Matching descriptors..." << endl;
vector<DMatch> filteredMatches;
crossCheckMatching( descriptorMatcher, descriptors1, descriptors2, filteredMatches, 1 );
cout << filteredMatches.size() << " matches" << endl << ">" << endl;
The vector is created.
void crossCheckMatching( Ptr<DescriptorMatcher>& descriptorMatcher,
                         const Mat& descriptors1, const Mat& descriptors2,
                         vector<DMatch>& filteredMatches12, int knn=1 )
{
    filteredMatches12.clear();
    vector<vector<DMatch> > matches12, matches21;
    descriptorMatcher->knnMatch( descriptors1, descriptors2, matches12, knn );
    descriptorMatcher->knnMatch( descriptors2, descriptors1, matches21, knn );
    for( size_t m = 0; m < matches12.size(); m++ )
    {
        bool findCrossCheck = false;
        for( size_t fk = 0; fk < matches12[m].size(); fk++ )
        {
            DMatch forward = matches12[m][fk];
            for( size_t bk = 0; bk < matches21[forward.trainIdx].size(); bk++ )
            {
                DMatch backward = matches21[forward.trainIdx][bk];
                if( backward.trainIdx == forward.queryIdx )
                {
                    filteredMatches12.push_back(forward);
                    findCrossCheck = true;
                    break;
                }
            }
            if( findCrossCheck ) break;
        }
    }
}
The matches are cross checked and stored within filteredMatches.
cout << "< Computing homography (RANSAC)..." << endl;
vector<Point2f> points1; KeyPoint::convert(keypoints1, points1, queryIdxs);
vector<Point2f> points2; KeyPoint::convert(keypoints2, points2, trainIdxs);
H12 = findHomography( Mat(points1), Mat(points2), CV_RANSAC, ransacReprojThreshold );
cout << ">" << endl;
The homography is found based on a threshold which is set at run time in the command prompt.
//Mat drawImg;
if( !H12.empty() ) // filter outliers
{
    vector<char> matchesMask( filteredMatches.size(), 0 );
    vector<Point2f> points1; KeyPoint::convert(keypoints1, points1, queryIdxs);
    vector<Point2f> points2; KeyPoint::convert(keypoints2, points2, trainIdxs);
    Mat points1t; perspectiveTransform(Mat(points1), points1t, H12);
    for( size_t i1 = 0; i1 < points1.size(); i1++ )
    {
        if( norm(points2[i1] - points1t.at<Point2f>((int)i1,0)) < 4 ) // inlier
            matchesMask[i1] = 1;
    }
    /* draw inliers
    drawMatches( leftImg, keypoints1, rightImg, keypoints2, filteredMatches, drawImg, CV_RGB(0, 255, 0), CV_RGB(0, 0, 255), matchesMask, 2 ); */
}
The matches are further filtered to remove outliers.
...and then what? How do I split what's left into two Mats of matching points to use in findFundamentalMat?
EDIT
I have now used my mask to make a finalMatches vector as such (this replaces the final filtering procedure above):
Mat drawImg;
if( !H12.empty() ) // filter outliers
{
    size_t i1;
    vector<char> matchesMask( filteredMatches.size(), 0 );
    vector<Point2f> points1; KeyPoint::convert(keypoints1, points1, queryIdxs);
    vector<Point2f> points2; KeyPoint::convert(keypoints2, points2, trainIdxs);
    Mat points1t; perspectiveTransform(Mat(points1), points1t, H12);
    for( i1 = 0; i1 < points1.size(); i1++ )
    {
        if( norm(points2[i1] - points1t.at<Point2f>((int)i1,0)) < 4 ) // inlier
            matchesMask[i1] = 1;
    }
    for( i1 = 0; i1 < filteredMatches.size(); i1++ )
    {
        if ( matchesMask[i1] == 1 )
            finalMatches.push_back(filteredMatches[i1]);
    }
    namedWindow("matches", 1);
    // draw inliers
    drawMatches( leftImg, keypoints1, rightImg, keypoints2, filteredMatches, drawImg, CV_RGB(0, 255, 0), CV_RGB(0, 0, 255), matchesMask, 2 );
    imshow("matches", drawImg);
}
However I still do not know how to split my finalMatches DMatch vector into the Mat arrays which I need to feed into findFundamentalMat, please help!!!
EDIT
Working (sort of) solution:
Mat drawImg;
vector<Point2f> finalPoints1;
vector<Point2f> finalPoints2;
if( !H12.empty() ) // filter outliers
{
    size_t i, idx;
    vector<char> matchesMask( filteredMatches.size(), 0 );
    vector<Point2f> points1; KeyPoint::convert(keypoints1, points1, queryIdxs);
    vector<Point2f> points2; KeyPoint::convert(keypoints2, points2, trainIdxs);
    Mat points1t; perspectiveTransform(Mat(points1), points1t, H12);
    for( i = 0; i < points1.size(); i++ )
    {
        if( norm(points2[i] - points1t.at<Point2f>((int)i,0)) < 4 ) // inlier
            matchesMask[i] = 1;
    }
    for ( idx = 0; idx < filteredMatches.size(); idx++)
    {
        if ( matchesMask[idx] == 1 ) {
            finalPoints1.push_back(keypoints1[filteredMatches[idx].queryIdx].pt);
            finalPoints2.push_back(keypoints2[filteredMatches[idx].trainIdx].pt);
        }
    }
    namedWindow("matches", 0);
    // draw inliers
    drawMatches( leftImg, keypoints1, rightImg, keypoints2, filteredMatches, drawImg, CV_RGB(0, 255, 0), CV_RGB(0, 0, 255), matchesMask, 2 );
    imshow("matches", drawImg);
}
And then I feed finalPoints1 and finalPoints2 into findFundamentalMat as Mats. Now my only problem is that my output is nowhere near what I expected; the images are all screwed up :-/
ANSWER
Your match array holds offsets into the descriptor arrays. Since each descriptor has a corresponding keypoint, you can simply iterate over the indices and build two arrays of keypoints. These keypoints can then be fed into findFundamentalMat.
Edit:
I believe your mistake is in generating finalMatches, where you are losing information. The vector filteredMatches does double duty: the positions where matchesMask is 1 are indices into keypoints1, while the indices stored in finalMatches are indices into keypoints2. By compacting down into finalMatches, you are in effect losing the first set of indices.
Try the following:
Have a loop that counts how many actual matches there are:
Now declare CvMats of correct size:
Now iterate over the filteredMatches and insert: (Exact syntax may differ, you get the idea)