RANSAC algorithm

Posted 2024-10-11 08:47:41


Can anybody please show me how to use the RANSAC algorithm to select common feature points in two images that have a certain portion of overlap? The problem arises from feature-based image stitching.


Comments (1)

空名 2024-10-18 08:47:41

I implemented an image stitcher a couple of years back. The Wikipedia article on RANSAC describes the general algorithm well.

When using RANSAC for feature-based image matching, what you want is to find the transform that best maps the first image onto the second image. This is the model described in the Wikipedia article.
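(Not part of the original answer, but to make the "model" concrete:) since the answer below picks n = 3 correspondences per iteration, the model is presumably an affine transform, which three point pairs determine exactly; for panoramas, a homography fitted from four pairs is the other common choice. A minimal numpy sketch of fitting and applying such an affine model, with purely illustrative helper names, might look like this:

import numpy as np

def estimate_affine(src, dst):
    # Fit a 2x3 affine matrix A by least squares so that, for matched
    # (N, 2) point arrays with N >= 3, dst is approximately [x, y, 1] @ A.T.
    X = np.hstack([src, np.ones((len(src), 1))])
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T

def apply_affine(A, pts):
    # Apply the 2x3 affine transform A to an (N, 2) array of points.
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A.T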

If you have already computed features for both images and have found which features in the first image best match which features in the second image, RANSAC would be used something like this.

The input to the algorithm is:
n - the number of random points to pick each iteration in order to create the transform. I chose n = 3 in my implementation.
k - the number of iterations to run
t - the threshold on the squared distance for a point to be considered a match
d - the number of points that need to be matched for the transform to be valid
image1_points and image2_points - two arrays of the same size containing points. It is assumed that image1_points[x] is best mapped to image2_points[x] according to the computed features (a sketch of one way to obtain such arrays follows this list).
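As a side note (not in the original answer): the matched arrays above can come from any feature detector and matcher. One possible sketch uses OpenCV's ORB features with a brute-force matcher (assuming the opencv-python package, and that img1 and img2 are grayscale numpy arrays you provide):

import cv2
import numpy as np

def matched_points(img1, img2):
    # Detect ORB keypoints and descriptors in both images.
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Cross-checked brute-force matching on the binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    # Two (N, 2) arrays, row x of one matched to row x of the other.
    image1_points = np.float32([kp1[m.queryIdx].pt for m in matches])
    image2_points = np.float32([kp2[m.trainIdx].pt for m in matches])
    return image1_points, image2_points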

best_model = null
best_error = Inf
for i = 0:k
  rand_indices = n distinct random integers from 0:num_points
  base_points = image1_points[rand_indices]
  input_points = image2_points[rand_indices]
  maybe_model = find best transform from input_points -> base_points

  consensus_set = 0
  total_error = 0
  for j = 0:num_points
    error = squared distance between image2_points[j] transformed by maybe_model and image1_points[j]
    if error < t
      consensus_set += 1
      total_error += error

  if consensus_set > d && total_error < best_error
    best_model = maybe_model
    best_error = total_error

The end result is the transform that best transforms the points in image2 to image1, which is exactly what you want when stitching.
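Putting the pieces together, a minimal runnable Python sketch of the loop above might look like this (numpy only; the affine helpers from the earlier sketch are repeated so the block stands alone, and the parameter defaults are illustrative rather than from the original answer):

import numpy as np

def estimate_affine(src, dst):
    # Least-squares 2x3 affine transform mapping src -> dst ((N, 2) arrays).
    X = np.hstack([src, np.ones((len(src), 1))])
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T

def apply_affine(A, pts):
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A.T

def ransac_transform(image1_points, image2_points, n=3, k=1000, t=9.0, d=20):
    # Follows the pseudocode above: repeatedly fit a transform from n random
    # correspondences (image2 -> image1), score it by its consensus set, and
    # keep the best accepted model.
    num_points = len(image1_points)
    best_model, best_error = None, np.inf

    for _ in range(k):
        rand_indices = np.random.choice(num_points, n, replace=False)
        base_points = image1_points[rand_indices]
        input_points = image2_points[rand_indices]
        maybe_model = estimate_affine(input_points, base_points)

        # Squared distance of every transformed image2 point to its image1 match.
        errors = np.sum((apply_affine(maybe_model, image2_points)
                         - image1_points) ** 2, axis=1)
        inliers = errors < t
        consensus_set = int(inliers.sum())
        total_error = float(errors[inliers].sum())

        if consensus_set > d and total_error < best_error:
            best_model, best_error = maybe_model, total_error

    return best_model

The returned 2x3 matrix maps image2 coordinates into image1's frame, so it could, for example, be passed to cv2.warpAffine to resample image2 onto image1's canvas before blending.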
