What is the relationship of SIFT to NCC and ZNCC, and how does it relate to the Harris corner detector?
Is SIFT a matching approach that replaces ZNCC and NCC, or does SIFT just provide input to NCC? In other words, is SIFT proposed as an alternative to the Harris corner detection algorithm?
1 Answer
SIFT is actually a detection, description, and matching pipeline proposed by David Lowe. The reason for its popularity is that it works quite well out of the box.
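If you just want to see the whole pipeline run, here is a minimal sketch using OpenCV (this assumes OpenCV >= 4.4, where SIFT lives in the main module; the file names and the 0.75 ratio threshold are placeholder choices):

```python
import cv2

# "img1.png" / "img2.png" are placeholder file names.
img1 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("img2.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()

# Detection + description in one call: keypoints plus 128-D descriptors.
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Matching: take the two nearest neighbors for each descriptor and
# keep a match only if it passes the nearest-distance-ratio test.
bf = cv2.BFMatcher(cv2.NORM_L2)
pairs = bf.knnMatch(des1, des2, k=2)
good = [m for m, n in pairs if m.distance < 0.75 * n.distance]
print(len(good), "matches passed the ratio test")
```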
The detection step of SIFT (deciding which points in the image are interesting), comparable to the Harris corner detector you mentioned, consists of a Difference-of-Gaussians (DoG) detector. This detector is a center-surround filter applied across a scale-space pyramid (a structure also used in things like pyramidal LK tracking) to find maxima of the scale-space response.
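To make that concrete, here is a deliberately simplified sketch of DoG extremum detection within a single octave. The function name, sigma spacing, and threshold are my illustrative choices, and real SIFT adds multiple octaves, contrast and edge rejection, and subpixel refinement:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_extrema(img, sigmas=(1.0, 1.6, 2.56, 4.1), thresh=0.02):
    """Return (x, y, sigma) triples for scale-space extrema (one octave)."""
    img = img.astype(np.float64) / 255.0
    # Scale space: progressively blurred copies of the image.
    blurred = [gaussian_filter(img, s) for s in sigmas]
    # Each DoG layer approximates a center-surround (Laplacian) response.
    dog = np.stack([b2 - b1 for b1, b2 in zip(blurred, blurred[1:])])
    keypoints = []
    # A keypoint is an extremum over its 3x3x3 scale-space neighborhood.
    for s in range(1, dog.shape[0] - 1):
        for y in range(1, dog.shape[1] - 1):
            for x in range(1, dog.shape[2] - 1):
                patch = dog[s-1:s+2, y-1:y+2, x-1:x+2]
                v = dog[s, y, x]
                if abs(v) > thresh and (v == patch.max() or v == patch.min()):
                    keypoints.append((x, y, sigmas[s]))
    return keypoints
```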
The description step (capturing what distinguishes the region) then builds histograms of gradients in rectangular bins, at several scales centered on the maximal-response scale. This makes it more descriptive, and more robust to illumination changes and the like, than raw pixel values, color histograms, and so on. There is also a normalization relative to the dominant orientation to achieve in-plane rotational invariance.
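A stripped-down sketch of such a descriptor might look like the following. This is illustrative only: the function name and 16-pixel patch size are my choices, and real SIFT adds Gaussian weighting, trilinear interpolation, rotation of the patch to the dominant orientation, and clamping before renormalization:

```python
import numpy as np

def sift_like_descriptor(img, x, y, patch_size=16, n_bins=8):
    """128-D descriptor for a keypoint at (x, y), away from the border."""
    half = patch_size // 2
    patch = img[y - half:y + half, x - half:x + half].astype(np.float64)
    # Per-pixel gradient magnitude and orientation.
    gy, gx = np.gradient(patch)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    cell = patch_size // 4
    desc = []
    for cy in range(4):                      # 4x4 grid of spatial bins
        for cx in range(4):
            rows = slice(cy * cell, (cy + 1) * cell)
            cols = slice(cx * cell, (cx + 1) * cell)
            # Orientation histogram weighted by gradient magnitude.
            hist, _ = np.histogram(ang[rows, cols], bins=n_bins,
                                   range=(0, 2 * np.pi),
                                   weights=mag[rows, cols])
            desc.extend(hist)                # 4 * 4 * 8 = 128 values
    desc = np.asarray(desc)
    # Unit-normalize for robustness to linear illumination changes.
    return desc / (np.linalg.norm(desc) + 1e-12)
```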
The matching step (for a given descriptor/patch, which of a pile of descriptors/patches is closest) in SIFT consists of a nearest-distance-ratio metric, which tests the ratio of distances between the closest match and the second-closest match. The idea is that if the ratio is low, the first is much better than the second, so you should accept the match. Otherwise, the first and second are about equal, and you should reject the match, since noise etc. can easily generate a false match in that scenario. In practice this works better than a plain Euclidean-distance threshold. For large databases, though, you will need vector quantization or similar techniques to keep matching accurate and efficient.
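The ratio test itself is only a few lines of NumPy. The 0.8 threshold below is the value suggested in Lowe's paper, and `ratio_test_match` is just an illustrative name:

```python
import numpy as np

def ratio_test_match(des1, des2, ratio=0.8):
    """Match rows of des1 to rows of des2 using Lowe's ratio test."""
    matches = []
    for i, d in enumerate(des1):
        dists = np.linalg.norm(des2 - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]   # nearest and second nearest
        # Accept only if the best match is clearly better than the
        # runner-up; otherwise it is likely ambiguous or noise.
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, j1))
    return matches
```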
Overall, I'd argue that the SIFT descriptor and matching scheme are a much better and more robust approach than NCC/ZNCC, though you do pay for it in computational load.
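For comparison, ZNCC between two equal-sized patches is just the correlation of mean-subtracted, variance-normalized intensities, which makes it invariant to affine brightness changes (gain and offset) but not to rotation or scale:

```python
import numpy as np

def zncc(patch1, patch2):
    """ZNCC score in [-1, 1] between two equal-sized patches."""
    a = patch1.astype(np.float64).ravel()
    b = patch2.astype(np.float64).ravel()
    # Subtract the mean and divide by the standard deviation, so the
    # score ignores additive and multiplicative brightness changes.
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))
```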