What is the difference between feature detection and descriptor extraction?

Posted on 2024-11-26 10:42:56

Does anyone know the difference between feature detection and descriptor extraction in OpenCV 2.3?

I understand that the latter is required for matching using DescriptorMatcher. If that's the case, what is FeatureDetection used for?

Comments (2)

财迷小姐 2024-12-03 10:42:56

Feature detection

  • In computer vision and image processing the concept of feature detection refers to methods that aim at computing abstractions of image information and making local decisions at every image point whether there is an image feature of a given type at that point or not. The resulting features will be subsets of the image domain, often in the form of isolated points, continuous curves or connected regions.

    Feature detection = how to find some interesting points (features) in the image. (For example, find a corner, find a template, and so on.)

Feature extraction

  • In pattern recognition and in image processing, feature extraction is a special form of dimensionality reduction. When the input data to an algorithm is too large to be processed and it is suspected to be notoriously redundant (much data, but not much information), the input data will be transformed into a reduced representation set of features (also called a feature vector). Transforming the input data into the set of features is called feature extraction. If the extracted features are carefully chosen, it is expected that the feature set will capture the relevant information from the input data, so that the desired task can be performed using this reduced representation instead of the full-size input.

    Feature extraction = how to represent the interesting points we found to compare them with other interesting points (features) in the image. (For example, the local area intensity of this point? The local orientation of the area around the point? And so on)

Practical example: you can find a corner with the Harris corner method, but you can describe it with any method you want (histograms, HOG, or the local orientation in its 8-neighbourhood, for instance).
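In OpenCV 2.3's C++ API this split maps onto FeatureDetector and DescriptorExtractor. A minimal sketch, assuming a grayscale input image at a placeholder path "scene.png" and the FAST/BRIEF algorithm names (any detector/extractor pair registered in your build works the same way):

```cpp
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>

int main()
{
    // Load the image as grayscale ("scene.png" is just a placeholder path).
    cv::Mat img = cv::imread("scene.png", 0);

    // Feature detection: find interesting points (here with the FAST detector).
    cv::Ptr<cv::FeatureDetector> detector = cv::FeatureDetector::create("FAST");
    std::vector<cv::KeyPoint> keypoints;
    detector->detect(img, keypoints);

    // Descriptor extraction: describe the neighbourhood of each keypoint
    // (here with BRIEF) so the points can later be compared and matched.
    cv::Ptr<cv::DescriptorExtractor> extractor = cv::DescriptorExtractor::create("BRIEF");
    cv::Mat descriptors;
    extractor->compute(img, keypoints, descriptors);

    // One row of `descriptors` per keypoint; this matrix is what a
    // DescriptorMatcher consumes.
    return 0;
}
```

The split means you can mix and match: the same keypoints found by one detector can be described by whichever descriptor extractor suits your matching task.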

You can find some more information in this Wikipedia article.

苯莒 2024-12-03 10:42:56

Both feature detection and feature descriptor extraction are parts of feature-based image registration. It only makes sense to look at them in the context of the whole feature-based image registration process to understand what their jobs are.

Feature-based registration algorithm

The following picture from the PCL documentation shows such a registration pipeline:

[Figure: PCL pairwise registration pipeline]

  1. Data acquisition: An input image and a reference image are fed into the algorithm. The images should show the same scene from slightly different viewpoints.

  2. Keypoint estimation (Feature detection): A keypoint (interest point) is a point within the point cloud that has the following characteristics:

    1. it has a clear, preferably mathematically well-founded, definition,
    2. it has a well-defined position in image space,
    3. the local image structure around the interest point is rich in terms of local information contents.

      OpenCV comes with several implementations for feature detection, such as FAST, MSER, and the GFTT/Harris corner detectors.

Such salient points in an image are useful because together they characterise the image and help make different parts of it distinguishable.

  3. Feature descriptors (descriptor extractor): After detecting keypoints we go on to compute a descriptor for every one of them. "A local descriptor is a compact representation of a point's local neighbourhood. In contrast to global descriptors describing a complete object or point cloud, local descriptors try to resemble shape and appearance only in a local neighborhood around a point and thus are very suitable for representing it in terms of matching." (Dirk Holz et al.). OpenCV options include, for example, the SIFT, SURF, and BRIEF descriptor extractors.

  4. Correspondence estimation (descriptor matcher): The next task is to find correspondences between the keypoints found in both images. Therefore the extracted features are placed in a structure that can be searched efficiently (such as a kd-tree). Usually it is sufficient to look up all local feature descriptors and match each one of them to its corresponding counterpart in the other image. However, because two images of a similar scene don't necessarily have the same number of feature descriptors (one cloud can have more data than the other), we need to run a separate correspondence rejection process. OpenCV options include, for example, the brute-force and FLANN-based descriptor matchers; a combined code sketch of steps 4-6 follows this list.

  5. Correspondence rejection: One of the most common approaches to perform correspondence rejection is to use RANSAC (Random Sample Consensus).

  6. Transformation estimation: After robust correspondences between the two images are computed, an absolute orientation algorithm is used to calculate a transformation matrix which is applied to the input image to match the reference image. There are many different algorithmic approaches to do this; a common one is Singular Value Decomposition (SVD).
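For the image (rather than point-cloud) case, steps 4-6 can be sketched with OpenCV 2.3 roughly as follows. This is a minimal sketch, not the PCL pipeline itself: it assumes keypoints and descriptors for both images were computed as in the other answer (binary BRIEF descriptors, hence the Hamming-distance matcher), and it lets findHomography's RANSAC step perform both the correspondence rejection and the transformation estimation:

```cpp
#include <vector>
#include <opencv2/core/core.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>

// Hypothetical helper: keypoints/descriptors are assumed to come from the
// detection and extraction stage (e.g. FAST + BRIEF as sketched earlier).
cv::Mat estimateTransform(const std::vector<cv::KeyPoint>& keypointsA,
                          const cv::Mat& descriptorsA,
                          const std::vector<cv::KeyPoint>& keypointsB,
                          const cv::Mat& descriptorsB)
{
    // Step 4 - correspondence estimation: brute-force matching with the
    // Hamming distance (suitable for binary descriptors such as BRIEF).
    cv::Ptr<cv::DescriptorMatcher> matcher =
        cv::DescriptorMatcher::create("BruteForce-Hamming");
    std::vector<cv::DMatch> matches;
    matcher->match(descriptorsA, descriptorsB, matches);

    // Collect the pixel coordinates of the matched keypoints.
    std::vector<cv::Point2f> ptsA, ptsB;
    for (size_t i = 0; i < matches.size(); ++i)
    {
        ptsA.push_back(keypointsA[matches[i].queryIdx].pt);
        ptsB.push_back(keypointsB[matches[i].trainIdx].pt);
    }

    // Steps 5 and 6 - correspondence rejection and transformation estimation:
    // RANSAC discards outlier matches while fitting a homography that maps
    // the input image onto the reference image.
    return cv::findHomography(ptsA, ptsB, CV_RANSAC, 3.0);
}
```

In a full point-cloud pipeline the final step would instead use an SVD-based absolute orientation algorithm, as the answer notes; the homography here is simply the image-domain counterpart of that transformation estimate.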
