Extracting segments from a list of 8-connected pixels


Current situation: I'm trying to extract segments from an image. Thanks to openCV's findContours() method, I now have a list of 8-connected points for every contour. However, these lists are not directly usable, because they contain a lot of duplicates.
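
For reference, such lists presumably come from a call along these lines (an illustrative, untested sketch, not the original code; binaryImage is a placeholder for the thresholded input). With CHAIN_APPROX_NONE every contour pixel is kept, which is what produces the long 8-connected lists discussed below.

#include <opencv2/imgproc.hpp>
#include <vector>

std::vector<std::vector<cv::Point>> getContours(const cv::Mat& binaryImage)
{
    std::vector<std::vector<cv::Point>> contours;
    cv::Mat img = binaryImage.clone();   // some OpenCV versions modify the input
    cv::findContours(img, contours, cv::RETR_LIST, cv::CHAIN_APPROX_NONE);
    return contours;                     // one 8-connected point list per contour
}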

The problem: Given a list of 8-connected points, which can contain duplicates, extract segments from it.

Possible solutions:

  • At first, I used openCV's approxPolyDP() method. However, the results are pretty bad... Here is the zoomed contour:

enter image description here

Here is the result of approxPolyDP(): (9 segments! Some overlap)

enter image description here

but what I want is more like:

enter image description here

It's bad because approxPolyDP() can convert something that "looks like several segments" into "several segments". However, what I have is a list of points that tends to iterate several times over itself.

For example, if my points are:

0 1 2 3 4 5 6 7 8 
  9   

Then, the list of points will be 0 1 2 3 4 5 6 7 8 7 6 5 4 3 2 1 9... And if the number of points becomes large (>100), then the segments extracted by approxPolyDP() are unfortunately not duplicates (i.e. they overlap each other but are not strictly equal, so I can't just say "remove duplicates", as opposed to pixels, for example).

  • Perhaps I've got a solution, but it's pretty long (though interesting). First of all, for each 8-connected list, I create a sparse matrix (for efficiency) and set the matrix values to 1 if the pixel belongs to the list. Then, I create a graph, with nodes corresponding to pixels and edges between neighbouring pixels. This also means that I add all the missing edges between pixels (small complexity, possible because of the sparse matrix). Then I remove all possible "squares" (4 neighbouring nodes), which is possible because I am already working on pretty thin contours. Then I can launch a minimum spanning tree algorithm. And finally, I can approximate every branch of the tree with openCV's approxPolyDP(). (A rough sketch of this idea follows right below.)
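
To make the idea concrete, here is an untested sketch of the graph + minimum spanning tree part (Prim's algorithm), assuming the contour comes in as a std::vector<cv::Point>. buildMst and every other name here are illustrative only, the "remove squares" step is skipped, and the weights simply prefer axis-aligned edges over diagonal ones. It also assumes the contour pixels form a single connected blob.

#include <opencv2/core.hpp>
#include <cmath>
#include <functional>
#include <map>
#include <queue>
#include <tuple>
#include <vector>

// Build a spanning tree over the unique contour pixels, connecting
// 8-neighbours and preferring axis-aligned edges (weight 1) over diagonal
// ones (weight sqrt(2)). The deduplication step is what removes the
// "list running over itself" problem of the raw findContours() output.
std::map<int, std::vector<int>> buildMst(const std::vector<cv::Point>& pts,
                                         std::vector<cv::Point>& nodes)
{
    nodes.clear();
    std::map<std::pair<int, int>, int> index;   // pixel -> node id
    for (const cv::Point& p : pts)
        if (index.emplace(std::make_pair(p.x, p.y), (int)nodes.size()).second)
            nodes.push_back(p);

    std::map<int, std::vector<int>> tree;       // adjacency list of the tree
    if (nodes.empty()) return tree;

    using Edge = std::tuple<double, int, int>;  // (weight, from, to)
    std::priority_queue<Edge, std::vector<Edge>, std::greater<Edge>> pq;
    std::vector<bool> inTree(nodes.size(), false);
    pq.emplace(0.0, 0, 0);                      // seed Prim's algorithm with node 0

    while (!pq.empty()) {
        Edge e = pq.top(); pq.pop();
        int from = std::get<1>(e), to = std::get<2>(e);
        if (inTree[to]) continue;
        inTree[to] = true;
        if (to != from) {                       // record the accepted edge
            tree[from].push_back(to);
            tree[to].push_back(from);
        }
        for (int dx = -1; dx <= 1; ++dx)        // push edges to all 8-neighbours
            for (int dy = -1; dy <= 1; ++dy) {
                if (dx == 0 && dy == 0) continue;
                auto it = index.find({nodes[to].x + dx, nodes[to].y + dy});
                if (it == index.end() || inTree[it->second]) continue;
                double w = (dx != 0 && dy != 0) ? std::sqrt(2.0) : 1.0;
                pq.emplace(w, to, it->second);
            }
    }
    return tree;
}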

To sum up: I've got a tedious method that I've not yet implemented, as it seems error-prone. However, I ask you, people at Stack Overflow: are there other existing methods, possibly with good implementations?


Edit: To clarify, once I have a tree, I can extract "branches" (branches start at leaves or at nodes linked to 3 or more other nodes). Then, the algorithm in openCV's approxPolyDP() is the Ramer–Douglas–Peucker algorithm, and here is the Wikipedia picture of what it does:

enter image description here

With this picture, it is easy to understand why it fails when points may be duplicates of each other.
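
For what it's worth, once such a tree exists, the branch extraction and the per-branch approximation could look roughly like the untested sketch below, reusing the hypothetical tree / nodes from the sketch above; extractBranches, approximateBranches and the 2-pixel tolerance are illustrative choices, not a definitive implementation.

#include <opencv2/imgproc.hpp>
#include <map>
#include <set>
#include <utility>
#include <vector>

// Walk the tree and cut it into branches: a branch starts at a leaf
// (degree 1) or a junction (degree >= 3) and ends at the next such node.
std::vector<std::vector<cv::Point>> extractBranches(
    const std::map<int, std::vector<int>>& tree,
    const std::vector<cv::Point>& nodes)
{
    std::vector<std::vector<cv::Point>> branches;
    std::set<std::pair<int, int>> visited;      // edges already walked (both directions)

    for (const auto& entry : tree) {
        int v = entry.first;
        const std::vector<int>& nbrs = entry.second;
        if (nbrs.size() == 2) continue;         // interior node, not a branch start
        for (int start : nbrs) {
            if (visited.count({v, start})) continue;
            std::vector<cv::Point> branch = { nodes[v] };
            int prev = v, cur = start;
            while (true) {
                visited.insert({prev, cur});
                visited.insert({cur, prev});
                branch.push_back(nodes[cur]);
                const std::vector<int>& next = tree.at(cur);
                if (next.size() != 2) break;    // reached a leaf or a junction
                int nxt = (next[0] == prev) ? next[1] : next[0];
                prev = cur;
                cur = nxt;
            }
            branches.push_back(branch);
        }
    }
    return branches;
}

// Each branch is an open curve, so approxPolyDP is used with closed = false;
// the tolerance (here 2 pixels) would need tuning.
std::vector<std::vector<cv::Point>> approximateBranches(
    const std::vector<std::vector<cv::Point>>& branches, double epsilon = 2.0)
{
    std::vector<std::vector<cv::Point>> result;
    for (const std::vector<cv::Point>& branch : branches) {
        std::vector<cv::Point> approx;
        cv::approxPolyDP(branch, approx, epsilon, false);
        result.push_back(approx);
    }
    return result;
}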


Another edit: In my method, there is something that may be interesting to note. When you consider points located on a grid (like pixels), the minimum spanning tree algorithm is generally not useful, because there are many possible minimum spanning trees:

X-X-X-X
|
X-X-X-X

is fundamentally very different from

X-X-X-X
| | | |
X X X X

but both are minimum spanning trees.

However, in my case, my nodes rarely form clusters, because they are supposed to be contours, and a thinning algorithm already runs beforehand inside findContours().


Answer to Tomalak's comment:

enter image description here

If the DP algorithm returned 4 segments (the segment from point 2 to the center being there twice), I would be happy! Of course, with good parameters, I can get to a state where "by chance" I have identical segments, and I can remove duplicates. However, the algorithm is clearly not designed for that.

Here is a real example with far too many segments:

enter image description here


4 Answers

梦毁影碎の 2024-11-23 09:09:25


Using Mathematica 8, I created a morphological graph from the list of white pixels in the image. It is working fine on your first image:

enter image description here

enter image description here

Create the morphological graph:

graph = MorphologicalGraph[binaryimage];

Then you can query the graph properties that are of interest to you.

This gives the names of the vertices in the graph:

vertex = VertexList[graph]

The list of the edges:

EdgeList[graph]

And this gives the positions of the vertices:

pos = PropertyValue[{graph, #}, VertexCoordinates] & /@ vertex

This is what the results look like for the first image:

In[21]:= vertex = VertexList[graph]

Out[21]= {1, 3, 2, 4, 5, 6, 7, 9, 8, 10}

In[22]:= EdgeList[graph]

Out[22]= {1 \[UndirectedEdge] 3, 2 \[UndirectedEdge] 4,  3 \[UndirectedEdge] 4, 
          3 \[UndirectedEdge] 5, 4 \[UndirectedEdge] 6,  6 \[UndirectedEdge] 7, 
          6 \[UndirectedEdge] 9, 8 \[UndirectedEdge] 9,  9 \[UndirectedEdge] 10}

In[26]:= pos = PropertyValue[{graph, #}, VertexCoordinates] & /@ vertex

Out[26]= {{54.5, 191.5}, {98.5, 149.5},  {42.5, 185.5}, 
          {91.5, 138.5}, {132.5, 119.5}, {157.5, 72.5},
          {168.5, 65.5}, {125.5, 52.5},  {114.5, 53.5}, 
          {120.5, 29.5}}

Given the documentation, http://reference.wolfram.com/mathematica/ref/MorphologicalGraph.html, the command MorphologicalGraph first computes the skeleton by morphological thinning:

skeleton = Thinning[binaryimage, Method -> "Morphological"]

Then the vertices are detected; they are the branch points and the end points:

verteximage = ImageAdd[
                  MorphologicalTransform[skeleton, "SkeletonEndPoints"],   
                  MorphologicalTransform[skeleton, "SkeletonBranchPoints"]]

enter image description here

And then the vertices are linked after analysis of their connectivity.

For example, one could start by breaking the structure around the vertices and then looking for the connected components, revealing the edges of the graph:

comp = MorphologicalComponents[
           ImageSubtract[
               skeleton, 
               Dilation[verteximage, CrossMatrix[1]]]];
Colorize[comp] 

enter image description here

The devil is in the details, but that sounds like a solid starting point if you wish to develop your own implementation.

无需解释 2024-11-23 09:09:25


Try math morphology. First you need to dilate or close your image to fill holes.

cvDilate(pimg, pimg, NULL, 3);
cvErode(pimg, pimg, NULL);

I got this image

enter image description here

The next step should be applying a thinning algorithm. Unfortunately, it's not implemented in OpenCV (MATLAB has bwmorph with the thin argument). For example, with MATLAB I thinned the image to this one:

enter image description here

However, OpenCV has all the basic morphological operations needed to implement thinning (cvMorphologyEx, cvCreateStructuringElementEx, etc.).
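
For example, a rough, untested sketch of the classic morphological-skeleton iteration (erode / open / subtract), written with the C++ API rather than the old cv* functions, might look like this. Note that the morphological skeleton is related to, but not identical to, MATLAB's bwmorph(..., 'thin'); morphologicalSkeleton and binaryImage are placeholder names.

#include <opencv2/imgproc.hpp>

cv::Mat morphologicalSkeleton(const cv::Mat& binaryImage)
{
    cv::Mat img = binaryImage.clone();
    cv::Mat skeleton = cv::Mat::zeros(img.size(), CV_8UC1);
    cv::Mat element = cv::getStructuringElement(cv::MORPH_CROSS, cv::Size(3, 3));
    cv::Mat eroded, opened, temp;

    while (cv::countNonZero(img) > 0) {
        cv::erode(img, eroded, element);
        cv::dilate(eroded, opened, element);      // opening = erosion then dilation
        cv::subtract(img, opened, temp);          // pixels removed by the opening
        cv::bitwise_or(skeleton, temp, skeleton); // accumulate them into the skeleton
        eroded.copyTo(img);                       // continue with the eroded image
    }
    return skeleton;
}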

Another idea.

The distance transform is said to be very useful in such tasks, and that may well be so.
Consider the cvDistTransform function. It creates an image like this:

enter image description here

Then using something like cvAdaptiveThreshold:

enter image description here

That's the skeleton. I guess you can iterate over all connected white pixels, find curves and filter out small segments.
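
A rough, untested sketch of that route with the C++ API (distanceTransform followed by adaptiveThreshold on the rescaled distance map); the parameter values are only guesses that would need tuning, and binaryImage is a placeholder.

#include <opencv2/imgproc.hpp>

cv::Mat ridgesFromDistanceTransform(const cv::Mat& binaryImage)
{
    cv::Mat dist;
    cv::distanceTransform(binaryImage, dist, cv::DIST_L2, 3);

    // adaptiveThreshold needs an 8-bit image, so rescale the float distances.
    cv::Mat dist8u;
    cv::normalize(dist, dist, 0, 255, cv::NORM_MINMAX);
    dist.convertTo(dist8u, CV_8U);

    // Keep pixels that stand above their local mean: the ridge of the
    // distance map, i.e. an approximate skeleton.
    cv::Mat ridges;
    cv::adaptiveThreshold(dist8u, ridges, 255, cv::ADAPTIVE_THRESH_MEAN_C,
                          cv::THRESH_BINARY, 7, -2);
    return ridges;
}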

最好是你 2024-11-23 09:09:25


I've implemented a similar algorithm before, and I did it in a sort of incremental least-squares fashion. It worked fairly well. The pseudocode is somewhat like:

L = empty set of line segments
for each white pixel p
  line = new line containing only p
  C = empty set of points
  P = set of all neighboring pixels of p
  while P is not empty
    n = first point in P
    add n to C
    remove n from P
    line' = line with n added to it
    perform a least squares fit of line'
    if MSE(line') < max_mse and d(line, n) < max_distance
      line = line'
      add all neighbors of n that are not in C to P
  if size(line) > min_num_points
    add line to L

where MSE(line) is the mean squared error of the line (the sum over all points in the line of the squared distance to the best-fitting line) and d(line, n) is the distance from point n to the line. Good values for max_distance seem to be a pixel or so, and max_mse seems to be much less; it will depend on the average size of the line segments in your image. For me, 0.1 or 0.2 pixels have worked in fairly large images.
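
For reference, an untested sketch of how MSE(line) and d(line, n) could be computed with OpenCV's fitLine, assuming a line is represented by the points currently assigned to it; the helper names are illustrative only.

#include <opencv2/imgproc.hpp>
#include <cmath>
#include <vector>

// Perpendicular distance from p to the fitted line, d(line, n) in the pseudocode.
// cv::fitLine returns (vx, vy, x0, y0): a unit direction and a point on the line.
double pointLineDistance(const cv::Vec4f& line, const cv::Point2f& p)
{
    float dx = p.x - line[2], dy = p.y - line[3];
    // |cross product| of (p - p0) with the unit direction (vx, vy).
    return std::abs(dx * line[1] - dy * line[0]);
}

// MSE(line) in the pseudocode: mean squared perpendicular distance of the
// member points to their own best L2 fit (assumes at least two points).
double lineMse(const std::vector<cv::Point2f>& pts)
{
    cv::Vec4f line;
    cv::fitLine(pts, line, cv::DIST_L2, 0, 0.01, 0.01);
    double sum = 0.0;
    for (const cv::Point2f& p : pts) {
        double d = pointLineDistance(line, p);
        sum += d * d;
    }
    return sum / static_cast<double>(pts.size());
}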

I had been using this on actual images pre-processed with the Canny operator, so those are the only results I have. Here's the result of the above algorithm on an image:
Raw image
Detected segments

It's possible to make the algorithm fast, too. The C++ implementation I have (closed source enforced by my job, sorry, else I would give it to you) processed the above image in about 20 milliseconds. That includes application of the Canny operator for edge detection, so it should be even faster in your case.

彩扇题诗 2024-11-23 09:09:25


You can start by extracting straight lines from your contour image using HoughLinesP, which is provided with openCV:

HoughLinesP(InputArray image, OutputArray lines, double rho, double theta, int threshold, double minLineLength = 0, double maxLineGap = 0)  

If you choose threshold = 1 and a small minLineLength, you can even obtain all the single elements. Be careful though, since it yields many results if you have many edge pixels.
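
For example, a minimal usage sketch (edges and output are placeholder images, and the rho/theta/minLineLength/maxLineGap values are only a starting point to tune):

#include <opencv2/imgproc.hpp>
#include <vector>

void detectSegments(const cv::Mat& edges, cv::Mat& output)
{
    std::vector<cv::Vec4i> lines;
    cv::HoughLinesP(edges, lines,
                    /*rho=*/1, /*theta=*/CV_PI / 180,
                    /*threshold=*/1,
                    /*minLineLength=*/5, /*maxLineGap=*/2);
    for (const cv::Vec4i& l : lines)
        cv::line(output, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]),
                 cv::Scalar(0, 0, 255), 1);
}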
