Fastest available algorithm for distance transform
I am looking for the fastest available algorithm for distance transform.
This site, http://homepages.inf.ed.ac.uk/rbf/HIPR2/distance.htm, states: "The distance transform can be calculated much more efficiently using clever algorithms in only two passes (e.g. Rosenfeld and Pfaltz 1968)."
Searching around, I found: "Rosenfeld, A and Pfaltz, J L. 1968. Distance Functions on Digital Pictures. Pattern Recognition, 1, 33-61."
But surely there is a better and faster algorithm than the one from 1968 by now? In fact, I could not find the 1968 source, so any help is highly appreciated.
5 Answers
This paper reviews the known exact distance transform algorithms:
"2D Euclidean distance transform algorithms: A comparative survey"
https://rfabbri.github.io/stuff/fabbri-EDT-survey-ACMCSurvFeb2008.pdf
The fastest exact distance transform is from Meijster:
"A General Algorithm for Computing Distance Transforms in Linear Time."
http://fab.cba.mit.edu/classes/S62.12/docs/Meijster_distance.pdf
The design of the algorithm is particularly well suited for parallel calculation.
This is implemented in my open source library, which tries to emulate Photoshop's "Layer Style":
https://github.com/vinniefalco/LayerEffects
For its approximate cv::distanceTransform function, the OpenCV library uses an algorithm that sweeps over the image from top left to bottom right and back. The algorithm is described in the paper "Distance transformations in digital images" by Gunilla Borgefors (Comput. Vision Graph. Image Process. 34(3), pp. 344–371, 1986).
The algorithm calculates the distance through a combination of basic jumps (horizontal, vertical, diagonal, and the knight's move), each of which is assigned a cost. The distance from one pixel to another is then the sum of the costs of the jumps needed to get there, and the cheapest such path approximates the exact (Euclidean) distance.
The algorithm works like this: a small mask holding the jump costs is moved over the image from top left to bottom right. At each position, the cell under the mask's 0-entry either keeps its current value (if that value is already known and smaller) or takes the minimum, over the mask's other entries, of the neighboring cell's value plus the corresponding mask cost.
After that, a second pass from bottom right to top left is performed with the mask flipped both vertically and horizontally. After the second pass, the distances are complete.
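As a rough illustration of this two-pass scheme, here is a small C++ sketch of a chamfer distance transform using the 3-4 weights discussed in Borgefors's paper (cost 3 for horizontal/vertical steps, 4 for diagonal steps). This is not OpenCV's implementation; the function name and the choice of a 3x3 mask are mine, and the larger masks that include the knight's move work the same way with more neighbors.

#include <algorithm>
#include <cstdint>
#include <limits>
#include <vector>

// Two-pass 3-4 chamfer distance transform.
// 'feature' is a row-major w x h binary image; nonzero marks feature pixels.
// Returns chamfer distances (divide by 3 to approximate Euclidean pixel units).
std::vector<int> chamfer34(const std::vector<std::uint8_t>& feature, int w, int h)
{
    const int INF = std::numeric_limits<int>::max() / 2;   // large, but safe to add costs to
    std::vector<int> d(w * h);
    for (int i = 0; i < w * h; ++i)
        d[i] = feature[i] ? 0 : INF;
    auto at = [&](int x, int y) -> int& { return d[y * w + x]; };

    // Forward pass (top-left to bottom-right): only neighbors already visited.
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int v = at(x, y);
            if (x > 0)              v = std::min(v, at(x - 1, y) + 3);
            if (y > 0)              v = std::min(v, at(x, y - 1) + 3);
            if (x > 0 && y > 0)     v = std::min(v, at(x - 1, y - 1) + 4);
            if (x + 1 < w && y > 0) v = std::min(v, at(x + 1, y - 1) + 4);
            at(x, y) = v;
        }

    // Backward pass (bottom-right to top-left): the mask flipped both ways.
    for (int y = h - 1; y >= 0; --y)
        for (int x = w - 1; x >= 0; --x) {
            int v = at(x, y);
            if (x + 1 < w)              v = std::min(v, at(x + 1, y) + 3);
            if (y + 1 < h)              v = std::min(v, at(x, y + 1) + 3);
            if (x + 1 < w && y + 1 < h) v = std::min(v, at(x + 1, y + 1) + 4);
            if (x > 0 && y + 1 < h)     v = std::min(v, at(x - 1, y + 1) + 4);
            at(x, y) = v;
        }
    return d;
}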
There is a lot of newer work on computing distance functions.
By the way, you would really want to use these instead of the work by Rosenfeld, especially when you want to compute distances in the presence of obstacles.
Felzenszwalb and Huttenlocher present an elegant algorithm that is exact and O(N) in their paper "Distance Transforms of Sampled Functions", available here. They exploit the fact that the squared Euclidean distance transform is the lower envelope of parabolas, which can be computed independently in each dimension.
Source code is also available.
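To make the parabola idea concrete, here is a small C++ transcription of the one-dimensional squared distance transform from the Felzenszwalb and Huttenlocher paper (their lower-envelope-of-parabolas routine); the names v and z follow the paper's pseudocode, while the function name dt1d and everything else are my own framing. For a binary image you would seed f with 0 at feature pixels and a large finite constant elsewhere, run this over every row, and then over every column of the result.

#include <limits>
#include <vector>

// 1-D squared distance transform of a sampled function f (Felzenszwalb & Huttenlocher).
// d[q] = min over i of ((q - i)^2 + f[i]), computed in O(n) as the lower envelope
// of the parabolas y = (x - i)^2 + f[i].
std::vector<double> dt1d(const std::vector<double>& f)
{
    const int n = static_cast<int>(f.size());
    std::vector<double> d(n);
    if (n == 0) return d;
    const double INF = std::numeric_limits<double>::infinity();
    std::vector<int>    v(n);      // indices of the parabolas in the lower envelope
    std::vector<double> z(n + 1);  // boundaries between consecutive parabolas
    int k = 0;
    v[0] = 0;
    z[0] = -INF;
    z[1] = +INF;

    for (int q = 1; q < n; ++q) {
        // Horizontal position where the parabola from q overtakes the last kept one.
        double s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2.0 * (q - v[k]));
        while (s <= z[k]) {        // the last parabola is hidden; drop it and retry
            --k;
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2.0 * (q - v[k]));
        }
        ++k;
        v[k] = q;
        z[k] = s;
        z[k + 1] = +INF;
    }

    k = 0;
    for (int q = 0; q < n; ++q) {  // read the distances off the envelope
        while (z[k + 1] < q) ++k;
        double dq = q - v[k];
        d[q] = dq * dq + f[v[k]];
    }
    return d;
}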
I have implemented Meijster's O(N) method cited in Vinnie's answer.
"A General Algorithm for Computing Distance Transforms in Linear Time."
http://fab.cba.mit.edu/classes/S62.12/docs/Meijster_distance.pdf
The nice thing is that it can be parallelized very efficiently, computing each pixel line independently (it is a separable method). Running on 12 CPU cores, it computes the distance field of a 1000^3 volumetric image in a few seconds.
The Felzenszwalb and Huttenlocher solution, "Distance Transforms of Sampled Functions" from 2012 (cited in bw1024's answer), is based on exactly the same idea. Interestingly, they do not cite Meijster's work from 12 years earlier.
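For reference, here is a compact (unparallelized) C++ transcription of the two phases as I read them from the pseudocode in Meijster's paper; the array names g, s, t and the functions f and Sep follow the paper, while the image layout, the function name meijsterEDT, and returning squared distances are my choices. Each column in phase 1 and each row in phase 2 is independent, which is where the easy parallelization comes from.

#include <cstdint>
#include <vector>

// Squared Euclidean distance transform after Meijster, Roerdink & Hesselink.
// 'feature' is a row-major w x h binary image; nonzero marks feature pixels.
std::vector<long long> meijsterEDT(const std::vector<std::uint8_t>& feature, int w, int h)
{
    const long long INF = w + h;                 // larger than any vertical distance
    std::vector<long long> g(w * h), dt(w * h);

    // Phase 1: per-column vertical distances, one downward and one upward scan.
    // Columns are independent, so this loop can be split across threads.
    for (int x = 0; x < w; ++x) {
        g[x] = feature[x] ? 0 : INF;
        for (int y = 1; y < h; ++y)
            g[y * w + x] = feature[y * w + x] ? 0 : g[(y - 1) * w + x] + 1;
        for (int y = h - 2; y >= 0; --y)
            if (g[(y + 1) * w + x] + 1 < g[y * w + x])
                g[y * w + x] = g[(y + 1) * w + x] + 1;
    }

    // Phase 2: per-row combination of the vertical distances (rows are independent too).
    std::vector<int> s(w), t(w);
    for (int y = 0; y < h; ++y) {
        auto f = [&](long long x, long long i) {              // squared distance via column i
            long long gi = g[y * w + i];
            return (x - i) * (x - i) + gi * gi;
        };
        auto sep = [&](long long i, long long u) {            // Sep from the paper
            long long gu = g[y * w + u], gi = g[y * w + i];
            return (u * u - i * i + gu * gu - gi * gi) / (2 * (u - i));
        };
        int q = 0;
        s[0] = 0;
        t[0] = 0;
        for (int u = 1; u < w; ++u) {                         // forward scan
            while (q >= 0 && f(t[q], s[q]) > f(t[q], u)) --q;
            if (q < 0) { q = 0; s[0] = u; }
            else {
                long long wq = 1 + sep(s[q], u);              // "w" in the paper's pseudocode
                if (wq < w) { ++q; s[q] = u; t[q] = static_cast<int>(wq); }
            }
        }
        for (int u = w - 1; u >= 0; --u) {                    // backward scan reads off distances
            dt[y * w + u] = f(u, s[q]);
            if (u == t[q]) --q;
        }
    }
    return dt;
}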