Nearby points missed by scipy.spatial.Delaunay

Posted 2024-12-14 08:05:07

When comparing the Delaunay triangulation routines of scipy (0.9.0) and matplotlib (1.0.1), I noticed unexplained behaviour. My points are UTM coordinates stored as numpy.array([[easting, northing], [easting, northing], [easting, northing]]). Scipy's edges are missing some of my points, while matplotlib's include all of them. Is there a fix, or am I doing something wrong?

import scipy
import numpy
from scipy.spatial import Delaunay
import matplotlib.delaunay


def delaunay_edges(points):
    d = scipy.spatial.Delaunay(points)
    s = d.vertices  # triangle vertex indices; renamed "simplices" in later SciPy
    # build the three edges of each triangle: (v0, v1), (v1, v2), (v2, v0)
    return numpy.vstack((s[:, :2], s[:, 1:], s[:, ::-2]))


def delaunay_edges_matplotlib(points):
    cens, edges, tri, neig = matplotlib.delaunay.delaunay(points[:, 0], points[:, 1])
    return edges


points = numpy.array([
    [500000.25, 6220000.25],
    [500000.5,  6220000.5],
    [500001.0,  6220001.0],
    [500002.0,  6220003.0],
    [500003.0,  6220005.0],
])

edges1 = delaunay_edges(points)
edges2 = delaunay_edges_matplotlib(points)

numpy.unique(edges1).shape # Some points missing, presumably nearby ones
numpy.unique(edges2).shape # Includes all points
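For what it is worth, later SciPy releases (0.12 and up) expose qhull's command-line options through a qhull_options parameter, and joggling the input is a common workaround for precision symptoms like this one. A minimal sketch, assuming a newer SciPy where the triangle array is called simplices:

```python
import numpy
from scipy.spatial import Delaunay

points = numpy.array([
    [500000.25, 6220000.25],
    [500000.5,  6220000.5],
    [500001.0,  6220001.0],
    [500002.0,  6220003.0],
    [500003.0,  6220005.0],
])

# "QJ" tells qhull to joggle the input: each coordinate is perturbed by a
# tiny random amount, which sidesteps the precision problems and forces
# every input point to appear as a vertex of the triangulation.
tri = Delaunay(points, qhull_options="QJ")

print(numpy.unique(tri.simplices))  # [0 1 2 3 4]
```

Note that joggling perturbs the coordinates slightly, so the triangulation is computed for marginally different points; the origin shift suggested in the answer below avoids that.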


Comments (1)

待天淡蓝洁白时 2024-12-21 08:05:07

This behavior of scipy.spatial.Delaunay is likely connected with the imprecision of floating-point arithmetic.

As you may know, scipy.spatial.Delaunay uses the C qhull library to compute the Delaunay triangulation. Qhull, in turn, is an implementation of the Quickhull algorithm, which the authors describe in detail in this paper (1). You may also know that floating-point arithmetic in computers follows the IEEE 754 standard (you can read about it on Wikipedia, for instance). According to the standard, each finite number is most simply described by three integers: s = a sign (zero or one), c = a significand (or "coefficient"), and q = an exponent. The number of bits used to represent these integers varies with the data type. Consequently, the density of floating-point numbers along the numerical axis is not constant: the larger the numbers, the more sparsely they are distributed. You can see this even in the Google calculator: subtracting 3333333333333333 from 3333333333333334 there yields 0, because both values get rounded to the same floating-point number.
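The uneven density of floating-point numbers is easy to check with numpy.spacing, which returns the gap between a value and the next representable double (a small illustrative sketch):

```python
import numpy

# the gap to the next representable double grows with magnitude
print(numpy.spacing(1.0))         # 2.220446049250313e-16
print(numpy.spacing(6220000.25))  # ~9.3e-10, the resolution near a UTM northing
print(numpy.spacing(1e16))        # 2.0 -- consecutive integers are no longer distinct
```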

Now, knowing about rounding errors, we might want to read chapter 4 of the paper (1), entitled "Coping with Imprecision". That chapter describes how the algorithm deals with rounding errors:

Quickhull partitions a point and determines its horizon facets by computing 
whether the point is above or below a hyperplane. We have assumed that 
computations return consistent results ... With floating-point arithmetic, we 
cannot prevent errors from occurring, but we can repair the damage after 
processing a point. We use brute force: if adjacent facets are nonconvex, one of 
the facets is merged into a neighbor. Quickhull merges the facet that minimizes 
the maximum distance of a vertex to the neighbor.

So that is what might be happening: Quickhull cannot distinguish between two nearby points, so it merges two facets, successfully eliminating one of them. To eliminate these errors, you can, for instance, try moving your coordinate origin:

co = points[0]
points = points - co

edges1 = delaunay_edges(points)
edges2 = delaunay_edges_matplotlib(points)

print(numpy.unique(edges1))
# [0 1 2 3 4]
print(numpy.unique(edges2))
# [0 1 2 3 4]

This moves the computations into a region where floating-point numbers are reasonably dense.
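The same fix can be checked against scipy.spatial.Delaunay directly (a sketch, assuming a newer SciPy where the vertices attribute has become simplices):

```python
import numpy
from scipy.spatial import Delaunay

points = numpy.array([
    [500000.25, 6220000.25],
    [500000.5,  6220000.5],
    [500001.0,  6220001.0],
    [500002.0,  6220003.0],
    [500003.0,  6220005.0],
])

# shift the origin to the first point: the coordinates become small,
# where doubles are far denser, so qhull can tell the points apart
shifted = points - points[0]
tri = Delaunay(shifted)

print(numpy.unique(tri.simplices))  # [0 1 2 3 4]
```

After the shift the largest coordinate is 4.75, where neighbouring doubles are about 8.9e-16 apart, versus roughly 9.3e-10 at the original UTM magnitudes.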

