Best way to compute nearest neighbours over a set of constantly moving XY coordinates?

Posted on 2025-02-11 06:55:29


So I am using OpenCV to do some object tracking. What I have so far is a script that finds good points to track in this video.

What I would like to do now is draw lines between the points; however, I want each line to go to that point's nearest neighbour. The intuitive and inefficient option is to check every coordinate against every other and keep track of which is closest. I implemented this just to see how slow it was and got about 5 fps.
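The brute-force approach described above might look something like this (a minimal sketch; the function name is illustrative, not from the original script):

```python
import math

def nearest_neighbours_naive(points):
    """For each point, return the index of its nearest other point (O(n^2))."""
    result = []
    for i, (x1, y1) in enumerate(points):
        best_j, best_d = -1, math.inf
        for j, (x2, y2) in enumerate(points):
            if i == j:
                continue
            # squared distance is enough for comparisons, so skip the sqrt
            d = (x1 - x2) ** 2 + (y1 - y2) ** 2
            if d < best_d:
                best_d, best_j = d, j
        result.append(best_j)
    return result

pts = [(0, 0), (1, 0), (5, 5)]
print(nearest_neighbours_naive(pts))  # → [1, 0, 1]
```

The doubly nested Python loop is what makes this so slow per frame.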

I have read about k-d trees and quad-/octrees, however it seems that these all depend on most of / all of the data points remaining static? Given that the points in my example are constantly shifting, the tree would have to be regenerated on each frame, which seems too resource intensive?

Can anyone recommend the best option for keeping track of the nearest neighbour of each point?

Thank you

Comments (1)

草莓味的萝莉 2025-02-18 06:55:29


For 100 points, the naive approach calculates on the order of 100 × 100 = 10k distances per frame.

You are probably writing your own loops in Python, which is "slow" because it's all interpreted. You could use Numba to JIT-compile that. Or you could use NumPy and SciPy to do the calculations. That is faster because the work happens in compiled code rather than in the Python interpreter.

Check out scipy.spatial.distance_matrix, then use np.argmin.
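A minimal sketch of that vectorised version (the point array is made up for illustration):

```python
import numpy as np
from scipy.spatial import distance_matrix

pts = np.array([(0, 0), (1, 0), (5, 5)], dtype=float)

d = distance_matrix(pts, pts)   # (n, n) matrix of pairwise distances
np.fill_diagonal(d, np.inf)     # a point must not match itself
nearest = np.argmin(d, axis=1)  # index of each point's nearest neighbour
print(nearest)                  # → [1 0 1]
```

This is still O(n²) work, but done in compiled code rather than a Python loop, which is typically orders of magnitude faster for a few hundred points.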

Then... the naive approach is O(N²). Sorting your points into a grid (O(n)) or a tree (O(n log n)), even if you rebuild it for every frame, is cheap by comparison. And it lets you do lookups on drastically reduced sets of candidate neighbors.

Sort your points into a k-width grid by calculating each point's grid index, (i, j) = (y // k, x // k), and inserting the point into a list for that cell. Now your lookup just has to grab cells in ever-expanding radii until it finds points, then pick the closest.
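A sketch of that grid scheme, assuming a non-empty point set (helper names are hypothetical; k is a tuning parameter):

```python
from collections import defaultdict
import math

def build_grid(points, k):
    """Bucket point indices into a k-wide grid: cell = (y // k, x // k)."""
    grid = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        grid[(int(y // k), int(x // k))].append(idx)
    return grid

def nearest_in_grid(points, grid, k, q):
    """Nearest stored point to query q, scanning rings of cells of growing radius."""
    qx, qy = q
    ci, cj = int(qy // k), int(qx // k)
    best_idx, best_d2 = -1, math.inf
    r = 0
    while True:
        # scan only the ring of cells at Chebyshev radius r (inner cells done already)
        for i in range(ci - r, ci + r + 1):
            for j in range(cj - r, cj + r + 1):
                if max(abs(i - ci), abs(j - cj)) != r:
                    continue
                for idx in grid.get((i, j), ()):
                    px, py = points[idx]
                    d2 = (px - qx) ** 2 + (py - qy) ** 2
                    if d2 < best_d2:
                        best_d2, best_idx = d2, idx
        # any point in ring r+1 or beyond lies at least r*k away, so once the
        # best distance found is <= r*k, expanding further cannot improve it
        if best_idx != -1 and best_d2 <= (r * k) ** 2:
            return best_idx
        r += 1

pts = [(0, 0), (10, 0), (0, 10)]
g = build_grid(pts, 4)
print(nearest_in_grid(pts, g, 4, (1, 1)))  # → 0
```

Rebuilding the grid is a single O(n) pass, so doing it every frame is cheap; the extra ring check after the first hit is needed because a closer point can still sit in a diagonal cell of the next ring.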

Note: you can have NumPy arrays of dtype=object, holding a list or set per cell in this case. I'd recommend list over set: you don't need fast membership tests there, so the expense of sets has no benefit, but you do need fast insertion.

If your nearest neighbor query has a maximum distance beyond which you don't consider anything to be a "neighbor", you can just size the grid cells to be that size. That makes lookup a lot easier because you only have to consider the cell itself and all its neighbors.
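With a cutoff distance, the lookup collapses to a fixed 3 × 3 block of cells, roughly like this (a sketch with illustrative names; cell width equals the cutoff):

```python
from collections import defaultdict

def nearest_within_radius(points, q, max_dist):
    """Nearest neighbour of q, or None if nothing lies within max_dist.
    Cell width == max_dist, so only the query's cell and its 8 neighbours
    can contain a qualifying point."""
    k = max_dist
    grid = defaultdict(list)
    for idx, (x, y) in enumerate(points):
        grid[(int(y // k), int(x // k))].append(idx)
    qx, qy = q
    ci, cj = int(qy // k), int(qx // k)
    best_idx, best_d2 = None, max_dist ** 2
    for i in (ci - 1, ci, ci + 1):
        for j in (cj - 1, cj, cj + 1):
            for idx in grid.get((i, j), ()):
                px, py = points[idx]
                d2 = (px - qx) ** 2 + (py - qy) ** 2
                if d2 <= best_d2:
                    best_d2, best_idx = d2, idx
    return best_idx

pts = [(0, 0), (3, 0), (50, 50)]
print(nearest_within_radius(pts, (1, 0), 5))    # → 0
print(nearest_within_radius(pts, (30, 30), 5))  # → None
```

This works because any point within max_dist of the query is at most one cell away in each direction, so no ring expansion is needed at all.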
