Tracking blobs in AForge

Posted 2024-09-15 18:48:42


I've looked and looked. Does anybody know how to track blobs with AForge? I know it isn't implemented there, but I really need to use AForge because of the rest of the code I'm using. I saw some references to Kalman filtering, but I need an implementation, not theories.

tnx,
v.


Comments (1)

梦萦几度 2024-09-22 18:48:42


The AForge.NET BlobCounter will provide the blob finding, though it's fairly simple and won't support 'broken' blobs. If you want to implement some simple blob tracking, a few things you might consider:
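
For reference, a minimal blob-finding pass with BlobCounter might look like the C# sketch below; frame is assumed to be an already-thresholded 8bpp grayscale Bitmap, and the size limits are illustrative values to tune empirically:

    using System;
    using System.Drawing;
    using AForge.Imaging;

    static Blob[] FindBlobs(Bitmap frame)
    {
        // frame is assumed to be a binary (thresholded) 8bpp grayscale image.
        BlobCounter counter = new BlobCounter();
        counter.FilterBlobs = true;   // drop blobs below the size limits
        counter.MinWidth = 10;        // illustrative; tune for your objects
        counter.MinHeight = 10;
        counter.ProcessImage(frame);

        Blob[] blobs = counter.GetObjectsInformation();
        foreach (Blob b in blobs)
        {
            // CenterOfGravity gives a sub-pixel centroid for each blob
            AForge.Point c = b.CenterOfGravity;
            Console.WriteLine("blob {0}: area={1}, center=({2:F1}, {3:F1})",
                              b.ID, b.Area, c.X, c.Y);
        }
        return blobs;
    }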

  1. If your blobs are occasionally fragmented, you may need to perform some clustering (finding groups of center-of-mass locations and combining small fragments) to get a good estimate of the location. Analyzing many frames increases the chance of encountering boundary conditions such as broken blobs, so this is important to consider. Alternatively, if you have good control over conditions (such as lighting), simple blob finding may be sufficient. Minor (only a few pixels) breaks can be healed with repeated dilation/erosion operations before the blob find (see the closing-filter sketch after this list), though this also amplifies noise and reduces the positional accuracy.

  2. For the actual tracking, you have a few approaches. Kalman filtering can give you very good accuracy (sub-pixel), as it integrates information from multiple frames. If you don't need that level of accuracy, you might consider a very simple algorithm such as always picking the sufficiently large blob that was closest to the most recent location. This works if the object is not moving very quickly and you don't have other blobs popping up near your object being tracked. If you need better analysis performance, you might also be able to estimate the velocity from the last two frames and use that to limit the region you have to consider when searching for the blob.

  3. If you need to track a high-velocity object, that becomes a bit more challenging. This is a case where you might combine blob finding with template matching: create a template from the blob find and match it against subsequent blobs, scoring them by their pattern and not merely their size/location (see the template-matching sketch after this list). This requires that the blob appear reasonably consistent over time, which means the object's physical shape and the lighting conditions must remain fixed.
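
To make item 1 concrete, here is a rough C# sketch of the dilation/erosion healing step. It uses AForge's Closing filter (a dilatation followed by an erosion); the helper name, the pass count, and the assumption that frame is a binary 8bpp grayscale Bitmap are all illustrative:

    using System.Drawing;
    using AForge.Imaging.Filters;

    static Bitmap HealSmallBreaks(Bitmap frame, int passes)
    {
        // Closing = dilatation followed by erosion; each extra pass bridges
        // slightly larger gaps, but also amplifies noise and blurs the blob
        // outline, which costs positional accuracy.
        Closing closing = new Closing();
        Bitmap healed = frame;
        for (int i = 0; i < passes; i++)
        {
            Bitmap next = closing.Apply(healed);
            if (healed != frame) healed.Dispose();  // free intermediates
            healed = next;
        }
        return healed;
    }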
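
For item 3, AForge.NET does ship an ExhaustiveTemplateMatching class you could use for the appearance scoring. A hedged sketch, where template is a small crop saved when the track was acquired, searchRegion is an image or sub-image in the same pixel format, and the 0.85 similarity threshold is just an assumed starting point:

    using System;
    using System.Drawing;
    using AForge.Imaging;

    static void ScoreCandidates(Bitmap searchRegion, Bitmap template)
    {
        // Brute-force correlation; keep searchRegion small or this gets slow.
        ExhaustiveTemplateMatching matcher =
            new ExhaustiveTemplateMatching(0.85f);  // minimum similarity, tune it

        TemplateMatch[] matches = matcher.ProcessImage(searchRegion, template);
        foreach (TemplateMatch m in matches)
        {
            // Similarity is in [0, 1]; Rectangle locates the match
            Console.WriteLine("match at {0}, similarity {1:F2}",
                              m.Rectangle, m.Similarity);
        }
    }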


UPDATE in response to your question:

Only have a few minutes this morning, so no actual code, but the basic idea is this:

  1. Only consider blobs greater than a configurable size (you'll probably have to determine this empirically.)

  2. Retain information on last two blob locations found and the times at which they were sampled. Let's call these vectors in R2, p1 and p0, at times t1 and t0.

  3. If you assume that velocity is changing slowly, then a preliminary estimate of the new location at time t2 is p2 = p1 + (t2-t1)*(p1-p0)/(t1-t0). This may or may not be a good assumption, so you'll want to verify this by capturing your object under the required range of motions.

  4. You can optionally use this estimate to restrict your blob search area to a sub-image centered on the estimated location. After you perform the blob find, take the blob that's closest to the estimated location as your new location measurement. (A sketch combining these steps follows this list.)
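
Here is a rough C# sketch pulling steps 1-4 together for a single tracked object in binary 8bpp frames; it also coasts on the prediction for up to two missed frames (the side effect noted below). The class name, the miss tolerance, and all thresholds are illustrative, and the optional sub-image restriction from step 4 is omitted for brevity:

    using System;
    using System.Drawing;
    using AForge.Imaging;

    class SimpleBlobTracker
    {
        private AForge.Point p0, p1;  // last two measured positions (step 2)
        private double t0, t1;        // their sample times, in seconds
        private int samples;          // how many measurements we hold (0..2)
        private int missed;           // consecutive frames with no blob found

        private readonly BlobCounter counter = new BlobCounter();

        public SimpleBlobTracker(int minWidth, int minHeight)
        {
            counter.FilterBlobs = true;   // step 1: ignore too-small blobs
            counter.MinWidth = minWidth;
            counter.MinHeight = minHeight;
        }

        // Processes the frame sampled at time t2, returns the new position.
        public AForge.Point Update(Bitmap frame, double t2)
        {
            // Step 3: p2 = p1 + (t2-t1)*(p1-p0)/(t1-t0), assuming slowly
            // changing velocity; with fewer than two samples, just reuse p1.
            AForge.Point predicted = p1;
            if (samples >= 2)
            {
                float k = (float)((t2 - t1) / (t1 - t0));
                predicted = new AForge.Point(p1.X + k * (p1.X - p0.X),
                                             p1.Y + k * (p1.Y - p0.Y));
            }

            counter.ProcessImage(frame);
            Blob[] blobs = counter.GetObjectsInformation();

            if (blobs.Length == 0)
            {
                // Blob find failed: coast on the prediction, but only briefly.
                if (samples >= 2 && ++missed <= 2)
                {
                    p0 = p1; t0 = t1;
                    p1 = predicted; t1 = t2;
                    return predicted;
                }
                throw new InvalidOperationException("track lost");
            }
            missed = 0;

            Blob best = blobs[0];
            if (samples == 0)
            {
                // No history yet: start with the largest blob.
                foreach (Blob b in blobs)
                    if (b.Area > best.Area) best = b;
            }
            else
            {
                // Step 4: take the blob closest to the predicted position.
                float bestD = float.MaxValue;
                foreach (Blob b in blobs)
                {
                    float dx = b.CenterOfGravity.X - predicted.X;
                    float dy = b.CenterOfGravity.Y - predicted.Y;
                    if (dx * dx + dy * dy < bestD)
                    {
                        bestD = dx * dx + dy * dy;
                        best = b;
                    }
                }
            }

            // Step 2: shift the measurement history.
            p0 = p1; t0 = t1;
            p1 = best.CenterOfGravity; t1 = t2;
            if (samples < 2) samples++;
            return p1;
        }
    }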

One side effect of the above is that you can work with the estimate if, for some reason, the blob find fails during one frame. It's dangerous to allow this extrapolation for too long, but it can give you some tolerance for minor noise spikes.

You can probably see how this could progress further to include an estimate of acceleration from recent frames or integrate velocity/acceleration from multiple frames to better extrapolate a likely location for the next sample. You could also start to trust that the estimate (with accumulated data from the current and previous frames) is more precise (and perhaps accurate) than the actual measurement. Eventually you wind up with something like the Kalman filter.
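
Since the question asks for an implementation rather than theory, and AForge.NET itself includes no Kalman filter, here is a minimal from-scratch constant-velocity Kalman filter for one axis; run one instance for x and one for y, feeding in the measured blob centroid each frame. The simple diagonal process noise q and the measurement noise r are tuning knobs you must pick empirically, not derived values:

    using System;

    // Minimal constant-velocity Kalman filter for one axis. State is
    // [position p, velocity v]; P is the 2x2 state covariance.
    class Kalman1D
    {
        private double p, v;                       // state estimate
        private double P00 = 1, P01, P10, P11 = 1; // covariance P
        private readonly double q;                 // process noise (velocity drift)
        private readonly double r;                 // measurement noise (centroid jitter)

        public Kalman1D(double initialPosition, double processNoise, double measurementNoise)
        {
            p = initialPosition;
            q = processNoise;
            r = measurementNoise;
        }

        public double Position { get { return p; } }
        public double Velocity { get { return v; } }

        // dt: seconds since the previous update; z: measured coordinate.
        public double Update(double dt, double z)
        {
            // Predict with the constant-velocity model: p += v*dt, v unchanged,
            // and propagate the covariance: A = F*P*F' + Q.
            p += v * dt;
            double A00 = P00 + dt * (P01 + P10) + dt * dt * P11 + q;
            double A01 = P01 + dt * P11;
            double A10 = P10 + dt * P11;
            double A11 = P11 + q;

            // Update: blend the prediction with the measurement z.
            double S = A00 + r;        // innovation covariance
            double K0 = A00 / S;       // Kalman gain for position
            double K1 = A10 / S;       // Kalman gain for velocity
            double y = z - p;          // innovation (measurement residual)
            p += K0 * y;
            v += K1 * y;
            P00 = (1 - K0) * A00;
            P01 = (1 - K0) * A01;
            P10 = A10 - K1 * A00;
            P11 = A11 - K1 * A01;
            return p;
        }
    }

    // Usage sketch: one filter per axis, fed the blob centroid each frame.
    //   Kalman1D kx = new Kalman1D(firstCentroid.X, 0.05, 4.0);
    //   Kalman1D ky = new Kalman1D(firstCentroid.Y, 0.05, 4.0);
    //   ... then, per frame:
    //   double sx = kx.Update(dt, blob.CenterOfGravity.X);
    //   double sy = ky.Update(dt, blob.CenterOfGravity.Y);

With r small relative to q the filter follows the raw measurements closely; with r large it smooths more and leans on the constant-velocity prediction, which is where the sub-pixel accuracy mentioned above comes from.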
