Shape tracking after filtered detection
I am using a Kinect to do computer vision software. I have my program set up so that it filters out everything beyond a certain distance, and once something enters close enough and is large enough to be a hand, my program assumes it is one.

However, I want to extend this functionality. Currently, if the hand leaves the depth-filtered region, the software no longer tracks its position. How can I follow the hand after I've recognized it, regardless of the depth?
3 Answers
I have not worked with the Kinect controller, but I once played with a laser scanner that returned ranges, though only in a horizontal plane. The technique we used could be applied to the Kinect too.

When we found an object we wanted to identify, we calculated the center point of the object [X,Y] (which would be [X,Y,Z] for the Kinect). For the next "frame" we looked for all points within a given radius r of [X,Y]; from those points we calculated a new center [X,Y] that we used for the next "frame", and so on. We used the maximum possible object velocity and the framerate to calculate the smallest possible r that made sure the object could not escape our tracking between two measurement frames.
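The steps above can be sketched in Python. This is a minimal sketch, not a complete tracker; `points`, `max_speed`, and `fps` are hypothetical inputs you would obtain from your own capture pipeline:

```python
import numpy as np

def min_radius(max_speed, fps):
    # The object can move at most max_speed / fps between two frames,
    # so the search radius must be at least that distance.
    return max_speed / fps

def track_step(points, center, r):
    # points: (N, 3) array of [X, Y, Z] samples from the current frame.
    # Keep only the points within radius r of the previous center and
    # return their mean as the new center.
    d = np.linalg.norm(points - center, axis=1)
    nearby = points[d <= r]
    if len(nearby) == 0:
        return center  # object lost this frame; keep the old center
    return nearby.mean(axis=0)
```

Each frame you would call `track_step(frame_points, center, min_radius(max_speed, fps))` and carry the returned center forward, independent of any depth filter.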
You can have a look at Mean Shift Tracking:
http://www.comp.nus.edu.sg/~cs4243/lecture/meanshift.pdf
It then becomes possible to track the blob even as it gets smaller or bigger (moves further away or closer).
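As a self-contained illustration of the mean-shift idea, here is a minimal sketch operating directly on foreground pixel coordinates (not the kernel-weighted version from the slides; all names are illustrative):

```python
import numpy as np

def mean_shift(points, center, bandwidth, max_iter=50, tol=1e-3):
    # points: (N, 2) array of [x, y] coordinates of foreground pixels.
    # Repeatedly move the window center to the mean of the points inside
    # the window until the shift is negligible.
    center = np.asarray(center, dtype=float)
    for _ in range(max_iter):
        d = np.linalg.norm(points - center, axis=1)
        inside = points[d <= bandwidth]
        if len(inside) == 0:
            break  # nothing under the window; give up this frame
        new_center = inside.mean(axis=0)
        if np.linalg.norm(new_center - center) < tol:
            return new_center
        center = new_center
    return center
```

Because the window converges toward the densest region of foreground pixels, the estimate stays on the blob even as its apparent size changes, as long as the blob remains within the bandwidth of the previous center.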
I have not worked with the Kinect controller, but you can try the fast template matching algorithm implemented in:
https://github.com/dajuric/accord-net-extensions
Just use your depth image instead of the standard grayscale image. Samples are included.
P.S.
This library also provides other tracking algorithms such as Kalman filtering, particle filtering, JPDAF, Camshift, Mean-shift (samples included).
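The linked library targets .NET; purely to illustrate the underlying idea, here is a brute-force sum-of-squared-differences template match in Python (a sketch under the assumption that `depth` is a 2-D float array and `template` is a smaller patch cut from a previous frame, not the library's fast implementation):

```python
import numpy as np

def match_template(depth, template):
    # Slide the template over every position in the depth image and
    # return the top-left corner with the smallest sum of squared
    # differences (i.e. the best match).
    th, tw = template.shape
    h, w = depth.shape
    best, best_pos = None, (0, 0)
    for y in range(h - th + 1):
        for x in range(w - tw + 1):
            patch = depth[y:y + th, x:x + tw]
            ssd = np.sum((patch - template) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos
```

In practice you would crop the hand region from the frame where it was first detected, then match that patch against each new depth frame to re-locate the hand at any distance.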