Body tracking on merged point clouds from multiple Azure Kinect devices
I'm using multiple Azure Kinect devices to create a merged point cloud with the PCL and Open3D libraries, because Azure Kinect doesn't support multi-device body-tracking fusion. I've read that some people compute the joints (position and orientation) from each individual Kinect and then fuse them in different ways, for example with a Kalman filter, but the most correct way to obtain good tracking would be to use the merged cloud and then track the bodies detected in it. However, I can't find any project or SDK for this, only scientific research papers.
Can anyone help me? Thank you very much.
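For context, the merging step itself reduces to transforming each device's cloud by its calibrated extrinsics and concatenating. Here is a minimal NumPy-only sketch; the 4×4 `T_world_cam` matrices are placeholders standing in for whatever your own calibration step (e.g. a charuco-board registration) produces, and the toy clouds are assumptions for illustration:

```python
# Hypothetical sketch: merge point clouds from two calibrated Azure Kinect
# devices into a single world-frame cloud. Extrinsics are placeholders;
# in practice they come from your own multi-device calibration.
import numpy as np

def transform_points(points: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to an (N, 3) point array."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ T.T)[:, :3]

def merge_clouds(clouds, extrinsics):
    """Bring every per-device cloud into the world frame and concatenate."""
    return np.vstack([transform_points(c, T) for c, T in zip(clouds, extrinsics)])

# Two toy "clouds" (placeholder data) and placeholder extrinsics.
cloud_a = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
cloud_b = np.array([[0.0, 0.0, 1.0]])
T_a = np.eye(4)                 # master device defines the world frame
T_b = np.eye(4)
T_b[:3, 3] = [0.5, 0.0, 0.0]    # subordinate device offset 0.5 m on x

merged = merge_clouds([cloud_a, cloud_b], [T_a, T_b])
print(merged.shape)  # (3, 3)
```

With real data you would load each device's cloud from the transformed depth image and, typically, refine the calibrated extrinsics with an ICP pass (e.g. Open3D's registration pipeline) before concatenating.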
Comments (1)
I think the reason you're unable to find any sort of library to use is that none exists! If you're able to fuse the point clouds successfully, you could try running the body tracking on the merged cloud to see if it improves results, or convert it into a mesh and use some form of mesh-based pose estimation.