Matching the virtual camera to the real camera
I am trying to get a simple augmented reality app going.
I have a rig set up that lets me detect the rotation and position of a physical webcam. The webcam video is pumped into a small 3D XNA app, and in the app the virtual camera is positioned and rotated to match the real camera.
When I overlay the 3D graphics, the perspective lines don't quite line up, and when I move the physical camera the 3D graphics don't track quite right.
What parameters are involved in getting the 3D graphics to line up with the real-world imagery?
1 Answer
OpenCV includes tools, and a good theoretical discussion, on camera calibration models and the approaches that can be used to correct for lens distortion. An inverse of that model could be used to distort the generated 3D mesh so it matches the camera image.
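As a rough illustration (not part of the answer above), here is a minimal Python/OpenCV sketch of the kind of calibration the answer refers to: it recovers the intrinsic parameters the virtual camera needs to match, namely the focal lengths, principal point, and lens distortion coefficients, and then derives a vertical field of view to feed the 3D engine's perspective matrix. The chessboard size, square size, and image paths are placeholder assumptions.

```python
# A minimal calibration sketch using OpenCV's chessboard tools.
# Assumes a folder of chessboard photos taken with the same webcam;
# BOARD_SIZE, SQUARE_SIZE, and the glob pattern are assumptions.
import glob
import math

import cv2
import numpy as np

BOARD_SIZE = (9, 6)    # inner corners per chessboard row/column (assumed)
SQUARE_SIZE = 0.025    # square edge length in metres (assumed)

# 3D coordinates of the chessboard corners in the board's own frame.
object_corners = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
object_corners[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2)
object_corners *= SQUARE_SIZE

object_points, image_points = [], []
image_size = None

for path in glob.glob("calibration/*.png"):   # placeholder path
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]             # (width, height)
    found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)
    if found:
        object_points.append(object_corners)
        image_points.append(corners)

# camera_matrix holds fx, fy (focal lengths in pixels) and cx, cy
# (principal point); dist_coeffs holds the radial/tangential distortion.
rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, image_size, None, None)

# The virtual camera's vertical field of view follows from fy and the
# image height; use this instead of a guessed value in the 3D app.
fy = camera_matrix[1, 1]
height = image_size[1]
fov_y = 2.0 * math.atan(height / (2.0 * fy))
print("reprojection error:", rms)
print("vertical FOV (degrees):", math.degrees(fov_y))
```

Beyond the field of view, the aspect ratio and principal point matter too: if the principal point is noticeably off-centre, a symmetric perspective matrix will never line up exactly and an off-axis (asymmetric frustum) projection built directly from the intrinsics is needed, with the distortion coefficients handled separately as the answer suggests.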