Is it possible to create/use a V-disparity map from a time-of-flight sensor (instead of the usual RGB-D stereo vision approach)?
I am doing my master's thesis on floor-based obstacle detection with a Time of Flight (ToF) camera.
I found out there are a lot of applications that use V- and U-disparity maps to detect and track objects and the ground plane with a stereo vision approach.
They calculate the disparity from the two pictures taken and then build a histogram of the values, so in the V-disparity map the ground plane shows up as a slanted line and obstacles stand out from it.
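For reference, here is a minimal NumPy sketch of how I understand such a V-disparity map is built from a dense disparity image (the function name, the disparity range, and the integer dtype are just my placeholders):

    import numpy as np

    def v_disparity(disparity, max_disp=64):
        # disparity: (H, W) integer disparity image with values in [0, max_disp).
        # Returns an (H, max_disp) row-wise histogram; a flat ground plane
        # appears in it as a slanted line, obstacles as vertical segments.
        h, _ = disparity.shape
        vmap = np.zeros((h, max_disp), dtype=np.int32)
        for row in range(h):
            # Count how often each disparity value occurs in this image row.
            vals, counts = np.unique(disparity[row], return_counts=True)
            keep = (vals >= 0) & (vals < max_disp)
            vmap[row, vals[keep]] = counts[keep]
        return vmap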
So my question is: is it possible to generate the disparity map from the data of a time-of-flight camera? As far as I know, those cameras give me back a point cloud (x, y, z coordinates for each pixel) and an amplitude image of the scene.
The depth for a given disparity in stereo vision is calculated like this:
depth = (baseline * focal_length) / disparity
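For example, with made-up numbers just to illustrate the relationship:

    # Hypothetical stereo rig; all numbers invented for illustration.
    baseline = 0.12          # meters between the two cameras
    focal_length = 700.0     # focal length in pixels
    disparity = 42.0         # disparity of one pixel, in pixels
    depth = (baseline * focal_length) / disparity   # = 2.0 meters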
A ToF camera has a lens (objective) and therefore uses the pinhole camera model to compute the correct depth. So is there any possibility to obtain a disparity map with a ToF camera?
Thanks in advance!
1 Answer
TL;DR: No, you can't generate a disparity map from a time-of-flight camera.
I have not used many time-of-flight cameras, but the ones I have used gave me uint16 matrices. The matrices were X by Y, with each uint16 value corresponding to the distance from the camera in millimeters. This is not a point cloud; it is a depth map.
Since there is only one camera, there is no disparity and thus no disparity map, but I think you know that.
To create a disparity map from the depth map, I assume you could just make up some fake distance between the cameras (baseline) and rearrange your equation from there. So it would be disparity = (fake_baseline * focal_length) / depth. From here you could calculate your U and V disparity maps like usual.
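Something like this minimal sketch (fake_baseline and focal_length are invented numbers, and I have not tested this on real ToF data):

    import numpy as np

    def depth_to_fake_disparity(depth_mm, fake_baseline=100.0,
                                focal_length=500.0, max_disp=64):
        # depth_mm: (H, W) uint16 ToF depth map in millimeters.
        # Returns an integer "disparity" image for row-wise histogramming.
        depth = depth_mm.astype(np.float64)
        disp = np.zeros_like(depth)
        valid = depth > 0   # 0 usually marks invalid/missing ToF readings
        disp[valid] = (fake_baseline * focal_length) / depth[valid]
        return np.clip(disp, 0, max_disp - 1).astype(np.int32)

Since the baseline is arbitrary, only the shape of the slanted ground-plane line in the resulting V-disparity map would be meaningful, not its absolute scale.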
Since your baseline will be a made-up number, I have a hunch that this wouldn't be useful for ground plane detection or object avoidance. Thus, I think you would be better off just using the depth map from the time-of-flight camera as-is.
I have not tried this, nor have I ever used u-disparity maps, so take my answer with a grain of salt (I could be wrong).