Kinect object measurement
I am currently trying to figure out a way to calculate the size of a given object with the Kinect, since I have the following data:
angular field of view of the lens,
distance,
and width in pixels from an 800*600 resolution.
I believe this should be possible to calculate. Does anyone have the math skills to give me a little help?
With some trigonometry, it should be possible to approximate.
If you draw a right triangle ABC, with the camera at one vertex (A) and the object at the far end (edge BC), where the right angle is at (C), then the height of the object is the length of leg BC. The distance to the pixel might be the length of leg AC or of the hypotenuse AB; the Kinect sensor specifications determine which. If you get the distance to the center of a pixel, then it will be AC. If you have distances to pixel corners, then the distance will be AB.
With A representing the angle at the camera that the pixel takes up, d the length of the hypotenuse AB, and y the length of the far leg (edge BC):
sin(A) = y / d
y = d sin(A)
y is the length of the pixel projected into the object plane. You calculate it by multiplying the sine of the angle by the distance to the object.
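The relation above can be sketched directly; this is a minimal helper, with the angle and distance treated as given inputs (how you obtain them from the Kinect API is a separate question):

```python
import math

def pixel_extent(angle_rad, distance):
    """Length spanned in the object plane by a single pixel's viewing
    angle at the given distance, i.e. y = d * sin(A) from the triangle
    described above."""
    return distance * math.sin(angle_rad)

# Example: a pixel subtending 0.001 rad viewed from 2.0 m away
# spans roughly 2 mm in the object plane.
size = pixel_extent(0.001, 2.0)
```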
Here I confess I do not know the API of the Kinect, and what level of detail it provides. You say you have the angle of the field of view. You might assume each pixel of your 800x600 pixel grid takes up an equal angle of your camera's field of view. If you do, then you can break that field of view into equal pieces to measure the linear size of your object in each pixel.
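Under that equal-angle-per-pixel assumption, the angle an object subtends is just its pixel width times the field of view divided by the horizontal resolution. A sketch (the 57-degree horizontal field of view is a commonly quoted figure for the Kinect's camera, but treat it as an assumption and check your sensor's spec):

```python
import math

H_FOV_DEG = 57.0   # assumed horizontal field of view; verify against the sensor spec
H_RES = 800        # horizontal resolution from the question

def object_width(pixel_count, distance):
    """Approximate linear width of an object spanning `pixel_count`
    pixels at roughly constant `distance`, assuming every pixel
    subtends an equal slice of the field of view."""
    angle = math.radians(H_FOV_DEG) * pixel_count / H_RES
    return distance * math.sin(angle)
```

For objects covering many pixels this is only an approximation, since the flat sensor plane means pixels near the edges actually subtend slightly smaller angles than pixels near the center.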
You also mentioned that you have the distance to the object. I was assuming that you have a distance map for each pixel of the 800x600 grid. If that is incorrect, some calculations can be done to approximate a distance grid for the pixels covering the object of interest, if you make some assumptions about the object being measured.
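If you do have per-pixel depth readings, one simple way to reduce them to a single working distance for the object is to take the median over the object's pixels (a sketch; the list of depth samples is a hypothetical input you would extract from the depth frame yourself):

```python
def object_distance(depths):
    """Approximate a single distance for the object as the median of
    the depth readings over its pixels, which is robust to a few
    noisy samples. Zero readings (no depth data) are discarded."""
    valid = sorted(d for d in depths if d > 0)
    if not valid:
        raise ValueError("no valid depth samples")
    mid = len(valid) // 2
    if len(valid) % 2:
        return valid[mid]
    return (valid[mid - 1] + valid[mid]) / 2
```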