Building a 3D model from depth image data in WebGL
I have an image that is a combination of the RGB and depth data from a Kinect camera.
I'd like to do two things, both in WebGL if possible:
- Create 3D model from the depth data.
- Project RGB image onto model as texture.
Which WebGL JavaScript engine should I look at? Are there any similar examples, using image data to construct a 3D model?
(First question asked!)
Found that it is easy with 3D tools in Photoshop (3D > New Mesh From Grayscale): http://www.flickr.com/photos/forresto/5508400121/
1 Answer
I am not aware of any WebGL framework that addresses your problem specifically. I think you could create a mesh from your depth data: start from a uniform rectangular grid and move each vertex backward or forward along the Z-axis depending on its depth value.
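A minimal sketch of that idea in plain JavaScript (no framework), assuming the depth image has been drawn to a 2D canvas and read back with getImageData; the function and parameter names here are illustrative, not from any particular library:

function buildDepthGrid(depthPixels, width, height, depthScale) {
  // depthPixels: Uint8ClampedArray from ctx.getImageData(...).data (RGBA per pixel)
  const positions = [];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const depth = depthPixels[(y * width + x) * 4]; // grayscale depth, red channel, 0..255
      positions.push(
        (x / (width - 1)) * 2 - 1,   // X spread across [-1, 1]
        1 - (y / (height - 1)) * 2,  // Y flipped so the top of the image points up
        (depth / 255) * depthScale   // Z pushed forward or back by the depth value
      );
    }
  }
  return new Float32Array(positions);
}

Neighbouring grid points would then be connected into triangles with an index buffer (two triangles per grid cell).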
Once you have this, you need to generate the texture-coordinate array, and from the image you posted on Flickr I would infer that there is a one-to-one mapping between the depth image and the texture. So generating that array should be straightforward: you just map the corresponding (s, t) coordinate on the texture to each vertex, so for every vertex you have two coordinates in the texture array. Then you bind it.
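As a sketch of that one-to-one mapping, again in plain WebGL: gl is assumed to be a WebGLRenderingContext, program an already linked shader program, and the attribute name aTexCoord is illustrative.

function buildTexCoords(width, height) {
  const texCoords = [];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      texCoords.push(x / (width - 1), 1 - y / (height - 1)); // one (s, t) pair per vertex, in [0, 1]
    }
  }
  return new Float32Array(texCoords);
}

// Bind the texture coordinates so the vertex shader can read them as an attribute:
const texCoordBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, texCoordBuffer);
gl.bufferData(gl.ARRAY_BUFFER, buildTexCoords(width, height), gl.STATIC_DRAW);
const aTexCoord = gl.getAttribLocation(program, "aTexCoord");
gl.enableVertexAttribArray(aTexCoord);
gl.vertexAttribPointer(aTexCoord, 2, gl.FLOAT, false, 0, 0);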
Finally you need to make sure that you are using the texture to color your image. This is a two-step process:
First step: pass the texture coordinates as an "attribute vec2" to the vertex shader and save them to a "varying vec2".
Second step: in the fragment shader, read the "varying vec2" that you created in step one and use it to generate "gl_FragColor".
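A minimal GLSL sketch of those two steps, written as WebGL 1 shader source strings; the attribute, varying and uniform names are only illustrative:

const vertexShaderSource = `
  attribute vec3 aPosition;
  attribute vec2 aTexCoord;           // step one: texture coordinates arrive as an attribute
  uniform mat4 uModelViewProjection;
  varying vec2 vTexCoord;
  void main() {
    vTexCoord = aTexCoord;            // ...and are saved to a varying for the fragment shader
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
  }
`;

const fragmentShaderSource = `
  precision mediump float;
  uniform sampler2D uTexture;         // the RGB image uploaded as a texture
  varying vec2 vTexCoord;
  void main() {
    gl_FragColor = texture2D(uTexture, vTexCoord); // step two: read the varying and use it for gl_FragColor
  }
`;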
I hope it helps.