Raycasting voxels and OpenGL
I'm currently looking into raycasting and voxels, which is a nice combination. A Voxelrenderer by Sebastian Scholz implements this pretty nicely, but also uses OpenGL. I'm wondering how his formula works; how can you use OpenGL with raycasting and voxels? Isn't the idea of raycasting that a ray is cast for every pixel (or line, as in Doom) and then the result is drawn?
2 Answers
The mentioned raycaster is a Voxelrenderer, i.e. a method to visualize volumetric data, like opacities stored in a 3D texture. Doom's raycasting algorithm has another intention: for every pixel on the screen, find the first planar surface of the map and draw its color there. The rasterizing capabilities of modern GPUs have made this use of raycasters obsolete.
Visualizing volumetric data in real time is still a task done by special hardware, typically found in medical and geodetic imaging systems. Basically, those are huge banks of RAM (several dozen GB) holding volumetric RGBA data. Then, for every on-screen pixel, a ray is cast through the volume and the RGBA data is integrated over that ray. A GPU Voxelrenderer does the same thing with a fragment shader; pseudocode:
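The pseudocode block itself seems to have been lost from the page. As a stand-in, here is a minimal GLSL sketch of the loop the answer describes, assuming the volume is bound as a 3D texture and the ray is passed in texture coordinates via uniforms; the names (volume, rayEntry, rayStep, numSteps) and the fixed-step march are illustrative, not the answerer's original code.

    #version 330 core

    // Illustrative sketch: integrate RGBA volume data along a ray.
    uniform sampler3D volume;   // volumetric RGBA data in a 3D texture
    uniform vec3 rayEntry;      // where the ray enters the volume (texture coords)
    uniform vec3 rayStep;       // normalized ray direction scaled by the step size
    uniform int  numSteps;      // number of samples taken along the ray

    out vec4 fragColor;

    // combine(): merges the running result with the next voxel along the ray.
    // Shown as front-to-back alpha blending (the "cloud" case below); for an
    // X-ray style density integral it would be a plain sum instead.
    vec4 combine(vec4 accum, vec4 voxel) {
        accum.rgb += (1.0 - accum.a) * voxel.a * voxel.rgb;
        accum.a   += (1.0 - accum.a) * voxel.a;
        return accum;
    }

    // finalize(): post-processing of the integrated result; a no-op for
    // blending, a normalization for a summed density.
    vec4 finalize(vec4 accum) {
        return accum;
    }

    void main() {
        vec4 accum = vec4(0.0);
        vec3 pos = rayEntry;
        for (int i = 0; i < numSteps; ++i) {
            accum = combine(accum, texture(volume, pos));
            pos += rayStep;
        }
        fragColor = finalize(accum);
    }

With the front-to-back blending shown, the loop could also exit early once accum.a approaches 1; early ray termination is a common optimization in volume rendering.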
finalize and combine depend on the kind of data and what you want to visualize. For example, if you want to integrate the density (like in an X-ray image), combine would be a summing operation and finalize a normalization. If you were to visualize a cloud, you'd alpha blend between voxels.
Raycasting in a voxel space wouldn't use pixels; that would be inefficient.
You already have an array saying which spaces are empty and which ones hold a voxel cube.
So a fast version traces a line, checking the emptiness of every voxel along the line's direction until it reaches a full voxel.
That would take a few hundred read ops from memory and 2-3 multiplications of the ray vector for every read op.
Reading a billion voxel memory positions takes about 1 second, so a few hundred is very fast and always well within a frame.
Raycasting often uses optimizations to detect the fractional places in space where a surface given by a maths formula starts, or where a mesh is found by first testing its bounding box and then the mesh itself; in voxels it is just progressive checks along a line in an integer array until you find a non-void cell.
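A rough sketch of that kind of march, kept in GLSL for consistency with the first answer; it assumes the emptiness array is bound as an integer 3D texture, and all of the names here are made up for illustration:

    #version 330 core

    // Illustrative sketch: step along a line through an occupancy grid
    // until a full voxel is found. 0 = empty, non-zero = solid.
    uniform isampler3D occupancy;
    uniform vec3 rayOrigin;  // start position, in voxel units
    uniform vec3 rayDir;     // normalized ray direction
    uniform int  maxSteps;   // upper bound on the march length

    out vec4 fragColor;

    void main() {
        ivec3 gridSize = textureSize(occupancy, 0);
        for (int i = 0; i < maxSteps; ++i) {
            // One multiply-add of the ray vector and one memory read per step.
            vec3  pos  = rayOrigin + float(i) * rayDir;
            ivec3 cell = ivec3(floor(pos));
            // Stop when the line leaves the grid.
            if (any(lessThan(cell, ivec3(0))) ||
                any(greaterThanEqual(cell, gridSize)))
                break;
            if (texelFetch(occupancy, cell, 0).r != 0) {
                // Hit a full voxel: shade by traveled distance, for illustration.
                fragColor = vec4(vec3(1.0 - float(i) / float(maxSteps)), 1.0);
                return;
            }
        }
        fragColor = vec4(0.0); // nothing hit along the ray
    }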