OpenGL translucent solid without refraction?
I want to render a translucent solid object but I don't want to involve myself in refraction. I want this (hopefully) rather simple effect: the thicker the object, the more opaque it gets and the more obscured the objects behind it get; but (again) I don't want to involve myself in refraction or any complex light-matter interactions for that matter.
Perhaps I'm missing something, but I can't find any good sources that discuss simple, non-opaque solids (solid = filled geometry) in contrast to totally opaque meshes (hollow objects with opaque surfaces) or hollow geometry with transparent surfaces.
Comments (1)
OpenGL is a forward renderer that restricts the objects it can rasterise to points, lines and polygons. The starting point is that all 3D shapes are built from those two-or-fewer dimensional primitives. OpenGL in itself has no concept of solid, filled 3D geometry, and therefore no built-in notion of how far through an object a particular fragment conceptually runs, only of how many times a ray enters or exits it.
Since it became possible to write shader programs, a variety of ways around the problem have opened up, the most obvious for your purpose being ray casting. You could upload a cube as geometry, set to render back faces rather than front faces, and upload your actual object in voxel form as a 3D texture. In your shader, for each pixel you start at a position in the 3D texture, take a vector towards the camera and walk forward, resampling at suitable intervals.
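A minimal sketch of what that fragment shader might look like, assuming the back-face position is interpolated in 3D-texture coordinates and the camera position has been transformed into the same space; the uniform names, the fixed step count and the exponential thickness-to-opacity mapping are all illustrative choices, not part of the original answer:

    #version 330 core

    // vTexPos     - position on the cube's back face, in 3D-texture coordinates (illustrative name)
    // uVolume     - the voxel object as a 3D texture, density in the red channel
    // uCameraPos  - camera position transformed into the same texture space
    // uStepSize   - sampling interval along the ray
    // uDensity    - how quickly thickness turns into opacity
    in vec3 vTexPos;
    out vec4 fragColour;

    uniform sampler3D uVolume;
    uniform vec3  uCameraPos;
    uniform float uStepSize;
    uniform float uDensity;

    void main()
    {
        vec3 dir = normalize(uCameraPos - vTexPos);  // walk from the back face towards the camera
        vec3 pos = vTexPos;
        float accumulated = 0.0;

        // March until the ray leaves the unit cube that bounds the 3D texture.
        for (int i = 0; i < 512; ++i) {
            accumulated += texture(uVolume, pos).r * uStepSize;
            pos += dir * uStepSize;
            if (any(lessThan(pos, vec3(0.0))) || any(greaterThan(pos, vec3(1.0))))
                break;
        }

        // More accumulated density means more opacity; no refraction involved.
        float alpha = 1.0 - exp(-uDensity * accumulated);
        fragColour = vec4(vec3(1.0), alpha);  // flat white here; tint as you like
    }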
A faster and easier-to-debug solution would be to build a BSP tree of your object for the purpose of breaking it into convex sections that can be drawn in back-to-front order. Prepare two depth buffers and a single pixel buffer. For clarity, call one depth buffer the back buffer and the other the front buffer.
You're going to step along the convex sections of the model from back to front, alternating between rendering to the back depth buffer with no colour output and rendering to the front depth buffer with colour output. You could get by with a single depth buffer in a software renderer, but OpenGL doesn't allow the target buffer to be read from, for various pipeline reasons.
For each convex section, first render back-facing polygons to the back buffer. Then render front-facing polygons to the front buffer and the colour buffer. Write a shader so that every pixel you output calculates its opacity as the difference between its depth and the depth stored at its location in the back buffer.
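A sketch of that front-face shader, assuming the back depth buffer from the previous pass is bound as a depth texture and that depths are linearised against the projection's near/far planes so the difference is proportional to actual thickness; uniform names and the exponential opacity mapping are illustrative:

    #version 330 core

    out vec4 fragColour;

    uniform sampler2D uBackDepth;   // depth-only back-face pass (illustrative name)
    uniform vec2  uViewport;        // viewport size in pixels
    uniform float uNear, uFar;      // projection near/far planes
    uniform float uDensity;         // opacity per unit of thickness
    uniform vec3  uColour;          // flat material colour

    // Convert a [0,1] window-space depth back to an eye-space distance.
    float linearise(float d)
    {
        float ndc = d * 2.0 - 1.0;
        return (2.0 * uNear * uFar) / (uFar + uNear - ndc * (uFar - uNear));
    }

    void main()
    {
        vec2 uv = gl_FragCoord.xy / uViewport;
        float backDepth  = linearise(texture(uBackDepth, uv).r);
        float frontDepth = linearise(gl_FragCoord.z);

        // Opacity grows with the distance the eye ray spends inside this convex section.
        float thickness = max(backDepth - frontDepth, 0.0);
        float alpha = 1.0 - exp(-uDensity * thickness);

        fragColour = vec4(uColour, alpha);  // composite back to front with standard alpha blending
    }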
If you're concerned about the camera intersecting with your model, you could also render front-facing polygons to the back buffer (immediately after rendering them to the front buffer and switching targets would be most convenient), then at the end draw a full-screen polygon at the front plane that outputs a suitable alpha where the value of the back buffer differs from that of the front buffer.
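One way to read that final pass, as a sketch: where the two depth buffers disagree, the section's front face was clipped by the near plane, so treat the near plane itself as the entry point and take the thickness from there to whatever is left in the back buffer. The uniform names and the linearise() helper match the sketch above and are equally illustrative:

    #version 330 core

    out vec4 fragColour;

    uniform sampler2D uBackDepth;
    uniform sampler2D uFrontDepth;
    uniform vec2  uViewport;
    uniform float uNear, uFar;
    uniform float uDensity;
    uniform vec3  uColour;

    float linearise(float d)
    {
        float ndc = d * 2.0 - 1.0;
        return (2.0 * uNear * uFar) / (uFar + uNear - ndc * (uFar - uNear));
    }

    void main()
    {
        vec2 uv = gl_FragCoord.xy / uViewport;
        float back  = texture(uBackDepth,  uv).r;
        float front = texture(uFrontDepth, uv).r;

        if (abs(back - front) < 1e-6)
            discard;  // buffers agree: the per-section passes already covered this pixel

        // Thickness runs from the near plane to the depth remaining in the back buffer.
        float thickness = max(linearise(back) - uNear, 0.0);
        float alpha = 1.0 - exp(-uDensity * thickness);
        fragColour = vec4(uColour, alpha);
    }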
Addition: if the source data is some sort of voxel data, as from a CT or MRI scanner, then an alternative to ray casting is to upload it as a 3D texture and draw a series of slices (perpendicular to the view plane if possible, along the current major axis otherwise). You can see some documentation and a demo at Nvidia's Developer Zone.
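The per-slice fragment shader for that approach can stay very small; a sketch, again with illustrative names, where each slice quad is textured with the volume at its depth and standard back-to-front alpha blending accumulates the opacity:

    #version 330 core

    in vec3 vVolumeCoord;           // 3D-texture coordinate interpolated across the slice
    out vec4 fragColour;

    uniform sampler3D uVolume;
    uniform float uSliceSpacing;    // distance between slices, so opacity tracks the sampling rate
    uniform float uDensity;
    uniform vec3  uColour;

    void main()
    {
        float density = texture(uVolume, vVolumeCoord).r;
        float alpha = 1.0 - exp(-uDensity * density * uSliceSpacing);
        fragColour = vec4(uColour, alpha);
    }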