OpenGL, applying a texture from an image to an isosurface

Posted 2024-09-30 04:25:14

I have a program in which I need to apply a 2-dimensional texture (simple image) to a surface generated using the marching-cubes algorithm. I have access to the geometry and can add texture coordinates with relative ease, but the best way to generate the coordinates is eluding me.

Each point in the volume represents a single unit of data, and each unit of data may have different properties. To simplify things, I'm looking at sorting them into "types" and assigning each type a texture (or portion of a single large texture atlas).

My problem is I have no idea how to generate the appropriate coordinates. I can store the location of the type's texture in the type class and use that, but then seams will be horribly stretched (if two neighboring points use different parts of the atlas). If possible, I'd like to blend the textures on seams, but I'm not sure the best manner to do that. Blending is optional, but I need to texture the vertices in some fashion. It's possible, but undesirable, to split the geometry into parts for each type, or to duplicate vertices for texturing purposes.
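The "store the location of the type's texture and use that" idea can be made concrete as a small helper. This is a minimal sketch assuming a square atlas of equally sized tiles; the function name `atlasUV`, the tile layout, and the `inset` parameter are my own illustration, not part of the original setup:

```cpp
#include <cassert>
#include <utility>

// Hypothetical sketch: a square atlas holding tilesPerSide x tilesPerSide
// equally sized type textures. Maps a type index plus a local (u,v) in [0,1]
// to atlas coordinates. An optional inset (in tile-relative units) shrinks
// the sampled region slightly to reduce bleeding from neighboring tiles
// when linear filtering is enabled.
std::pair<float, float> atlasUV(int type, float u, float v,
                                int tilesPerSide, float inset = 0.0f)
{
    const float tile = 1.0f / tilesPerSide;   // width of one tile in UV space
    const int   col  = type % tilesPerSide;
    const int   row  = type / tilesPerSide;
    // Remap the local UV into [inset, 1 - inset] before placing it in the tile.
    const float lu = inset + u * (1.0f - 2.0f * inset);
    const float lv = inset + v * (1.0f - 2.0f * inset);
    return { (col + lu) * tile, (row + lv) * tile };
}
```

Note this per-type lookup is exactly what produces the seam problem described above: two neighboring vertices of different types land in unrelated parts of the atlas, and interpolating between those coordinates sweeps across unrelated tiles.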

I'd like to avoid using shaders if possible, but if necessary I can use a vertex and/or fragment shader to do the texture blending. If I do use shaders, what would be the most efficient way of telling them which texture, or which portion of the atlas, to sample? It seems like passing the type through a parameter would be the simplest way, but possibly slow.

My volumes are relatively small, 8-16 points in each dimension (I'm keeping them smaller to speed up generation, but there are many on-screen at a given time). I briefly considered making the isosurface twice the resolution of the volume, so each point has more vertices (8, in theory), which may simplify texturing. It doesn't seem like that would make blending any easier, though.

To build the surfaces, I'm using the Visualization Library for OpenGL and its marching cubes and volume system. I have the geometry generated fine, just need to figure out how to texture it.

Is there a way to do this efficiently, and if so what? If not, does anyone have an idea of a better way to handle texturing a volume?

Edit: Just to note, the texture isn't simply a gradient of colors. It's actually a texture, usually with patterns. Hence the difficulty in mapping it, a gradient would've been trivial.

Edit 2: To help clarify the problem, I'm going to add some examples. They may just confuse things, so consider everything above definite fact and these just as help if they can.

My geometry is in cubes, always (loaded, generated and saved in cubes). If shape influences possible solutions, that's it.

I need to apply textures, consisting of patterns and/or colors (unique ones depending on the point's "type") to the geometry, in a technique similar to the splatting done for terrain (this isn't terrain, however, so I don't know if the same techniques could be used).

Shaders are a quick and easy solution, although I'd like to avoid them if possible, as I mentioned before. Something usable in a fixed-function pipeline is preferable, mostly for the minor increase in compatibility and development time. Since it's only a minor increase, I will go with shaders and multipass rendering if necessary.

Not sure if any other clarification is necessary, but I'll update the question as needed.

Answers (4)

北斗星光 2024-10-07 04:25:14

On the texture combination part of the question:

Have you looked into 3D textures? As we're talking marching cubes I should probably say immediately that I'm explicitly not talking about volumetric textures. Instead you stack all your 2D textures into a single 3D texture. You then encode each texture coordinate as the 2D position it would normally have, with the index of the texture it references as the third coordinate. It works best if your textures are generally of the type where, logically, to transition from one kind of pattern to another you have to pass through the intermediaries.
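The third-coordinate encoding can be sketched in one line; `layerToR` is an illustrative name I've chosen, assuming the stacked texture has `numLayers` 2D layers:

```cpp
#include <cassert>
#include <cmath>

// Sketch of the encoding described above: 2D type textures stacked into a
// 3D texture, with the r coordinate selecting (and blending between) layers.
// Sampling at the center of a layer gives that texture unmixed; r values in
// between blend the two adjacent layers, which is what produces the
// transition along a seam.
float layerToR(int layer, int numLayers)
{
    // +0.5 centers the coordinate on the layer, so GL_LINEAR filtering in r
    // does not pull in a neighboring layer at the "pure" value.
    return (layer + 0.5f) / numLayers;
}
```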

An obvious use example is texture mapping to a simple height map — you might have a snow texture on top, a rocky texture below that, a grassy texture below that and a water texture at the bottom. If a vertex that references the water is next to one that references the snow then it is acceptable for the geometry fill to transition through the rock and grass texture.

An alternative is to do it in multiple passes using additive blending. For each texture, draw every face that uses that texture and draw a fade to transparent extending across any faces that switch from one texture to another.

You'll probably want to prep the depth buffer with a complete draw (with the colour masks all set to reject changes to the colour buffer) then switch to a GL_EQUAL depth test and draw again with writing to the depth buffer disabled. Drawing exactly the same geometry through exactly the same transformation should produce exactly the same depth values irrespective of issues of accuracy and precision. Use glPolygonOffset if you have issues.

On the coordinates part:

Popular and easy mappings are cylindrical, box and spherical. Conceptualise that your shape is bounded by a cylinder, box or sphere with a well defined mapping from surface points to texture locations. Then for each vertex in your shape, start at it and follow the normal out until you strike the bounding geometry. Then grab the texture location that would be at that position on the bounding geometry.
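The spherical variant can be sketched as a ray/sphere intersection followed by a longitude/latitude conversion. This is a minimal illustration assuming a bounding sphere centered at the origin, a normalized vertex normal, and a vertex inside the sphere; the names are mine:

```cpp
#include <cassert>
#include <cmath>

struct UV { float u, v; };

// Follow the vertex normal out to a bounding sphere of radius R centered at
// the origin, then convert the hit point to longitude/latitude texture
// coordinates in [0,1].
UV sphereMapUV(const float p[3], const float n[3], float R)
{
    // Solve |p + t*n| = R for the positive t (ray/sphere intersection);
    // since n is unit length, the quadratic's leading coefficient is 1.
    const float b = p[0]*n[0] + p[1]*n[1] + p[2]*n[2];
    const float c = p[0]*p[0] + p[1]*p[1] + p[2]*p[2] - R * R;
    const float t = -b + std::sqrt(b * b - c);
    const float hx = p[0] + t * n[0];
    const float hy = p[1] + t * n[1];
    const float hz = p[2] + t * n[2];
    // Longitude/latitude of the hit point, remapped to [0,1].
    const float pi = 3.14159265358979f;
    const float u = 0.5f + std::atan2(hz, hx) / (2.0f * pi);
    const float v = 0.5f - std::asin(hy / R) / pi;
    return { u, v };
}
```

The box and cylinder cases work the same way, just with a different intersection test and surface parametrization.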

I guess there's a potential problem that normals tend not to be brilliant after marching cubes, but I'll wager you know more about that problem than I do.

晨光如昨 2024-10-07 04:25:14

This is a hard and interesting problem.

The simplest way is to avoid the issue completely by using 3D texture maps, especially if you just want to add some random surface detail to your isosurface geometry. Perlin noise based procedural textures implemented in a shader work very well for this.
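To make the procedural-noise idea concrete, here is a CPU sketch of 3D value noise (a simpler cousin of Perlin noise, to keep it short); in practice you'd evaluate something like this in a fragment shader. The hash and names are my own illustration:

```cpp
#include <cassert>
#include <cmath>

// A cheap integer hash giving a pseudo-random value in [0,1) at each
// lattice point. Any decent hash works here; this one is illustrative.
float hash3(int x, int y, int z)
{
    unsigned h = 1103515245u * x + 12345u;
    h = 1103515245u * (h ^ y) + 12345u;
    h = 1103515245u * (h ^ z) + 12345u;
    return (h & 0xFFFFFFu) / float(0x1000000);
}

// 3D value noise: smooth interpolation of the eight surrounding lattice
// values. Because it is defined at every 3D point, it can texture the
// isosurface directly from vertex positions, with no 2D parametrization
// (and hence no seams) at all.
float valueNoise(float x, float y, float z)
{
    const int xi = int(std::floor(x)), yi = int(std::floor(y)), zi = int(std::floor(z));
    const float fx = x - xi, fy = y - yi, fz = z - zi;
    // Smoothstep fade curves, as in Perlin's original formulation.
    const float sx = fx * fx * (3 - 2 * fx);
    const float sy = fy * fy * (3 - 2 * fy);
    const float sz = fz * fz * (3 - 2 * fz);
    float r = 0;
    for (int dz = 0; dz <= 1; ++dz)
        for (int dy = 0; dy <= 1; ++dy)
            for (int dx = 0; dx <= 1; ++dx)
                r += hash3(xi + dx, yi + dy, zi + dz)
                   * (dx ? sx : 1 - sx) * (dy ? sy : 1 - sy) * (dz ? sz : 1 - sz);
    return r;   // in [0,1)
}
```

Summing a few octaves of this at different frequencies gives the familiar fractal surface detail.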

The difficult way is to look into various algorithms for conformal texture mapping (also known as conformal surface parametrization), which aim to produce a mapping between 2D texture space and the surface of the 3D geometry which is in some sense optimal (least distorting). This paper has some good pictures. Be aware that the topology of the geometry is very important; it's easy to generate a conformal mapping to map a texture onto a closed surface like a brain, considerably more complex for higher genus objects where it's necessary to introduce cuts/tears/joins.

总攻大人 2024-10-07 04:25:14

You might want to try making a UV Map of a mesh in a tool like Blender to see how they do it. If I understand your problem, you have a 3D field which defines a solid volume as well as a (continuous) color. You've created a mesh from the volume, and now you need to UV-map the mesh to a 2D texture with texels extracted from the continuous color space. In a tool you would define "seams" in the 3D mesh which you could cut apart so that the whole mesh could be laid flat to make a UV map. There may be aliasing in your texture at the seams, so when you render the mesh it will also be discontinuous at those seams (ie a triangle strip can't cross over the seam because it's a discontinuity in the texture).

I don't know any formal methods for flattening the mesh, but you could imagine cutting it along the seams and then treating the whole thing as a spring/constraint system that you drop onto a flat surface. I'm all about solving things the hard way. ;-)
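A toy version of that spring idea: 2D points connected by springs whose rest lengths are the original 3D edge lengths, relaxed iteratively until the cut-open mesh lies flat. This is purely illustrative; real parametrization tools use far more robust solvers:

```cpp
#include <cassert>
#include <cmath>

struct P2 { float x, y; };

// One relaxation step for a single spring: nudge both endpoints toward
// (or away from) each other until their 2D distance matches the rest
// length, scaled by a stiffness factor k. Iterating over all edges of the
// cut-open mesh gradually flattens it.
void relaxSpring(P2& a, P2& b, float restLen, float k = 0.5f)
{
    const float dx = b.x - a.x, dy = b.y - a.y;
    const float len = std::sqrt(dx * dx + dy * dy);
    if (len < 1e-8f) return;   // coincident points: no defined direction
    // Split the correction evenly between the two endpoints.
    const float s = k * 0.5f * (len - restLen) / len;
    a.x += dx * s;  a.y += dy * s;
    b.x -= dx * s;  b.y -= dy * s;
}
```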

残月升风 2024-10-07 04:25:14

Due to the issues with texturing and some of the constraints I have, I've chosen to write a different algorithm to build the geometry and handle texturing directly in that as it produces surfaces. It's somewhat less smooth than the marching cubes, but allows me to apply the texcoords in a way that works for my project (and is a bit faster).

For anyone interested in texturing marching cubes, or just blending textures, Tommy's answer is a very interesting technique and the links timday posted are excellent resources on flattening meshes for texturing. Thanks to both of them for their answers, hopefully they can be of use to others. :)
