Blending multiple textures in GLSL

Published 2024-11-18 13:02:53

This is long but I promise it's interesting. :)

I'm trying to mimic the appearance of another application's texturing using jMonkeyEngine. I have a list of vertices, and faces (triangles) making up a "landscape mesh" which should be textured with about 7-15 different textures (depending on the terrain of the "landscape"). Each triangle has a texture code associated with it, signifying which texture that particular triangle should mostly consist of. And of course, the textures should blend smoothly between each face.

So I'm trying to develop a strategy that allows this (which does NOT utilize pre-made alpha map png files; texture alphas need to be done at run time). Right now I figure if I calculate the "strength" of each texture at each vertex (in the vertex shader)--by factoring in the terrain types of all its neighboring faces (unsure how to do this yet)--I should be able to set alpha values based on how far a pixel is from a vertex. The generated 'alpha map' would be used by the frag shader to blend each texture per pixel.
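For what it's worth, the "how far a pixel is from a vertex" part comes for free from the rasterizer: per-vertex strengths passed as varyings are interpolated barycentrically across each triangle. Here is a minimal CPU-side sketch of that interpolation (the triangle and strength values are made-up assumptions, just to show the mechanism):

```python
# Hypothetical sketch: how per-vertex texture "strengths" interpolate across
# a triangle. The GPU does this automatically for varyings; here we reproduce
# it with barycentric coordinates for a single point.

def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of 2D point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    d = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / d
    v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / d
    return u, v, 1.0 - u - v

# Assumed per-vertex strengths for one texture (e.g. "grass") at the corners.
strengths = (1.0, 0.0, 0.0)
tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))

u, v, w = barycentric((0.25, 0.25), *tri)
alpha = u * strengths[0] + v * strengths[1] + w * strengths[2]
print(alpha)  # 0.5 at this point: halfway faded toward the other textures
```

So once the per-vertex strengths exist, the fragment shader only has to read the interpolated values and blend.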

Is this even feasible, or should I be looking at a totally different strategy? I have the shader code for the application I'm trying to mimic (but they are HLSL and I'm using GLSL), but it seems like they're doing this blending step elsewhere:

    sampler MeshTextureSampler = sampler_state { Texture = diffuse_texture; AddressU = WRAP; AddressV = WRAP; MinFilter = LINEAR; MagFilter = LINEAR; }; 

I'm not sure what this HLSL "MeshTextureSampler" is but it seems like this application may have pre-blended all the textures as needed, and created a single texture for the entire mesh based on the face/terrain code data. In the pixel/fragment shader all they really seem to do is this:

float4 tex_col = tex2D(MeshTextureSampler, In.Tex0);

After that it's just shadows, lighting, etc -- no sort of texture blending at all as far as I can tell, which leads me to believe this texture blending work is being done on the CPU beforehand, I suppose. Any suggestions welcome.


Comments (1)

软甜啾 2024-11-25 13:02:53

If I understand you correctly, here is what my first shot would be:

Your problem is, more or less, how to distribute your per-face values over vertices. This is actually similar to normal generation on a mesh: first you generate a normal for each triangle, then accumulate them per vertex. Google "normal generation" and you'll get there, but here's the gist. For each triangle adjacent to a vertex, find a weighting factor (often the angle of the corner that uses the vertex, or the surface area of the triangle, or a combination), then sum up the values (be they normals or your "strengths") multiplied by their weighting factors into a total result. Normalize and you're done.

So then you have your texture "strengths" that you can send to your vertex shader. The modern solution would be to pack the strengths into char (byte) vertex attributes and sample a texture array in the pixel shader, after you've fudged the blend values a bit to give you nicer transitions.

So, if I get your problem correctly:

Preprocess:

foreach vertex in mesh
  vertexvalue = 0
  normalization = 0
  foreach adjacent triangle of vertex
      angle = calculateAngleBetween3Vertices(vertex,triangle.someothervertex,triangle.theotherothervertex)
      vertexvalue += triangle.value * angle
      normalization += angle
  vertexvalue/=normalization
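The pseudocode above can be made runnable. A sketch in Python rather than your engine's Java, with an assumed mesh layout (a vertex list plus `(i0, i1, i2, value)` triangles, angle weighting):

```python
import math

# Runnable sketch of the preprocessing pseudocode. The mesh data layout
# (vertex list + triangles carrying a per-face terrain "value") is an
# assumption, not your actual jMonkeyEngine structures.

def angle_at(vertex, other1, other2):
    """Angle (radians) at `vertex` in the triangle (vertex, other1, other2)."""
    ax, ay, az = (other1[i] - vertex[i] for i in range(3))
    bx, by, bz = (other2[i] - vertex[i] for i in range(3))
    dot = ax * bx + ay * by + az * bz
    la = math.sqrt(ax * ax + ay * ay + az * az)
    lb = math.sqrt(bx * bx + by * by + bz * bz)
    return math.acos(max(-1.0, min(1.0, dot / (la * lb))))

def vertex_values(vertices, triangles):
    """triangles: list of (i0, i1, i2, value). Returns the angle-weighted
    per-vertex values, exactly as in the pseudocode above."""
    values = [0.0] * len(vertices)
    norms = [0.0] * len(vertices)
    for i0, i1, i2, value in triangles:
        # each corner of the triangle accumulates the face value,
        # weighted by the corner's angle
        for v, a, b in ((i0, i1, i2), (i1, i2, i0), (i2, i0, i1)):
            ang = angle_at(vertices[v], vertices[a], vertices[b])
            values[v] += value * ang
            norms[v] += ang
    return [val / n if n else 0.0 for val, n in zip(values, norms)]

# Two right triangles sharing an edge; face values 1.0 and 0.0, so the
# shared vertices end up halfway between the two terrains.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
tris = [(0, 1, 2, 1.0), (1, 3, 2, 0.0)]
print(vertex_values(verts, tris))  # [1.0, 0.5, 0.5, 0.0]
```

In practice you'd run this once per texture code, giving each vertex one strength per texture.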

Rendering time:

Pipe the value(s) of each vertex to the fragment shader, and do this in the fragment shader:

basecolour = 0;
foreach value    
   basecolour = mix(basecolour, texture2D(textureSamplerForThisValue,uv), value)
   //this is simple, but we could do better once we have this working
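Simulated on the CPU with stand-in scalar "colours" instead of real texture samples, that loop behaves like this (the weights and colours are assumed values):

```python
# The fragment-shader loop above, simulated on the CPU. mix(a, b, t) is
# GLSL's linear interpolation; colours here are stand-in floats.

def mix(a, b, t):
    return a * (1.0 - t) + b * t

# Interpolated per-vertex "strengths" at this pixel, and the colour each
# texture would contribute here (all assumed values).
weights = [0.8, 0.5, 0.2]
tex_cols = [1.0, 0.6, 0.2]

base = 0.0
for col, w in zip(tex_cols, weights):
    base = mix(base, col, w)
print(base)
```

One thing to note: iterated mix is order-dependent (later textures partially overwrite earlier ones). A normalized weighted sum (sum of colour × weight, divided by the sum of weights) would be order-independent, which is why the comment says we could do better once it's working.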

Or, alternatively, you can take a good look at your geometry. If you have a combination of big triangles and tiny ones, you will have an unequal spread of data, and since your data is per vertex, you will have more detail where there is more geometry. In that case, you will probably want to do what everyone else is doing and decouple your texturing from your geometry by using blend maps. These can be low-resolution and shouldn't increase your memory consumption or shader execution time that much.
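One possible reading of the blend-map idea, sketched with made-up names: rasterize the per-texel terrain codes into one low-resolution weight map per texture, then blur so neighbouring terrains bleed into each other. The 3x3 box blur is an illustrative choice, not anyone's actual method:

```python
# Hypothetical blend-map construction: a small grid of terrain codes becomes
# one weight map per texture (1.0 where that code wins, box-blurred so
# adjacent terrains blend). Because the hard maps partition the grid and the
# blur is linear, the weights still sum to 1 at every texel.

def blend_maps(codes, num_textures):
    h, w = len(codes), len(codes[0])
    maps = []
    for t in range(num_textures):
        hard = [[1.0 if codes[y][x] == t else 0.0 for x in range(w)]
                for y in range(h)]
        soft = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                # clamped 3x3 box blur
                samples = [hard[cy][cx]
                           for cy in range(max(0, y - 1), min(h, y + 2))
                           for cx in range(max(0, x - 1), min(w, x + 2))]
                soft[y][x] = sum(samples) / len(samples)
        maps.append(soft)
    return maps

# Tiny example: terrain code 0 ("grass") in one corner, 1 ("rock") elsewhere.
grass, rock = blend_maps([[0, 0, 1], [0, 1, 1], [1, 1, 1]], 2)
```

The fragment shader would then sample these maps at the terrain-space UV and blend, independently of how coarse or fine the underlying triangles are.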
