OpenGL ES 2.0: visual artifacts between 3D tiles
I'm having a terrible time figuring out a way to better handle the seams between 3D tile objects in my game engine. You only see them when the camera is tilted down at a steep enough angle. I do not believe it is a texture problem or a texture-rendering problem (but I could be wrong).
Below are two screenshots: the first demonstrates the problem, while the second shows the UV unwrap I'm using for the tiles in Blender. I'm leaving overlap room in the UVs, so that if the texture overdraws at smaller mip levels I should still be fine. I am loading textures with the following texture parameters:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
It appears to me that the sides of the 3D tiles are being slightly drawn, and the artifact is especially noticeable because of the directional lighting being applied from this angle.
Are there any tricks or things I can check to eliminate this effect? I am rendering in "layers", sorted within each layer by camera distance (furthest first). All of these objects are in the same layer. Any ideas would be greatly appreciated!
If useful, this is an iPhone/iPad project using OpenGL ES 2.0. I'm happy to provide any code snippets - just let me know what might be a good place to start.
UPDATE WITH VERTEX/PIXEL SHADER & MODEL VERTICES
Presently, I am using PowerVR's POD format to store model data exported from Blender (via Collada, then PowerVR's Collada2POD converter). Here are the GL_SHORT vertex coordinates (model space):
64 -64 32
64 64 32
-64 64 32
-64 -64 32
64 -64 -32
-64 -64 -32
-64 64 -32
64 64 -32
64 -64 32
64 -64 -32
64 64 -32
64 64 32
64 64 32
64 64 -32
-64 64 -32
-64 64 32
-64 64 32
-64 64 -32
-64 -64 -32
-64 -64 32
64 -64 -32
64 -64 32
-64 -64 32
-64 -64 -32
So everything should be perfectly flush, I would expect. Here's the vertex shader:
attribute highp vec3 inVertex;
attribute highp vec3 inNormal;
attribute highp vec2 inTexCoord;
uniform highp mat4 ProjectionMatrix;
uniform highp mat4 ModelviewMatrix;
uniform highp mat3 ModelviewITMatrix;
uniform highp vec3 LightColor;
uniform highp vec3 LightPosition1;
uniform highp float LightStrength1;
uniform highp float LightStrength2;
uniform highp vec3 LightPosition2;
uniform highp float Shininess;
varying mediump vec2 TexCoord;
varying lowp vec3 DiffuseLight;
varying lowp vec3 SpecularLight;
void main()
{
// transform normal to eye space
highp vec3 normal = normalize(ModelviewITMatrix * inNormal);
// transform vertex position to eye space
highp vec3 ecPosition = vec3(ModelviewMatrix * vec4(inVertex, 1.0));
// initialize light intensity varyings
DiffuseLight = vec3(0.0);
SpecularLight = vec3(0.0);
// Run the two point lights
PointLight(true, normal, LightPosition1, ecPosition, LightStrength1);
PointLight(true, normal, LightPosition2, ecPosition, LightStrength2);
// Transform position
gl_Position = ProjectionMatrix * ModelviewMatrix * vec4(inVertex, 1.0);
// Pass through texcoords and filter
TexCoord = inTexCoord;
}
I do not know how your boxes are drawn, but I believe this is the issue. When computing the vertices for each box, I guess you do something like adding each box's own offset to its corner positions. This will fail because you lose precision when adding, so corners that should share the same position actually do not. This is what is creating the gaps. Instead, you should compute the shared corners from a single expression, which ensures that coinciding corners use the exact same coordinates.
EDIT
It is still a precision problem. From what I understand, you are doing a draw call per block, where the only thing that changes per block is the ModelviewMatrix. This means that you are expecting that this line:
position = ProjectionMatrix * ModelviewMatrix * vec4(inVertex, 1.0);
will give the same value for two different combinations of inVertex and ModelviewMatrix, which is wrong.
To solve this you can do "fake" instancing (since ES 2.0 does not support instancing): save the per-instance values in uniforms and compute the per-vertex values from an index supplied in an attribute.
Edit 2:
OK, please do not think about "fake" instancing for now. First, just ensure that the coordinates are the same: instead of providing coordinates for only one block, provide them for all the blocks, and use only one ModelviewMatrix. That will probably be faster as well.
try these