GLSL normals with a non-standard projection matrix
After a few days of getting my GLSL vertex shader to display the vertices correctly, I've now moved onto lighting! My understanding of OpenGL lighting/normals isn't great by any stretch of the imagination, so bear with me. I'm unsure of what transformations I need to apply to my normals to get them to display correctly. Here is the application code that sets up my lights:
final float diffuseIntensity = 0.9f;
final float ambientIntensity = 0.5f;
// w == 0 makes this a directional light: (0, 0, 25000) is a direction, not a point.
final float position[] = { 0f, 0f, 25000f, 0f};
gl.glLightfv(GL.GL_LIGHT0, GL.GL_POSITION, position, 0);
final float diffuse[] = { 0, diffuseIntensity, 0, 1f};
gl.glLightfv(GL.GL_LIGHT0, GL.GL_DIFFUSE, diffuse, 0);
final float ambient[] = { ambientIntensity, ambientIntensity, ambientIntensity, 1f};
gl.glLightfv(GL.GL_LIGHT0, GL.GL_AMBIENT, ambient, 0);
Pretty standard stuff so far. Now because of the requirements of the application, here is the (somewhat odd) vertex shader:
void main()
{
    // P is the camera matrix; the model_X_matrices are the relative
    // translation/rotation etc. of the model currently being rendered.
    vec4 pos = gl_ProjectionMatrix * P * modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex;
    gl_Position = pos;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;
}
It's my understanding that I need to transform the gl_Normal into world coordinates. For my shader, I believe this would be:
vec4 normal = modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * vec4(gl_Normal, 1.0);
And then I would need to get the position of the light (which was already declared in the application code in world space). I think I would do this by:
vec3 light_position = gl_LightSource[0].position.xyz;
and then find the diffuse value of the light by taking the dot product of the normal and the light position.
Furthermore, I think in the fragment shader I just need to multiply the color by this diffuse value and it should all work (a sketch of the whole plan is below). I'm just really not sure how to transform the normal coordinates correctly. Is my assumption correct or am I totally off the ball?
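Sketched out in full, the plan would be something like this (only a sketch: the normal transform is just my guess from above, and whether it is correct is exactly what I'm asking):

// --- vertex shader ---
uniform mat4 P; // camera matrix
uniform mat4 modelTranslationMatrix, modelRotationMatrix, modelScaleMatrix;

varying float diffuse; // passed on to the fragment shader

void main()
{
    mat4 model = modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix;
    gl_Position = gl_ProjectionMatrix * P * model * gl_Vertex;

    // My guessed normal transform (translation included, possibly wrongly).
    vec3 N = normalize((model * vec4(gl_Normal, 1.0)).xyz);

    // The light was set with w == 0, so position.xyz is already a direction.
    vec3 L = normalize(gl_LightSource[0].position.xyz);

    // Diffuse factor, clamped so back-facing surfaces don't go negative.
    diffuse = max(dot(N, L), 0.0);

    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;
}

// --- fragment shader ---
varying float diffuse;

void main()
{
    // Multiply the interpolated vertex color by the diffuse factor.
    gl_FragColor = vec4(gl_Color.rgb * diffuse, gl_Color.a);
}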
EDIT: After reading that the normal matrix (gl_NormalMatrix) is the inverse transpose of the upper 3x3 of the gl_ModelViewMatrix, I'm guessing that the correct way to calculate the normal in world space is to multiply gl_Normal by the inverse transpose of modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix? Would I still need to multiply this by the P matrix, or is that irrelevant for normal calculations?
2 Answers
You should premultiply modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix and only pass that as a single uniform. Normals are multiplied by the inverse transpose of the modelview matrix; the details are explained in the excellent article found here: http://www.lighthouse3d.com/tutorials/glsl-tutorial/the-normal-matrix/
In GLSL, the code to do this would be something like the following (a sketch, assuming GLSL 1.40 or later, where inverse() is available):
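// Sketch: build the normal matrix in the vertex shader.
// mat3() takes the upper-left 3x3; inverse() needs GLSL >= 1.40.
mat3 normalMatrix = transpose(inverse(mat3(modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix)));
vec3 worldNormal = normalize(normalMatrix * gl_Normal);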
However, do not implement this by means of GLSL! Instead you should do the calculation in the host code and pass the matrix via a uniform. Matrix-matrix multiplication is O(n³) and thus fairly expensive; doing this calculation GPU-side would create a lot of computational load, in parallel, on each of the GPU's threads. We don't want that to happen. Instead, calculate it a single time on the CPU using linmath.h or GLM or Eigen or something equivalent.
Although I realise this is a year-old question, since I was just struggling with this myself I thought I'd share my answer.
A vector in OpenGL is represented synonymously to a point (i.e. one coordinate per axis). What that means is that its direction is defined by the translation from the origin to those coordinates.
When you translate a conceptual mathematical vector, you don't change its direction. When you translate an OpenGL vector without translating its origin, you do.
And that's what's wrong with translating the normal vectors by the modelview matrix (or whatever your custom matrix stack is): generally speaking, it will contain rotation and translation (and scaling in the question, but that's neither here nor there).
By applying the translation you change the direction of the normals. Long story short: the further away the vertices, the closer the normals become to being parallel to the camera-to-vertex vector.
Ergo, rather than something like
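// The question's transform: the translation component is included (wrong for normals).
vec4 normal = modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * vec4(gl_Normal, 1.0);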
you actually want something along the lines of
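// Translation dropped; rotation, scale and shear kept.
vec4 normal = modelRotationMatrix * modelScaleMatrix * vec4(gl_Normal, 1.0);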
Hence excluding the translation terms, but retaining any rotation, scale and shear.
As for using the camera matrix, that depends on what you want to do. The important thing is that all values in an equation are in the same coordinate space. That being said, multiplying by the camera projection will likely cause problems, since it's (probably) set up to project from 3D world coordinates relative to the camera into screen coordinates plus depth.
Generally you'd calculate lighting in world space: multiply by the model transform, but not by the camera projection.
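Putting both answers together, a world-space version of the question's vertex shader might look roughly like this (a sketch; the modelMatrix and normalMatrix uniform names are hypothetical, and normalMatrix is assumed to be computed on the CPU as the inverse transpose of the upper 3x3 of the model matrix):

uniform mat4 P;            // camera matrix from the question
uniform mat4 modelMatrix;  // premultiplied translation * rotation * scale
uniform mat3 normalMatrix; // inverse transpose of mat3(modelMatrix), built on the CPU

varying float diffuse;

void main()
{
    gl_Position = gl_ProjectionMatrix * P * modelMatrix * gl_Vertex;

    // World-space normal: rotation, scale and shear only, no translation.
    vec3 N = normalize(normalMatrix * gl_Normal);

    // The light was specified in world space with w == 0, i.e. as a direction.
    vec3 L = normalize(gl_LightSource[0].position.xyz);

    diffuse = max(dot(N, L), 0.0);

    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;
}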