GLSL/GL 2.1 Lighting: Transforming to Eye Space
So, it helps to transform everything to eye space before doing lighting calculations? I'm having trouble with the transforming part. I've got the normals transformed right, but when I apply translations (when the object is not in the center of the world coordinate system), the lighting remains exactly the same.
I have confirmed that there are no problems with any C++ code.
I will paste my shaders...
QUESTION: I would like to know what I'm not transforming right, and how I'm supposed to transform it.
vertex shader...

    const int MAXLIGHTS = 4;

    uniform int lightcount;
    uniform vec4 lPositions[MAXLIGHTS];

    // V = transformed vertex
    // N = transformed normal
    // E = eye vector
    // L = vector from vertex to light
    varying vec3 V, N, E, L[MAXLIGHTS];

    void main()
    {
        int lcount = lightcount > MAXLIGHTS ? MAXLIGHTS : lightcount;

        V = vec3(gl_ModelViewMatrix * gl_Vertex);
        N = gl_NormalMatrix * gl_Normal;
        E = normalize(-V);

        for(int i = 0; i < lcount; i++)
        {
            L[i] = gl_NormalMatrix * normalize(vec3(lPositions[i] - gl_Vertex));
        }

        gl_FrontColor = gl_Color;
        gl_Position = ftransform();
    }
fragment shader...

    const int MAXLIGHTS = 4;

    uniform int lightcount;
    uniform vec4 lDiffuses[MAXLIGHTS];
    uniform vec4 lAmbients[MAXLIGHTS];
    uniform bool justcolor;

    varying vec3 V, N, E, L[MAXLIGHTS];

    void main()
    {
        if(justcolor)
        {
            gl_FragColor = gl_Color;
            return;
        }

        int lcount = lightcount > MAXLIGHTS ? MAXLIGHTS : lightcount;

        vec4 ambient;
        vec4 diffuse;
        vec4 specular = vec4(0.0, 0.0, 0.0, 0.0);
        vec4 color = vec4(0.0, 0.0, 0.0, 1.0);
        vec3 H;
        float NL;
        float NH;

        for(int i = 0; i < lcount; i++)
        {
            specular = vec4(0.0, 0.0, 0.0, 0.0);
            ambient = lAmbients[i];
            NL = dot(N, L[i]);
            diffuse = lDiffuses[i] * max(NL, 0.0);

            if(NL > 0.0)
            {
                H = normalize(E + L[i]);
                NH = max(0.0, dot(N, H));
                specular = pow(NH, 40.0) * vec4(0.3, 0.3, 0.3, 1.0);
            }

            color += gl_Color * (diffuse + ambient) + specular;
        }

        gl_FragColor = color;
    }
Comments (2)
Eye space is the space your scene is transformed into right before it goes through the projection matrix. That's what ftransform() conveniently wraps (by this I mean the full path from model space through eye space (the modelview transform) to clip space (the projection transform)).

The modelview matrix contains the full transformation from object-local space to eye space. However, your lights will not be in (each) object's local space, but in world space. So we're dealing with two distinct transformations here:

- the model transform, from object-local space to world space, and
- the view transform, from world space to eye space.
So technically it would be possible to transform both the lights and the object vertices in the vertex shader, by supplying the decomposed modelview as separate model and view uniform matrices: you would transform the light positions by just the view part, and the objects' vertices by the model part and then the view part. But I recommend against doing it that way. The computing resources of the shader units should be reserved for computations that produce a different result for each vertex input, and transforming the light positions does not.
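For illustration only, a minimal GLSL sketch of that (discouraged) approach, where the model and view uniforms are hypothetical inputs supplied by the application and lPositions holds world-space light positions (normal handling omitted):

    const int MAXLIGHTS = 4;

    uniform mat4 model;                  // hypothetical: object-local -> world
    uniform mat4 view;                   // hypothetical: world -> eye
    uniform vec4 lPositions[MAXLIGHTS];  // light positions in world space

    varying vec3 V, L[MAXLIGHTS];

    void main()
    {
        vec4 eyePos = view * (model * gl_Vertex);   // vertices: model, then view
        V = eyePos.xyz;

        for(int i = 0; i < MAXLIGHTS; i++)
        {
            // lights are already in world space, so only the view part applies;
            // note this same work is redone for every single vertex
            L[i] = (view * lPositions[i]).xyz - eyePos.xyz;
        }

        gl_Position = gl_ProjectionMatrix * eyePos;
    }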
Instead you should pre-transform your light positions to eye space before passing them to the shader as uniforms. So how do you do that? First, I strongly suggest you rid yourself of the old OpenGL matrix-manipulation functions (glRotate, glTranslate, glScale, … and the GLU helpers like gluPerspective, …). Things get easier without them, and they have been removed from later OpenGL versions anyway.
Say you've got a matrix library, like GLM (any other will work, too). To render your scene you then follow roughly this scheme (Python-like pseudocode):
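A minimal sketch of that scheme, where perspective, look_at, set_uniform, and draw are hypothetical helpers standing in for your matrix library and GL wrapper:

    # build the projection and view matrices once per frame
    projection_matrix = perspective(fov, aspect, near, far)
    view_matrix = look_at(eye, center, up)

    # pre-transform the light positions to eye space on the CPU
    lights_eye = [view_matrix * pos for pos in light_world_positions]
    set_uniform(shader, "lPositions", lights_eye)
    set_uniform(shader, "projection", projection_matrix)

    for obj in scene.objects:
        # modelview = view * model; the shader never needs them separately
        set_uniform(shader, "modelview", view_matrix * obj.model_matrix)
        draw(obj)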
As you can see, the lights are "manually" transformed to eye space using the view_matrix, whereas the objects' vertices are not touched on the CPU side; the ready-made matrices are simply set as shader parameters for the draw call.
This code only makes sense if lPositions is in model space, and that's highly unlikely. The general way this works is that you pass the light positions already in eye space, so there's no need to transform them in the shader at all.
Also, L and E are entirely superfluous as varyings. You will get more accurate results by computing them in the fragment shader. The computations are quite simple and cheap, and since you need to renormalize the interpolated vectors in the fragment shader anyway (which you currently don't), you're not really gaining anything.
L is just the eye-space light position minus the eye-space surface position. E is just the direction from the surface position to the eye, which is the normalized negation of the eye-space surface position.
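Putting both points together, a sketch of a fragment shader along those lines, assuming lPositions has been pre-transformed to eye space by the application (diffuse term only, for brevity):

    const int MAXLIGHTS = 4;

    uniform int lightcount;
    uniform vec4 lPositions[MAXLIGHTS];  // assumed already in eye space
    varying vec3 V, N;                   // eye-space surface position and normal

    void main()
    {
        int lcount = lightcount > MAXLIGHTS ? MAXLIGHTS : lightcount;

        vec3 n = normalize(N);       // renormalize the interpolated normal
        vec3 E = normalize(-V);      // surface-to-eye direction

        vec4 color = vec4(0.0, 0.0, 0.0, 1.0);
        for(int i = 0; i < lcount; i++)
        {
            // L per fragment: eye-space light position minus surface position
            vec3 L = normalize(lPositions[i].xyz - V);
            color += gl_Color * max(dot(n, L), 0.0);
        }
        gl_FragColor = color;
    }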