Can someone help me explain this code?
I have been going around and looking at different OpenGL 3+ code just to get more familiar with it. I also follow tutorials and try to write code with OpenGL.
So I found some code that draws a sphere. Some of its vertices are printed below; here is the (truncated) output:
.
..
...
Sphere vertices are: >> Vertice 10281: X>0.000747, Y>-0.000275, Z>1.000000!
Sphere vertices are: >> Vertice 10282: X>0.000769, Y>-0.000208, Z>1.000000!
Sphere vertices are: >> Vertice 10283: X>0.084840, Y>-0.023011, Z>0.996129!
Sphere vertices are: >> Vertice 10284: X>0.084840, Y>-0.023011, Z>0.996129!
Sphere vertices are: >> Vertice 10285: X>0.000769, Y>-0.000208, Z>1.000000!
Sphere vertices are: >> Vertice 10286: X>0.000784, Y>-0.000141, Z>1.000000!
Sphere vertices are: >> Vertice 10287: X>0.086522, Y>-0.015533, Z>0.996129!
Sphere vertices are: >> Vertice 10288: X>0.086522, Y>-0.015533, Z>0.996129!
Sphere vertices are: >> Vertice 10289: X>0.000784, Y>-0.000141, Z>1.000000!
Sphere vertices are: >> Vertice 10290: X>0.000793, Y>-0.000072, Z>1.000000!
Sphere vertices are: >> Vertice 10291: X>0.087546, Y>-0.007936, Z>0.996129!
...
..
.
Now, the code that I downloaded draws a sphere in the middle of the screen. It uses the GLM library. What I don't get is how "X=0.087546" or "Z=0.996129" gets translated into pixels and drawn onto the screen's axes.
Here is the GLM code:
// Headers needed for this snippet (not shown in the original post):
//   #include <glm/glm.hpp>
//   #include <glm/gtc/matrix_transform.hpp>  // glm::perspective, glm::translate, glm::rotate
//   #include <glm/gtc/type_ptr.hpp>          // glm::value_ptr

//PROJECTION
// 45-degree vertical field of view, 1:1 aspect ratio, near plane 0.1, far plane 100.
// (Note: newer GLM versions expect radians here, i.e. glm::radians(45.0f).)
glm::mat4 Projection = glm::perspective(45.0f, 1.0f, 0.1f, 100.0f);
angle = (GLfloat) (i/50 % 360); // the division is there to make the rotation slower
//printf("Angle: >>>> %f, \n", angle);
//VIEW
glm::mat4 View = glm::mat4(1.);
View = glm::translate(View, glm::vec3(0.f, 0.f, -5.0f)); // move the scene 5 units away from the camera
//View = glm::rotate(View, angle * -1.0f, glm::vec3(1.f, 0.f, 0.f));
View = glm::rotate(View, angle * 0.5f, glm::vec3(0.f, 1.f, 0.f)); // spin around the Y axis
//View = glm::rotate(View, angle * 0.5f, glm::vec3(0.f, 0.f, 1.f));
//MODEL
glm::mat4 Model = glm::mat4(1.0); // identity: the sphere stays at the origin
glm::mat4 MVP = Projection * View * Model;
glUniformMatrix4fv(glGetUniformLocation(shaderprogram, "mvpmatrix"), 1, GL_FALSE, glm::value_ptr(MVP));
The window it produces is 600*600 pixels. So how are X and Z mapped to those pixel coordinates?
Comments (2)
By going through the transformation pipeline: vertex coordinates are multiplied by the modelview matrix, the resulting eye-space coordinates are multiplied by the projection matrix, primitives are clipped, and then the perspective divide is applied, yielding normalized device coordinates (NDC). Those are then mapped to the viewport.
http://www.opengl.org/wiki/Vertex_Transformation
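To make that concrete, here is a minimal, self-contained sketch (mine, not from the answer) that traces vertex 10291 from the log above through that pipeline by hand. It assumes angle = 0, and uses glm::radians(45.0f) because newer GLM versions expect radians rather than degrees:

// Trace one logged vertex through model-view-projection, the perspective
// divide, and the viewport mapping. Assumptions: angle = 0; radians-based GLM.
#include <cstdio>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

int main() {
    glm::mat4 Projection = glm::perspective(glm::radians(45.0f), 1.0f, 0.1f, 100.0f);
    glm::mat4 View = glm::translate(glm::mat4(1.0f), glm::vec3(0.f, 0.f, -5.0f));
    glm::mat4 Model = glm::mat4(1.0f);
    glm::mat4 MVP = Projection * View * Model;

    // Vertex 10291 from the log, as a homogeneous point (w = 1).
    glm::vec4 vertex(0.087546f, -0.007936f, 0.996129f, 1.0f);

    // 1. What the vertex shader outputs: clip coordinates.
    glm::vec4 clip = MVP * vertex;

    // 2. Perspective divide: normalized device coordinates, each in [-1, 1].
    glm::vec3 ndc = glm::vec3(clip) / clip.w;

    // 3. Viewport transform for a 600*600 window (what glViewport sets up).
    float px = (ndc.x + 1.0f) * 0.5f * 600.0f;
    float py = (ndc.y + 1.0f) * 0.5f * 600.0f;

    std::printf("NDC (%f, %f, %f) -> pixel (%.1f, %.1f)\n",
                ndc.x, ndc.y, ndc.z, px, py);
    return 0;
}

This lands at roughly pixel (316, 299), very close to the center of the 600*600 window, which is why the sphere appears in the middle of the screen.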
The transformations you've shown just map things into the cube bounded by (-1,-1,-1) to (1,1,1). These are "normalized device coordinates". In normalized device coordinates, X and Y map onto pixels, with (-1,-1) being one corner of the viewport and (1,1) being the opposite corner. Z maps onto values in the depth buffer, with -1 being the near clip plane and 1 being the far clip plane.
glViewport then maps normalized device coordinates, (X,Y) = (-1,-1)..(1,1), to your viewport's actual pixel coordinates, (0,0)..(599,599) in your case.
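As a small sketch of that last step, this is the mapping glViewport(0, 0, 600, 600) establishes; the helper name ndcToWindow is mine for illustration, not an OpenGL function:

#include <cstdio>

// Maps NDC (x, y) in [-1, 1] to window pixel coordinates, following the
// viewport transform that glViewport(vx, vy, width, height) establishes.
void ndcToWindow(float ndcX, float ndcY,
                 float vx, float vy, float width, float height,
                 float* winX, float* winY) {
    *winX = vx + (ndcX + 1.0f) * 0.5f * width;
    *winY = vy + (ndcY + 1.0f) * 0.5f * height;
}

int main() {
    float x, y;
    // The question's 600*600 window, i.e. glViewport(0, 0, 600, 600):
    ndcToWindow(0.0f, 0.0f, 0.0f, 0.0f, 600.0f, 600.0f, &x, &y);
    std::printf("NDC (0, 0)   -> pixel (%.0f, %.0f)\n", x, y); // (300, 300)
    ndcToWindow(-1.0f, -1.0f, 0.0f, 0.0f, 600.0f, 600.0f, &x, &y);
    std::printf("NDC (-1, -1) -> pixel (%.0f, %.0f)\n", x, y); // (0, 0)
    return 0;
}

Z is handled separately: it goes through the depth-range mapping (by default [0, 1]) into the depth buffer rather than onto the screen.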