OpenGL defining vertex positions in pixels

Posted 2024-12-04 06:34:38


I've been writing a 2D basic game engine in OpenGL/C++ and learning everything as I go along. I'm still rather confused about defining vertices and their "position". That is, I'm still trying to understand the vertex-to-pixels conversion mechanism of OpenGL. Can it be explained briefly or can someone point to an article or something that'll explain this. Thanks!

5 Answers

最佳男配角 2024-12-11 06:34:38

This is rather basic knowledge that your favourite OpenGL learning resource should teach you as one of the first things. But anyway, the standard OpenGL pipeline is as follows:

  1. The vertex position is transformed from object-space (local to some object) into world-space (in respect to some global coordinate system). This transformation specifies where your object (to which the vertices belong) is located in the world.

  2. Now the world-space position is transformed into camera/view-space. This transformation is determined by the position and orientation of the virtual camera by which you see the scene. In OpenGL these two transformations are actually combined into one, the modelview matrix, which directly transforms your vertices from object-space to view-space.

  3. Next the projection transformation is applied. Whereas the modelview transformation should consist only of affine transformations (rotation, translation, scaling), the projection transformation can be a perspective one, which basically distorts the objects to realize a real perspective view (with farther away objects being smaller). But in your case of a 2D view it will probably be an orthographic projection, that does nothing more than a translation and scaling. This transformation is represented in OpenGL by the projection matrix.

  4. After these 3 (or 2) transformations (and then following perspective division by the w component, which actually realizes the perspective distortion, if any) what you have are normalized device coordinates. This means after these transformations the coordinates of the visible objects should be in the range [-1,1]. Everything outside this range is clipped away.

  5. In a final step the viewport transformation is applied and the coordinates are transformed from the [-1,1] range into the [0,w]x[0,h]x[0,1] cube (assuming a glViewport(0, 0, w, h) call), which gives the vertex's final position in the framebuffer and therefore its pixel coordinates.

When using a vertex shader, steps 1 to 3 are actually done in the shader and can therefore be done in any way you like, but usually one conforms to this standard modelview -> projection pipeline, too.

The main thing to keep in mind is that after the modelview and projection transforms, every vertex with coordinates outside the [-1,1] range will be clipped away. So the [-1,1]-box determines your visible scene after these two transformations.

So from your question I assume you want to use a 2D coordinate system with units of pixels for your vertex coordinates and transformations? In this case this is best done by using glOrtho(0.0, w, 0.0, h, -1.0, 1.0) with w and h being the dimensions of your viewport. This basically counters the viewport transformation and therefore transforms your vertices from the [0,w]x[0,h]x[-1,1]-box into the [-1,1]-box, which the viewport transformation then transforms back to the [0,w]x[0,h]x[0,1]-box.

These have been quite general explanations without mentioning that the actual transformations are done by matrix-vector multiplications and without talking about homogeneous coordinates, but they should have explained the essentials. This documentation of gluProject might also give you some insight, as it actually models the transformation pipeline for a single vertex. But in this documentation they actually forgot to mention the division by the w component (v" = v' / v'(3)) after the v' = P x M x v step.

EDIT: Don't forget to look at the first link in epatel's answer, which explains the transformation pipeline a bit more practically and in more detail.

怀里藏娇 2024-12-11 06:34:38

It is called transformation.

Vertices are specified in 3D coordinates, which are transformed into viewport coordinates (into your window view). This transformation can be set up in various ways. An orthographic transformation is the easiest to understand as a starter.

http://www.songho.ca/opengl/gl_transform.html

http://www.opengl.org/wiki/Vertex_Transformation

http://www.falloutsoftware.com/tutorials/gl/gl5.htm

无边思念无边月 2024-12-11 06:34:38

Firstly, be aware that OpenGL does not use standard pixel coordinates. By that I mean that for a particular resolution, e.g. 800x600, you don't have horizontal coordinates in the range 0-799 or 1-800 stepped by one. Instead, coordinates in the range -1 to 1 are sent to the graphics card's rasterizing unit and only then matched to the particular resolution.

I omitted one step here - before all of that you have a ModelViewProjection matrix (or a ViewProjection matrix in some simple cases) which casts the coordinates you use onto a projection plane. Its default use is to implement a camera that transforms the 3D space of the world (View places the camera in the right position, Projection casts 3D coordinates onto the screen plane; in ModelViewProjection there is also the step of placing the model in the right place in the world).

Another case (and you can use the Projection matrix this way to achieve what you want) is to use these matrices to convert one coordinate range to another.

And there's a trick you will need. You should read about the ModelViewProjection matrix and the camera in OpenGL if you want to get serious. But for now I will tell you that with a proper matrix you can simply cast your own coordinate system (e.g. using the range 0-799 horizontally and 0-599 vertically) to the standardized -1:1 range. That way you won't even see that the underlying OpenGL API uses its own -1 to 1 system.

The easiest way to achieve this is the glOrtho function. Here's the link to the documentation:
http://www.opengl.org/sdk/docs/man/xhtml/glOrtho.xml

This is an example of proper usage:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 800, 600, 0, 0, 1);
glMatrixMode(GL_MODELVIEW);

Now you can use your own ModelView matrix, e.g. for translating (moving) objects, but don't touch your projection matrix. This code should be executed before any drawing commands. (In fact it can go right after initializing OpenGL if you won't use 3D graphics.)

And here's a working example: http://nehe.gamedev.net/tutorial/2d_texture_font/18002/

Just draw your figures instead of drawing text. And one more thing - glPushMatrix and glPopMatrix operate on the chosen matrix (in this example the projection matrix) - you won't need them until you combine 3D with 2D rendering.

And you can still use the model matrix (e.g. for placing tiles somewhere in the world) and the view matrix (for example for zooming the view, or scrolling through the world - in this case your world can be larger than the resolution and you can crop the view with simple translations).

After looking at my answer I see it's a little chaotic, but if you're confused - just read about the Model, View and Projection matrices and try the glOrtho example. If you're still confused, feel free to ask.

断肠人 2024-12-11 06:34:38

MSDN has a great explanation. It may be in terms of DirectX, but OpenGL is more or less the same.

倾`听者〃 2024-12-11 06:34:38

Google for "opengl rendering pipeline". The first five articles all provide good expositions.

The key transition from vertices to pixels (actually fragments, but you won't be too far off if you think "pixels") is in the rasterization stage, which occurs after all vertices have been transformed from world coordinates to screen coordinates and clipped.
