3D graphics algorithm (hardware)

Posted 2024-10-29 23:07:23


I am trying to design an ASIC graphics processor. I have done extensive research on the topic, but I am still kind of fuzzy on how to translate and rotate points. I am using orthographic projection to rasterize the transformed points.

I have been using the following lecture regarding matrix multiplication (homogeneous coordinates):
http://www.cs.kent.edu/~zhao/gpu/lectures/Transformation.pdf

Could someone please explain this a little more in depth to me? I am still somewhat shaky on the algorithm. I am passing a camera position (x,y,z) and a camera vector (x,y,z) representing the camera angle, along with a point (x,y,z). What should go where within the matrices to transform the point to its new, appropriate location?


2 Answers

浴红衣 2024-11-05 23:07:23


Here's the complete transformation algorithm in pseudocode:

void project(Vec3d objPos, Matrix4d modelViewMatrix,
    Matrix4d projMatrix, Rect viewport, Vec3d& winCoords)
{
    // Promote the point to homogeneous coordinates with w = 1.
    Vec4d in(objPos.x, objPos.y, objPos.z, 1.0);
    in = projMatrix * modelViewMatrix * in;
    in /= in.w; // perspective division (a no-op for orthographic projection, where w stays 1)
    // "in" is now in normalized device coordinates, which are in the range [-1, 1].

    // Map coordinates to the range [0, 1]
    in.x = in.x / 2 + 0.5;
    in.y = in.y / 2 + 0.5;
    in.z = in.z / 2 + 0.5;

    // Map to viewport
    winCoords.x = in.x * viewport.w + viewport.x;
    winCoords.y = in.y * viewport.h + viewport.y;
    winCoords.z = in.z;
}

Then rasterize using winCoords.x and winCoords.y.

For an explanation of the stages of this algorithm, see question 9.011 from the OpenGL FAQ.
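To answer the original question of what actually goes into those matrices: given the camera position and the camera direction vector, modelViewMatrix is typically a "look-at" view matrix, and for the orthographic case projMatrix maps an axis-aligned view volume onto the [-1, 1] cube. Below is a minimal C++ sketch in the same column-vector convention as project() above; the Vec3/Mat4 types, lookAt, and ortho are illustrative helpers (not from the lecture), and world up is assumed to be (0, 1, 0).

#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    double len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Row-major 4x4 matrix: m[row][col].
struct Mat4 { double m[4][4]; };

// View matrix built from the camera position and the direction it looks in.
Mat4 lookAt(Vec3 eye, Vec3 dir, Vec3 worldUp = {0.0, 1.0, 0.0})
{
    Vec3 f = normalize(dir);               // forward
    Vec3 r = normalize(cross(f, worldUp)); // right
    Vec3 u = cross(r, f);                  // true up
    // Rotate the world axes onto the camera axes, then translate the eye to the origin.
    return {{{ r.x,  r.y,  r.z, -dot(r, eye)},
             { u.x,  u.y,  u.z, -dot(u, eye)},
             {-f.x, -f.y, -f.z,  dot(f, eye)},
             { 0.0,  0.0,  0.0,  1.0}}};
}

// Orthographic projection mapping the camera-space box [l,r] x [b,t] x [-n,-f]
// onto the [-1, 1] cube (same layout as glOrtho).
Mat4 ortho(double l, double r, double b, double t, double n, double f)
{
    return {{{2.0 / (r - l), 0.0, 0.0, -(r + l) / (r - l)},
             {0.0, 2.0 / (t - b), 0.0, -(t + b) / (t - b)},
             {0.0, 0.0, -2.0 / (f - n), -(f + n) / (f - n)},
             {0.0, 0.0, 0.0, 1.0}}};
}

With these, modelViewMatrix = lookAt(cameraPos, cameraDir) (assuming model coordinates are already world coordinates) and projMatrix = ortho(...) plug straight into project().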

尾戒 2024-11-05 23:07:23


For the first few years they were on the market, mass-market PC graphics processors didn't translate or rotate points at all. Are you required to implement this feature? If not, you may wish to let software do it. Depending on your circumstances, software may be the more sensible route.

If you are required to implement the feature, I'll tell you how they did it in the early days.

The hardware has sixteen floating point registers that represent a 4x4 matrix. The application developer loads these registers with the ModelViewProjection matrix just before rendering a mesh of triangles. The ModelViewProjection matrix is:

Model * View * Projection

Where "Model" is a matrix that brings vertices from "model" coordinates into "world" coordinates, "View" is a matrix that brings vertices from "world" coordinates into "camera" coordinates, and "Projection" is a matrix that brings vertices from "camera" coordinates to "screen" coordinates. Together they bring vertices from "model" coordinates - coordinates relative to the 3D model they belong to - into "screen" coordinates, where you intend to rasterize them as triangles.

Those are three different matrices, but they're multiplied together and the 4x4 result is written to hardware registers.

When a buffer of vertices is to be rendered as triangles, the hardware reads the vertices in from memory as [x,y,z] vectors and treats them as [x,y,z,w] where w is always 1. It then multiplies each vector by the 4x4 ModelViewProjection matrix to get [x',y',z',w']. If there is perspective (you said there isn't), we then divide by w' to get [x'/w', y'/w', z'/w', 1].
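Here is a minimal C++ sketch of that per-vertex path, under the row-vector convention this answer uses; Vec4, Mat4, mulRowVec, and transformVertices are illustrative names, not a real hardware interface.

#include <cstddef>

struct Vec4 { double x, y, z, w; };
// Row-major 4x4 matrix: m[row][col].
struct Mat4 { double m[4][4]; };

// Row-vector multiply: v' = v * M.
static Vec4 mulRowVec(const Vec4& v, const Mat4& M)
{
    Vec4 r;
    r.x = v.x * M.m[0][0] + v.y * M.m[1][0] + v.z * M.m[2][0] + v.w * M.m[3][0];
    r.y = v.x * M.m[0][1] + v.y * M.m[1][1] + v.z * M.m[2][1] + v.w * M.m[3][1];
    r.z = v.x * M.m[0][2] + v.y * M.m[1][2] + v.z * M.m[2][2] + v.w * M.m[3][2];
    r.w = v.x * M.m[0][3] + v.y * M.m[1][3] + v.z * M.m[2][3] + v.w * M.m[3][3];
    return r;
}

// What the hardware does per vertex: promote [x,y,z] to [x,y,z,1],
// multiply by the preloaded ModelViewProjection matrix, and divide by w'
// only if the projection is a perspective one.
void transformVertices(const double (*xyz)[3], std::size_t count,
                       const Mat4& mvp, bool perspective, Vec4* out)
{
    for (std::size_t i = 0; i < count; ++i) {
        Vec4 v = {xyz[i][0], xyz[i][1], xyz[i][2], 1.0};
        Vec4 t = mulRowVec(v, mvp);
        if (perspective) { // orthographic: w' stays 1, so no divide is needed
            t.x /= t.w; t.y /= t.w; t.z /= t.w; t.w = 1.0;
        }
        out[i] = t;
    }
}

Since the question specifies an orthographic projection, the divide branch never fires: w' stays 1 all the way through.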

Then the triangles are rasterized with the newly computed vertices. This allows a model's vertices to live in read-only memory if desired, even though the model and the camera may be in motion.
