Confused about world-view-projection matrices for ray tracing with XNA

Posted 2024-11-26 10:17:53

So I've decided to rewrite an old ray tracer I had written in C++, this time in C#, leveraging the XNA framework.

I still have my old book and can follow the notes; however, I am confused about a few ideas, and I was wondering whether someone could articulate them nicely.


    for each x pixel do
        for each y pixel do
            // Generate ray
            // 1 - Calculate world coordinates of the current pixel
            //   1.1 Calculate normalized device coordinates (u, v) in -1 to 1
            u = (2*x / WIDTH) - 1;
            v = (2*y / HEIGHT) - 1;
            // u', v', w' are the camera's basis vectors (see below)
            Vector3 rayDirection = -focalLength*w' + u*u' + v*v';


In the above code, u' and v' are two of the orthonormal basis vectors calculated for the given camera, and w' is the third, pointing backwards along the view direction (I know that reusing the names u and v makes it confusing).
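
For concreteness, here is a minimal sketch of how such a basis is typically constructed from a camera position, look-at point, and world up vector; the names position, lookAt, worldUp, and uAxis/vAxis/wAxis are placeholders of mine, not identifiers from the book:

    // Build the camera's orthonormal basis (u', v', w') with XNA's Vector3.
    // Right-handed convention: w' points backwards along the view direction.
    Vector3 wAxis = Vector3.Normalize(position - lookAt);             // w' ("backward")
    Vector3 uAxis = Vector3.Normalize(Vector3.Cross(worldUp, wAxis)); // u' ("right")
    Vector3 vAxis = Vector3.Cross(wAxis, uAxis);                      // v' ("up"), already unit length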

If I follow the book and do it the way it describes, it works. However, I am trying to leverage XNA, and I'm confused about how to perform the same operations using matrices.

So I've tried to replace those steps with the following XNA code:


    class Camera
    {
        public Vector3 Position = Vector3.Zero;
        public Vector3 Direction = Vector3.Forward;
        public Vector3 Up = Vector3.Up;
        public float AspectRatio, FOV, NearPlane, FarPlane;
        public Matrix ViewMatrix, ProjectionMatrix;

        public Camera(float width, float height)
        {
            AspectRatio = width / height;
            FOV = MathHelper.PiOver2;   // Math.PI is a double; XNA provides float constants
            NearPlane = 1.0f;
            FarPlane = 100.0f;
            // CreateLookAt expects a target point, not a direction vector
            ViewMatrix = Matrix.CreateLookAt(Position, Position + Direction, Up);
            ProjectionMatrix = Matrix.CreatePerspectiveFieldOfView(FOV,
                                              AspectRatio, NearPlane, FarPlane);
        }
    }

It's at this point that I'm confused about the order of operations I am supposed to apply in order to get the direction vector for any pixel (x, y).

In my head I'm thinking:
(u, v) = ProjectionMatrix * ViewMatrix * ModelToWorld * Vertex (in model space)

which makes sense. Therefore it would follow that

Vertex (in world space) = Inverse(ViewMatrix) * Inverse(ProjectionMatrix) * [u, v, 0]

I also remember something about how the view matrix can be transposed instead of inverted, since it is orthonormal (strictly, that applies to its rotation part; the translation still has to be undone separately).
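
As a sanity check of that idea, here is a minimal sketch of per-pixel ray generation via the inverted matrices, assuming the Camera class above; the helper name GetRayDirection is my own. Note that XNA multiplies row vectors, so world-to-clip is view * projection, and the homogeneous divide must be done by hand:

    // Sketch: recover a world-space ray for pixel (x, y) by inverting view * projection.
    Vector3 GetRayDirection(int x, int y, int width, int height,
                            Matrix view, Matrix projection)
    {
        // NDC in [-1, 1]; v is flipped because screen y grows downward.
        float u = (2.0f * x / width) - 1.0f;
        float v = 1.0f - (2.0f * y / height);

        // XNA's row-vector convention: world -> clip is view * projection.
        Matrix inverse = Matrix.Invert(view * projection);

        // Unproject points on the near (z = 0) and far (z = 1) planes,
        // dividing by W manually since Vector3.Transform does not.
        Vector4 near4 = Vector4.Transform(new Vector4(u, v, 0f, 1f), inverse);
        Vector4 far4  = Vector4.Transform(new Vector4(u, v, 1f, 1f), inverse);
        Vector3 nearPoint = new Vector3(near4.X, near4.Y, near4.Z) / near4.W;
        Vector3 farPoint  = new Vector3(far4.X, far4.Y, far4.Z) / far4.W;

        // The ray originates at nearPoint (or the camera position); this is its direction.
        return Vector3.Normalize(farPoint - nearPoint);
    }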

Comments (2)

人心善变 2024-12-03 10:17:53

There's really no need to use matrices for ray tracing. Perspective projection just falls out of the system; that's one of the benefits of ray tracing.

Your comments are confusing too:

//1 - Calculate world coordinates of current pixel
//1.1 Calculate Normalized Device coordinates for current pixel -1 to 1 (u, v)

NDC doesn't have any role in ray tracing, so I don't know what you are talking about here. All you are doing with that u, v code is calculating a direction for the ray based on the virtual grid of pixels you set up in world space. Then you trace the ray out into the scene and see whether it intersects with anything.

Really, you don't need to worry about different spaces right now. Just put everything into world coordinates and call it a day. If you want to do complicated models (model transforms like scale, rotate, and so on), a model->world transform might be needed, but when you first start writing a ray tracer you don't need to worry about that stuff.

If you want to use XNA you can use that camera class, but some of the members are going to be useless, i.e. the matrices and the near and far planes.
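
To make the matrix-free approach concrete, here is a rough sketch of a full ray-generation loop; fov, aspect, position, and the basis vectors uAxis/vAxis/wAxis are assumed to come from a camera set up as in the question, and trace is a hypothetical routine:

    // Sketch: rays straight from FOV + camera basis; no matrices, no NDC step needed.
    float halfHeight = (float)Math.Tan(fov / 2.0f);  // image-plane half-extent at distance 1
    float halfWidth  = halfHeight * aspect;

    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            float u = ((2.0f * x / width) - 1.0f) * halfWidth;
            float v = (1.0f - (2.0f * y / height)) * halfHeight;

            // -wAxis is the viewing direction; the image plane sits at distance 1.
            Vector3 rayDirection = Vector3.Normalize(-wAxis + u * uAxis + v * vAxis);
            // trace(position, rayDirection);  // hypothetical trace routine
        }
    }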

舞袖。长 2024-12-03 10:17:53

The reason for NDC is so that you can map an image width/height in pixels onto an arbitrarily sized image plane (not necessarily 1:1).
Essentially what I understood was the following (see the sketch after this list):

  1. You want to convert pixel X and Y to a uniform rectangle from -1 to 1 (essentially centering the camera within the viewing frame)
  2. Apply the inverse projection matrix, which uses the FOV, aspect ratio, and near plane, to take the pixel (in NDC coordinates) into camera (view) space
  3. Apply the inverse of the view matrix to take that camera-space coordinate into world space
  4. Calculate the direction
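
In XNA, steps 1-3 can also be delegated to Viewport.Unproject, which applies the inverse projection and view transforms (including the perspective divide) for you. A rough sketch, assuming the Camera class from the question and a viewport taken from GraphicsDevice.Viewport:

    // Sketch: Unproject maps screen coordinates back to world space;
    // depth 0 corresponds to the near plane, depth 1 to the far plane.
    Vector3 near = viewport.Unproject(new Vector3(x, y, 0f),
                       camera.ProjectionMatrix, camera.ViewMatrix, Matrix.Identity);
    Vector3 far  = viewport.Unproject(new Vector3(x, y, 1f),
                       camera.ProjectionMatrix, camera.ViewMatrix, Matrix.Identity);
    Vector3 rayDirection = Vector3.Normalize(far - near);  // step 4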