Confused about the world-view-projection matrix when ray tracing with XNA
So I've decided to rewrite an old ray tracer I had, originally written in C++, in C#, leveraging the XNA framework.
I still have my old book and can follow its notes; however, I'm confused about a few of the ideas, and I was wondering whether someone could clarify them.
for each x pixel do
  for each y pixel do
    // Generate ray
    // 1 - Calculate world coordinates of current pixel
    // 1.1 - Calculate normalized device coordinates for current pixel, -1 to 1 (u, v)
    u = (2*x / WIDTH) - 1;   // use floating-point division here
    v = (2*y / HEIGHT) - 1;
    Vector3 rayDirection = -1*focalLength + u'*u + v'*v
In the above code, u' and v' are the orthonormal basis vectors calculated for the given camera (I know reusing the names makes it confusing).
If I follow the book and do it the way it describes, it works. However, I'm trying to leverage XNA, and I'm getting confused about how to perform the same steps using matrices.
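For concreteness, the book-style pseudocode above might look like the sketch below using XNA's `Vector3`. The names `camBasisU`, `camBasisV`, `camBasisW`, `cameraPosition`, and the scalar `focalLength` are illustrative assumptions, not from the original post; the primed basis vectors are renamed so they don't clash with the scalars `u` and `v`:

```csharp
// Hypothetical sketch of the book's per-pixel ray setup with XNA types.
// camBasisU/V/W are the camera's orthonormal basis; -camBasisW is the
// viewing direction, matching the book's negative focal-length term.
for (int y = 0; y < HEIGHT; y++)
{
    for (int x = 0; x < WIDTH; x++)
    {
        // Map pixel indices to [-1, 1]; the float literals avoid
        // integer division truncating u and v to -1 or 0.
        float u = (2f * x / WIDTH) - 1f;
        float v = (2f * y / HEIGHT) - 1f;

        Vector3 rayDirection = Vector3.Normalize(
            -focalLength * camBasisW + u * camBasisU + v * camBasisV);

        // Trace (cameraPosition, rayDirection) into the scene here.
    }
}
```

Depending on your image-coordinate convention you may also want to flip `v` so that +v points up on screen.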
So I've tried to replace the following steps with the XNA code
class Camera
{
    public Camera(float width, float height)
    {
        AspectRatio = width / height;
        FOV = (float)Math.PI / 2.0f;  // Math.PI is a double, so cast to float
        NearPlane = 1.0f;
        FarPlane = 100.0f;
        // Note: CreateLookAt expects a target *point*, not a direction,
        // so aim at Position + Direction.
        ViewMatrix = Matrix.CreateLookAt(Position, Position + Direction, this.Up);
        ProjectionMatrix = Matrix.CreatePerspectiveFieldOfView(FOV,
            AspectRatio, NearPlane, FarPlane);
    }
}
It's at this point that I get confused about the order of operations I'm supposed to apply in order to get the direction vector for any pixel (x, y).
In my head I'm thinking:
(u, v) = ProjectionMatrix * ViewMatrix * ModelToWorld * Vertex (in model space)
Therefore it would make sense that
Vertex (in world space) = Inverse(ViewMatrix) * Inverse(ProjectionMatrix) * [u, v, 0]
I also remember something about how the view matrix's inverse can be computed with a transpose, since its rotation part is orthonormal.
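To sketch how that inversion looks in XNA: rather than hand-rolling Inverse(Projection) and Inverse(ViewMatrix) — and forgetting the homogeneous divide that a plain matrix multiply skips — `Viewport.Unproject` maps a screen-space point plus a depth in [0, 1] back through the projection and view matrices for you. A hedged sketch, assuming a `viewport` from the graphics device and the `camera` instance above:

```csharp
// Unproject the pixel at depth 0 (near plane) and depth 1 (far plane);
// the normalized difference between the two world-space points is the
// ray direction for that pixel.
Vector3 nearPoint = viewport.Unproject(new Vector3(x, y, 0f),
    camera.ProjectionMatrix, camera.ViewMatrix, Matrix.Identity);
Vector3 farPoint = viewport.Unproject(new Vector3(x, y, 1f),
    camera.ProjectionMatrix, camera.ViewMatrix, Matrix.Identity);

Vector3 rayOrigin = nearPoint;
Vector3 rayDirection = Vector3.Normalize(farPoint - nearPoint);
```

This is equivalent to transforming [u, v, depth, 1] by Inverse(ViewMatrix * ProjectionMatrix) and then dividing by the resulting w component — the divide is the step that's easy to miss when doing it manually.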
There's really no need to be using matrices for ray tracing. Perspective projection just falls out of the system; that's one of the benefits of ray tracing.
Your comments are confusing too:
//1 - Calculate world coordinates of current pixel
//1.1 Calculate normalized device coordinates for current pixel, -1 to 1 (u, v)
NDC doesn't have any role in ray tracing, so I don't know what you are talking about here. All you are doing with that u, v code is calculating a direction
for the ray based on the virtual grid of pixels you set up in world space. Then you are going to trace the ray out into the scene and see if it intersects
with anything.
Really, you don't need to be worrying about different spaces right now. Just put everything into world coordinates and call it a day. If you want to do
complicated models (model transforms like scale and rotate), a model->world transform might be needed, but when you first start writing a ray
tracer you don't need to worry about that stuff.
If you want to use XNA you can use that camera class, but some of the members are going to be useless, i.e. the matrices and the near and far planes.
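For reference, the orthonormal basis that the question's u' and v' refer to can be built once from the usual look-at parameters and then used directly in world space, as this answer suggests. A sketch with illustrative names (`cameraPosition`, `target`, `up` are assumptions, not from the original post):

```csharp
// Build a right-handed camera basis from a position, a target point,
// and an up hint - no view or projection matrix involved.
Vector3 w = Vector3.Normalize(cameraPosition - target);   // points backward
Vector3 uAxis = Vector3.Normalize(Vector3.Cross(up, w));  // camera right
Vector3 vAxis = Vector3.Cross(w, uAxis);                  // camera up

// Each pixel's ray direction then lives entirely in world space:
//   direction = -focalDistance * w + u * uAxis + v * vAxis
// which is exactly the book formula from the question.
```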
The reason for NDC is so that you can map an image width/height in pixels onto an arbitrarily sized image plane (not necessarily 1:1).
Essentially what I understood was the following: