Raytracer - computing eye rays
I'm writing a ray tracer (mostly for fun). Although I've written one in the past, and have spent a decent amount of time searching, no tutorial seems to shed light on how to calculate the eye rays in a perspective projection without using matrices.
I believe the last time I did it was by (potentially inefficiently) rotating the eye vectors x/y degrees from the camera direction vector using a Quaternion class. That was in C++, and I'm doing this one in C#, though that's not so important.
Pseudocode (assuming V * Q = transform operation, and quaternion(axis, angle) builds a rotation quaternion):

yDiv = fovy / height
xDiv = fovx / width
for x = 0 to width
    for y = 0 to height
        xAng = (x - width / 2) * xDiv
        yAng = (y - height / 2) * yDiv
        Q1 = quaternion(up vector, xAng)
        Q2 = quaternion(camera right vector, yAng)
        Q3 = mult(Q1, Q2)
        pixelRay = transform(Q3, camera direction)
        raytrace pixelRay
    next
next
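For concreteness, here is a sketch of that rotation-based approach in Python rather than C# (the math is the same); the function and parameter names are mine, and the camera vectors are assumed to be unit length and mutually perpendicular:

```python
import math

def quat_from_axis_angle(axis, angle):
    # axis assumed unit-length; quaternion stored as (w, x, y, z)
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0] * s, axis[1] * s, axis[2] * s)

def quat_mul(a, b):
    # Hamilton product a * b
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def rotate(q, v):
    # rotate vector v by unit quaternion q: q * (0, v) * conj(q)
    p = (0.0, v[0], v[1], v[2])
    qc = (q[0], -q[1], -q[2], -q[3])
    _, x, y, z = quat_mul(quat_mul(q, p), qc)
    return (x, y, z)

def eye_ray(x, y, width, height, fovx, fovy, forward, up, right):
    # per-pixel angles, centred on the middle of the screen
    x_ang = (x - width / 2.0) * (fovx / width)
    y_ang = (y - height / 2.0) * (fovy / height)
    q1 = quat_from_axis_angle(up, x_ang)     # yaw around the up vector
    q2 = quat_from_axis_angle(right, y_ang)  # pitch around the right vector
    return rotate(quat_mul(q1, q2), forward)
```

This reproduces the behaviour described in the question, including its flaw: equal angular steps per pixel sweep out a spherical screen, not a flat one.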
I think the actual problem with this is that it's simulating a spherical screen surface, not a flat screen surface.
Mind you, whilst I know how and why to use cross products, dot products, matrices and such, my actual 3D mathematics problem solving skills aren't fantastic.
So given:
- Camera position, direction and up-vector
- Field of view
- Screen pixels and/or sub-sampling divisions
What is the actual method to produce an eye ray for x/y pixel coordinates for a raytracer?
To clarify: I know exactly what I'm trying to calculate, I'm just not great at coming up with the 3D math to compute it, and no ray tracer code I've found seems to show how to compute the eye ray for an individual pixel.
Comments (1)
x means the cross product.

edit: The answer assumed that direction_vec is normalized, as it should be. right_vec is in the picture (seemingly where the left should be), but right_vec is not necessary and, if included, should always be in the same direction as -(up_vec x direction_vec). Furthermore, the picture implies the x-coord increases as one goes right, and the y-coord increases as one goes down. The signs have been changed slightly to reflect that. A zoom may be performed either by multiplying the x- and y- terms in the equation, or, more efficiently, by multiplying the vectors and using scaled_up_vec and scaled_right_vec. A zoom is, however, equivalent to changing the field of view (FoV), since aperture doesn't matter; this is a perfect pinhole camera. FoV is a much nicer quantity to deal with than an arbitrary "zoom". For information about how to implement FoV, see my comment below.
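Putting the edit's conventions together (right_vec defined as -(up_vec x direction_vec), y increasing downward), a flat-screen eye ray can be sketched as below. This is Python for brevity, not the answerer's code; the tan(fovx/2) scaling of the right/up vectors and the pixel-to-[-1, 1] mapping are my assumptions about how the scaled vectors would incorporate the FoV:

```python
import math

def normalize(v):
    l = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return (v[0] / l, v[1] / l, v[2] / l)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def eye_ray(x, y, width, height, fovx, direction_vec, up_vec):
    # right_vec = -(up_vec x direction_vec), per the answer's sign convention
    right_vec = normalize(tuple(-c for c in cross(up_vec, direction_vec)))
    # scale so the screen half-width subtends fovx / 2; vertical FoV
    # follows from the aspect ratio
    half_w = math.tan(fovx / 2.0)
    half_h = half_w * height / width
    scaled_right = tuple(c * half_w for c in right_vec)
    scaled_up = tuple(c * half_h for c in up_vec)
    # map the pixel centre to [-1, 1]; y grows downward, hence the minus sign
    px = 2.0 * (x + 0.5) / width - 1.0
    py = 2.0 * (y + 0.5) / height - 1.0
    ray = tuple(direction_vec[i] + px * scaled_right[i] - py * scaled_up[i]
                for i in range(3))
    return normalize(ray)
```

Because the offsets are linear in the pixel coordinates rather than in angle, every ray passes through a single flat image plane one unit in front of the eye, which avoids the spherical-screen distortion the question describes.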