Raytracer - Computing Eye Rays

Posted on 2024-11-05 21:41:12


I'm writing a ray tracer (mostly for fun). Although I've written one in the past, and have spent a decent amount of time searching, no tutorial seems to shed light on how to calculate the eye rays for a perspective projection without using matrices.

I believe the last time I did it was by (potentially) inefficiently rotating the eye vectors x/y degrees from the camera direction vector using a Quaternion class. This was in C++, and I'm doing this one in C#, though that's not so important.

Pseudocode (assuming V * Q = transform operation)

yDiv = fovy / height
xDiv = fovx / width

for x = 0 to width
    for y = 0 to height

        xAng = (x - width / 2) * xDiv
        yAng = (y - height / 2) * yDiv
        Q1 = up vector, xAng
        Q2 = camera right vector, yAng
        Q3 = mult(Q1, Q2)

        pixelRay = transform(Q3, camera direction)
        raytrace pixelRay

    next
next

I think the actual problem with this is that it's simulating a spherical screen surface, not a flat screen surface.
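This spherical-screen suspicion can be checked numerically: stepping the view direction by equal *angles* (as the pseudocode does) does not land on equally spaced points of a flat image plane one unit away, because the plane offsets grow like tan(angle). A quick Python check (my own illustration, not from the original post):

```python
import math

# Offsets on a flat plane at distance 1 for equal angular steps.
# If equal angles mapped to equal pixel spacing, the gaps between
# consecutive offsets would all be the same; instead they grow,
# which is exactly the "spherical screen" distortion.
angles_deg = [0, 10, 20, 30, 40]
offsets = [math.tan(math.radians(a)) for a in angles_deg]
gaps = [b - a for a, b in zip(offsets, offsets[1:])]
print(gaps)  # strictly increasing toward the edge of the screen
```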

Mind you, whilst I know how and why to use cross products, dot products, matrices and such, my actual 3D mathematics problem solving skills aren't fantastic.

So given:

  • Camera position, direction and up-vector
  • Field of view
  • Screen pixels and/or sub-sampling divisions

What is the actual method to produce an eye ray for x/y pixel coordinates for a raytracer?

To clarify: I know exactly what I'm trying to calculate; I'm just not great at coming up with the 3D math to compute it, and none of the ray tracer code I've found seems to include the code needed to compute the eye ray for an individual pixel.



Answered by 十年不长 on 2024-11-12 21:41:12:



INPUT: camera_position_vec, direction_vec, up_vec, screen_distance

right_vec = direction_vec x up_vec
for y from 0 to 1600:
    for x from 0 to 2560:
        # location of the point P_3d in 3d space on the screen
        # rectangle, for the 2d pixel P_2d = (x, y) on a 2560x1600 screen
        P_3d = camera_position_vec + screen_distance*direction_vec
               + (y - 800)*-up_vec
               + (x - 1280)*right_vec

        ray = Ray(camera_position_vec, P_3d)
        yield "the eye-ray for P_2d is ray"

x means the cross product
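The pseudocode above can be made concrete. The following Python sketch is my own translation (plain tuples for vectors, a hypothetical `eye_ray` helper), not code from the answer itself:

```python
def cross(a, b):
    """Cross product of two 3-tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def eye_ray(x, y, width, height, cam_pos, direction, up, screen_distance):
    """Origin and (unnormalized) direction of the eye ray for pixel
    (x, y), following the pseudocode: find the pixel's point on the
    screen rectangle, then shoot a ray from the camera through it."""
    right = cross(direction, up)
    p3d = tuple(cam_pos[i]
                + screen_distance * direction[i]
                - (y - height / 2) * up[i]
                + (x - width / 2) * right[i]
                for i in range(3))
    ray_dir = tuple(p3d[i] - cam_pos[i] for i in range(3))
    return cam_pos, ray_dir

# The center pixel should look straight down the camera direction:
origin, d = eye_ray(1280, 800, 2560, 1600,
                    cam_pos=(0, 0, 0), direction=(0, 0, 1),
                    up=(0, 1, 0), screen_distance=1.0)
print(d)  # (0.0, 0.0, 1.0)
```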

edit:
The answer assumes that direction_vec is normalized, as it should be. right_vec appears in the picture (seemingly where the left should be), but right_vec is not strictly necessary and, if included, should always point in the same direction as -(up_vec x direction_vec). Furthermore, the picture implies that the x-coordinate increases as one moves right and the y-coordinate increases as one moves down; the signs have been changed slightly to reflect that. A zoom may be performed either by multiplying the x- and y- terms in the equation or, more efficiently, by scaling the vectors once and using scaled_up_vec and scaled_right_vec. A zoom is, however, equivalent to changing the field of view (FoV), since aperture doesn't matter here (this is a perfect pinhole camera), and FoV is a much nicer quantity to deal with than an arbitrary "zoom". For information about how to implement FoV, see my comment below.
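The referenced comment is not preserved on this page, but one standard way to derive screen_distance from a horizontal FoV (an assumption on my part, not necessarily what the comment said) is to place the screen plane so that half the screen width, measured in the same pixel units used for the x/y offsets above, subtends half the FoV:

```python
import math

def screen_distance_for_fov(width_px, fov_x_deg):
    # Half the screen width subtends half the horizontal FoV:
    #   tan(fov_x / 2) = (width_px / 2) / screen_distance
    return (width_px / 2) / math.tan(math.radians(fov_x_deg) / 2)

# A 90-degree horizontal FoV on a 2560-pixel-wide screen:
print(screen_distance_for_fov(2560, 90))  # approximately 1280 (tan 45 degrees is 1)
```

A wider FoV gives a smaller screen_distance, pushing the screen rectangle closer to the pinhole and spreading the rays apart, which is the zoom-out the edit describes.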
