A 3D affine transform problem in ray tracing

Published 2024-11-04 15:59:35


All,

I am writing a rather unconventional ray tracer to calculate heat transfer properties of various objects in a scene. In this ray tracer, random rays are shot from the surface of my primitive objects into the scene to check for intersections.

This particular algorithm requires each ray to be developed in primitive space, affine-transformed by the source object into world space, and then subsequently affine-transformed back into the primitive space of the other objects in the scene to check for intersection.

All is good until I apply an anisotropic scale, for example scaling an object by [2 2 1] (isotropic scales are fine). This leads me to believe I am not transforming the directional component of the ray correctly. Currently I transform the ray direction from primitive space to world space by multiplying the directional component by the transpose of the source object's inverse transformation matrix, and then I transform the ray from world space to each primitive space by multiplying by the transpose of the destination object's transformation matrix.

I have also tried multiplying by the source primitive's transformation matrix to go from primitive to world space and by the destination's inverse transformation matrix to go from world space to primitive space, but this was unsuccessful.

I believe a ray launched from the surface of a primitive object (at a random point and in a random direction) should be transformed in the same manner as a surface normal in 'regular' ray tracing, however I am not certain.

Any of the experts out there know what the flaw in my methodology is? Feel free to ask if more information is required.


The basic algorithm for this ray tracer is as follows:

For each object, i, in scene
{
    for each ray, r, in number of rays per object
    {
        determine random ray from primitive i
        convert ray from primitive space of i to world space

        for each object, j, in scene
        {
            convert ray to primitive space of object j
            check for intersection with object j
        }
    }
}
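The loop above can be sketched in Python with NumPy. This is only an illustration of the structure, not the asker's actual code: `sample_ray`, `intersect`, and the `M` / `M_inv` attributes are hypothetical names, and it assumes the common convention that a ray origin transforms as a point (homogeneous w = 1) while the direction transforms as a vector (w = 0, i.e. by the linear 3x3 part of M only):

```python
import numpy as np

def make_ray(origin, direction):
    """A ray as (origin, unit direction)."""
    d = np.asarray(direction, dtype=float)
    return np.asarray(origin, dtype=float), d / np.linalg.norm(d)

def transform_ray(M, ray):
    """Transform a ray by a 4x4 affine matrix M.

    The origin transforms as a point (w = 1); the direction transforms
    as a vector (w = 0), i.e. only by the linear 3x3 block of M.
    """
    o, d = ray
    o_out = (M @ np.append(o, 1.0))[:3]
    d_out = M[:3, :3] @ d
    return o_out, d_out / np.linalg.norm(d_out)

def trace(objects, rays_per_object, sample_ray, intersect):
    """Outer loops: primitive space of i -> world -> primitive space of j."""
    for i, src in enumerate(objects):
        for _ in range(rays_per_object):
            ray_os = sample_ray(src)               # ray in primitive space of i
            ray_ws = transform_ray(src.M, ray_os)  # primitive -> world
            for j, dst in enumerate(objects):
                if j == i:
                    continue
                ray_js = transform_ray(dst.M_inv, ray_ws)  # world -> primitive of j
                intersect(dst, ray_js)
```

Under this convention the source object's own matrix M takes the ray to world space, and the destination object's inverse matrix takes it back into that object's primitive space; the inverse transpose is reserved for surface normals.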

To clear up the question, let's look at an example. Assume I have a cylinder extending along the z-axis (unit radius and height) and an annulus lying in the x-y plane with inner diameter 7 and outer diameter 8. I want to scale the cylinder by a factor of 2 in the x and y directions (but not the z direction), so my affine transformation matrices are as follows:

M(cylinder) = |2 0 0 0|        M^-1(cylinder) = | .5 0. 0. 0. |
              |0 2 0 0|                         | 0. .5 0. 0. |
              |0 0 1 0|                         | 0. 0. 1. 0. |
              |0 0 0 1|                         | 0. 0. 0. 1. |

M(annulus) =  |1 0 0 0|        M^-1(annulus) =  |1 0 0 0|
              |0 1 0 0|                         |0 1 0 0|
              |0 0 1 0|                         |0 0 1 0|
              |0 0 0 1|                         |0 0 0 1|

Now assume I have a ray with a random starting point s on the surface of the cylinder and a random direction c away from the surface, giving the ray r(os) = s + ct.

I want to transform this ray from primitive (object) space to world space and then test for intersection with the other objects in the scene (the annulus).

The first question is: what is the correct way to transform the ray r(os) to world space, r(ws), using M(cylinder) or M^-1(cylinder)?

The second question is: what is the correct way to then transform the ray r(ws) from world space to object space to check for intersection with the other objects, using M(annulus) and M^-1(annulus)?
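As a numeric sanity check of why the symptom only appears under anisotropic scaling: with the linear part of M(cylinder) = diag(2, 2, 1), pushing a direction through M and through the inverse transpose of M gives different normalized results, whereas for an isotropic scale the two agree after normalization. A minimal NumPy sketch comparing the two conventions:

```python
import numpy as np

def normalized(v):
    return v / np.linalg.norm(v)

A = np.diag([2.0, 2.0, 1.0])                 # linear part of M(cylinder), anisotropic
d = normalized(np.array([1.0, 0.0, 1.0]))    # 45 degrees in the x-z plane

by_M    = normalized(A @ d)                  # direction pushed through M
by_invT = normalized(np.linalg.inv(A).T @ d) # direction pushed through M^-T

# Under anisotropic scaling the two conventions disagree:
aniso_differ = not np.allclose(by_M, by_invT)

# Under isotropic scaling (e.g. diag(2, 2, 2)) they coincide after
# normalization, which is why the bug only shows up for [2 2 1]:
I = np.diag([2.0, 2.0, 2.0])
iso_agree = np.allclose(normalized(I @ d), normalized(np.linalg.inv(I).T @ d))
```

The usual ray-tracing convention is: origins transform as points by M, ray directions as vectors by the linear part of M, and surface normals by the inverse transpose of that linear part (with re-normalization).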


Some additional background information:

This application is for calculating radiative heat transfer between N objects. Each ray is launched from a random point on the object, with its direction randomly selected to lie within a hemispherical distribution oriented with the surface normal at that point.
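For context, one common way to draw such a direction is to sample the hemisphere about +z and then rotate the result into the surface's local frame. A minimal sketch of the first step (uniform in solid angle; cosine-weighted sampling is more usual for diffuse emission, and building the local frame from the normal is omitted here):

```python
import math

def sample_hemisphere(u, v):
    """Map two uniform samples u, v in [0, 1) to a unit direction on the
    hemisphere about +z, uniform in solid angle."""
    z = u                                   # cos(theta) uniform in [0, 1)
    r = math.sqrt(max(0.0, 1.0 - z * z))    # sin(theta)
    phi = 2.0 * math.pi * v                 # azimuth
    return (r * math.cos(phi), r * math.sin(phi), z)
```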


Here is some visualisation of my problem. The ray directional distribution when it is first generated:
[Image: Initial ray directional distribution]

If I apply the transformation to world co-ordinates using the transformation matrix M:
[Image: Direction transformed by M]

If I apply the transformation to world co-ordinates using the inverse transformation matrix M^-1:
[Image: Direction transformed by M^-1]


思慕 2024-11-11 15:59:35


The inverse transpose transformation matrix keeps the rotation component constant, but inverts the scaling. That means that the scaling is still there. This is correct for normals: Consider, in 2d, a line segment from (0,0) to (.707,.707). The normal is (-.707,.707). If we scale by (s,1), we get a segment from (0,0) to (s*.707,.707). In the limit, as s grows large, we essentially have a flat line parallel to the x axis. That means that the normal should point along the y axis. So we get a normal of (-.707/s,.707). It should be clear from this example, however, that the transformed vector is no longer unit length. Perhaps you need to normalize the directional component?
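The 2D example can be checked numerically. In this NumPy sketch, `t` is the segment's tangent direction, which is not part of the original example; the point is that the inverse-transpose-transformed normal stays perpendicular to the M-transformed tangent but is no longer unit length:

```python
import numpy as np

# 2D example from the answer: segment (0,0)-(0.707,0.707), normal (-0.707,0.707).
s = 2.0
S = np.array([[s, 0.0], [0.0, 1.0]])     # scale by (s, 1)
n = np.array([-0.707, 0.707])            # normal of the segment
t = np.array([0.707, 0.707])             # tangent along the segment

n_world = np.linalg.inv(S).T @ n         # inverse transpose for normals
t_world = S @ t                          # plain M for tangent directions

# n_world is still perpendicular to t_world, but its length is no longer 1,
# hence the need to re-normalize after the transform.
```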

If we start from the property that a transformation matrix can be represented as a scaling sandwiched between two rotations (a la SVD), your outbound transformation matrix looks like R2out*Sout^-1*R1out, and your inbound transformation matrix looks like R1in^-1*Sin*R2in^-1 (how I wish SO used MathJax...). This seems like the right thing, as long as you re-normalize your vectors.


Edit:

Thinking about this overnight, I decided that the inverse-transpose thing might only be valid for the normals case. Consider the example above. If s=2, then the slope of the line segment, originally 1, turns into 1/2. Likewise, the slope of the normal turns from -1 into -2. There is still a 90-degree angle between the line segment and the normal. So far so good. Now... what if the vector under consideration is actually parallel to the line segment? The inverse transpose gives it a slope of 2, so it is no longer parallel to the transformed segment.
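That parallelism argument can be checked directly. A small NumPy sketch, using the fact that the scalar 2D cross product is zero exactly when two vectors are parallel:

```python
import numpy as np

def cross2(a, b):
    """Scalar 2D cross product: zero iff a and b are parallel."""
    return a[0] * b[1] - a[1] * b[0]

S = np.array([[2.0, 0.0], [0.0, 1.0]])   # scale by (s, 1) with s = 2
t = np.array([1.0, 1.0])                 # vector parallel to the segment (slope 1)

seg_world = S @ t                        # transformed segment direction, slope 1/2
t_by_invT = np.linalg.inv(S).T @ t       # slope 2: no longer parallel
t_by_M = S @ t                           # still parallel to the transformed segment
```

So a vector lying along the surface (like a ray direction) keeps its geometric meaning under M, while the inverse transpose preserves perpendicularity and is the right tool for normals only.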

So, I guess I have two questions at this point. What is actually going wrong in your program/What makes you think it's not correct? And what is the correct behavior? Perhaps you can make a 2D plot.

陌伤ぢ 2024-11-11 15:59:35


This just came up the other day in this question.

One of the answers links to a Ray Tracing News article discussing the use of the transpose of the inverse transform for normals.

I have to agree with JCooper in asking "what is actually going wrong?". My first thought is that since you are simulating radiative heat transfer, you have to be careful with non-uniform scaling of objects. If you have a uniform distribution of "photons" being emitted from an object's surface and then apply a non-uniform scaling to that object, the photons leaving the surface will be non-uniformly distributed. This is one possible pitfall, but since you don't indicate what's going wrong, it's hard to say whether this is your problem.

To answer your questions about the correct way to do the transformations, follow this link to Ray Tracing News.
