iOS: question about the camera information in the result of GLKMatrix4MakeLookAt

Posted 2025-01-01 04:35:12


The iOS 5 documentation states that GLKMatrix4MakeLookAt operates the same way as gluLookAt.

The definition is provided here:

static __inline__ GLKMatrix4 GLKMatrix4MakeLookAt(float eyeX, float eyeY, float eyeZ,
                                                  float centerX, float centerY, float centerZ,
                                                  float upX, float upY, float upZ)
{
    GLKVector3 ev = { eyeX, eyeY, eyeZ };
    GLKVector3 cv = { centerX, centerY, centerZ };
    GLKVector3 uv = { upX, upY, upZ };
    GLKVector3 n = GLKVector3Normalize(GLKVector3Add(ev, GLKVector3Negate(cv)));
    GLKVector3 u = GLKVector3Normalize(GLKVector3CrossProduct(uv, n));
    GLKVector3 v = GLKVector3CrossProduct(n, u);

    GLKMatrix4 m = { u.v[0], v.v[0], n.v[0], 0.0f,
                     u.v[1], v.v[1], n.v[1], 0.0f,
                     u.v[2], v.v[2], n.v[2], 0.0f,
                     GLKVector3DotProduct(GLKVector3Negate(u), ev),
                     GLKVector3DotProduct(GLKVector3Negate(v), ev),
                     GLKVector3DotProduct(GLKVector3Negate(n), ev),
                     1.0f };

    return m;
}

I'm trying to extract camera information from this:

1. Read the camera position
    GLKVector3 cPos = GLKVector3Make(mx.m30, mx.m31, mx.m32);
2. Read the camera right vector as `u` in the above
    GLKVector3 cRight = GLKVector3Make(mx.m00, mx.m10, mx.m20);
3. Read the camera up vector as `v` in the above
    GLKVector3 cUp = GLKVector3Make(mx.m01, mx.m11, mx.m21);
4. Read the camera look-at vector as `n` in the above
    GLKVector3 cLookAt = GLKVector3Make(mx.m02, mx.m12, mx.m22);

There are two questions:


  1. The look-at vector seems negated as they defined it, since they perform (eye - center) rather than (center - eye). Indeed, when I call GLKMatrix4MakeLookAt with a camera position of (0,0,-10) and a center of (0,0,1), my extracted look-at is (0,0,-1), i.e. the negative of what I expect. So should I negate what I extract?

  2. The camera position I extract is the result of the view transformation matrix premultiplying the view rotation matrix, hence the dot products in their definition. I believe this is incorrect - can anyone suggest how else I should calculate the position?


Many thanks for your time.


Comments (1)

乖乖兔^ω^ 2025-01-08 04:35:12


Per its documentation, gluLookAt calculates centre - eye, uses that for some intermediate steps, then negates it for placement into the resulting matrix. So if you want centre - eye back, negating what you extract is explicitly correct.

You'll also notice that the result returned is equivalent to a multMatrix with the rotational part of the result followed by a glTranslate by -eye. Since the classic OpenGL matrix operations post-multiply, that means gluLookAt is defined to post-multiply the rotational matrix by the translational one. So Apple's implementation is correct, and the same as first moving the camera and then rotating it, which is correct.

So if you define R = (the matrix defining the rotational part of the result) and T = (the translational analogue), you get R.T. If you want to extract T, you can premultiply by the inverse of R and then pull the result out of the final column, since matrix multiplication is associative.

As a bonus, because R is orthonormal, the inverse is just the transpose.
