Point-plane collision without the glutLookAt* functions

Posted on 2024-09-17 00:52:11


As I understand it, using glTranslate / glRotate is recommended over gluLookAt. I am not going to dig into the reasons beyond the obvious HW vs. SW computation question, but will just go with the flow. However, this is giving me some headaches, because I do not know exactly how to efficiently stop the camera from breaking through walls. I am only interested in point-plane intersections, not AABBs or anything else.

So, using glTranslates and glRotates means that the viewpoint stays still (at (0,0,0) for simplicity) while the world revolves around it. To me this means that in order to check for any intersection points, I now need to recompute the world's vertex coordinates for every camera movement (which was not needed with the gluLookAt approach).
As there is no way to obtain the needed new coordinates from GPU-land, they have to be calculated by hand in CPU-land. For every camera movement... :(

It seems I need to retain the current rotation about each of the 3 axes, and the same for the translations. No scaling is used in my program. My questions:

1 - Is the above reasoning flawed? How?
2 - If not, there has to be a way to avoid such recalculations.
The way I see it (and by looking at http://www.glprogramming.com/red/appendixf.html), it needs one matrix multiplication for the translations and another one for the rotation (only about the y axis is needed). However, having to compute so many additions/multiplications, and especially the sines/cosines, will certainly kill the FPS. There are going to be thousands or even tens of thousands of vertices to transform. Every frame... all that math... Once the new world coordinates are computed, things seem very easy: just check whether any plane changed the sign of its 'd' term (from the plane equation ax + by + cz + d = 0). If it did, use a lightweight cross-product approach to test whether the point lies inside each 'moved' triangle of that plane.

Thanks

edit: I have found out about glGet, and I think it is the way to go, but I do not know how to use it properly:

    // Retains the current modelview matrix
    //glPushMatrix();
    glGetFloatv(GL_MODELVIEW_MATRIX, m_vt16CurrentMatrixVerts);
    //glPopMatrix();

m_vt16CurrentMatrixVerts is a float[16], which ends up filled with 0.f, or 8.67453e-13, or something similar. Where am I screwing up?


Comments (1)

萌辣 2024-09-24 00:52:11


gluLookAt is a very handy function with absolutely no performance penalty. There is no reason not to use it, and, above all, no "HW vs SW" consideration about that. As Mk12 stated, glRotatef is also done on the CPU. The GPU part is : gl_Position = ProjectionMatrix x ViewMatrix x ModelMatrix x VertexPosition.

"using glTranslates and glRotates means that the viewpoint stays still" -> the same is true for gluLookAt.

"at (0,0,0) for simplicity" -> not for simplicity, it's a fact. However, this (0,0,0) is in the Camera coordinate system. It makes sense : relatively to the camera, the camera is at the origin...

Now, if you want to prevent the camera from going through walls, the usual method is to trace a ray from the camera. I suspect this is what you're talking about ("to check for any intersection points"). But there is no need to do this in camera space; you can do it in world space. Here's a comparison:

  • Tracing rays in camera space: the ray always starts from (0,0,0) and goes toward (0,0,-1). Geometry must be transformed from model space to world space, and then to camera space, which is what annoys you.
  • Tracing rays in world space: the ray starts from the camera position (in world space) and goes toward (eyeCenter - eyePos).normalize(). Geometry must be transformed from model space to world space only.

Note that there is no third option (tracing rays in model space) that would avoid transforming the geometry from model space to world space. However, you have a pair of workarounds:

  • First, your game's world is probably static: its model matrix is probably always the identity, so transforming its geometry from model space to world space is equivalent to doing nothing at all.
  • Second, for all other objects, you can take the opposite approach. Instead of transforming the entire geometry one way, transform only the ray the other way around: take your model matrix, invert it, and you've got a matrix which goes from world space to model space. Multiply your ray's origin and direction by this matrix: your ray is now in model space. Intersect the normal way. Done.

Note that everything I've said is standard technique. No hacks or other weird stuff, just math :)
