4D Hidden Surface Removal

Posted 2024-12-01 12:43:49


I'm trying to program a little 4D game. I use C++ and OpenGL. This website gives a good explanation of how to enhance 4D images:
http://eusebeia.dyndns.org/4d/vis/07-hsr.html#Enhancing_4D_Projection_Images

They say to apply a 4D Hidden Surface Removal (HSR) algorithm.
I have to say I am a newbie in programming and algorithms, and I don't really have an idea where to start putting together a 4D HSR, nor a 3D one.
If somebody has experience with these kinds of algorithms, could you explain how to translate one into C++?

By the way: I project the 4D space into 3D, so I will need an algorithm for vertex removal rather than pixel modification, or at least that's what I think; I could be wrong.

Comments (1)

或十年 2024-12-08 12:43:49


In many cases, 3D hidden surface removal means that when you draw a surface, you also remember the depth of each pixel you draw (its distance from the 'eye'). When you go to draw a surface where one has already been drawn, you only draw a pixel if it is closer to the eye than the pixel that's already there. In OpenGL, a 3D graphics library that projects 3D scene descriptions onto a 2D display, this is called the depth buffer test.
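A minimal software sketch of that idea (OpenGL's depth buffer does all of this for you when you call `glEnable(GL_DEPTH_TEST)`; the names and the tiny 4x4 framebuffer below are my own, for illustration only):

```cpp
// Minimal sketch of a software depth buffer.
#include <array>
#include <cassert>
#include <limits>

struct DepthBuffer {
    static constexpr int W = 4, H = 4;            // tiny 4x4 framebuffer
    std::array<float, W * H> depth;               // distance from the eye per pixel
    std::array<int, W * H>   color;               // stand-in for a pixel color

    DepthBuffer() {
        // Everything starts "infinitely far away".
        depth.fill(std::numeric_limits<float>::infinity());
        color.fill(0);
    }

    // Draw a pixel only if it is closer to the eye than what is already there.
    bool plot(int x, int y, float z, int c) {
        int i = y * W + x;
        if (z < depth[i]) {
            depth[i] = z;
            color[i] = c;
            return true;   // pixel was drawn
        }
        return false;      // pixel was hidden by a nearer surface
    }
};
```

A farther pixel drawn onto the same location after a nearer one simply has no effect, which is exactly what the depth buffer test guarantees.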

You might also keep track of which direction each surface is facing. If it faces away from the 'eye', then don't draw it at all. In OpenGL, this is called backface culling. If you want to create the layered transparency look that your article describes, then you have to sort the pixels by depth and draw the deepest ones first. Then draw the nearer ones on top, replacing the current pixel with a convex combination of the old pixel color and the current surface's color.
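Both ideas fit in a few lines; the vector type and function names below are my own sketch, not OpenGL API calls:

```cpp
// (1) Backface culling via the sign of (face normal) . (view direction), and
// (2) layered transparency as a convex combination of old and new colors.
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// A face is back-facing if its normal points away from the eye, i.e. it has a
// non-negative component along the view direction.
bool isBackFacing(const Vec3& faceNormal, const Vec3& viewDir) {
    return dot(faceNormal, viewDir) >= 0.0f;
}

// Convex combination of the old pixel color and the surface color:
// alpha = 1 is fully opaque, alpha = 0 leaves the old pixel untouched.
Vec3 blend(const Vec3& oldColor, const Vec3& surfaceColor, float alpha) {
    return { oldColor.x * (1.0f - alpha) + surfaceColor.x * alpha,
             oldColor.y * (1.0f - alpha) + surfaceColor.y * alpha,
             oldColor.z * (1.0f - alpha) + surfaceColor.z * alpha };
}
```

Note that the blend is order-dependent, which is why the deepest surfaces must be drawn first.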

For your 4D case, you need to decide what it means to project a 4D thing. I think that any 3D model that is also animated is actually a 4D model. You could, if you wanted, draw all of the frames at once, as with this simulated racquetball player:
[Image: 4D racquetball player]

In this image, time is represented by grayscale value. So you can see the player move across the court and hit the ball, all in a single static image. Of course, a proper 4D-to-2D projection probably wouldn't display a bunch of discrete time frames drawn together; rather, it would connect the vertices that bridge time, so that instead of seeing a bunch of individual balls you would see a 'tube' representing the ball's trajectory. But, as the article you link mentions, in some cases that might communicate less.
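As a starting point for projecting 4D vertices into 3D, here is one possible sketch, by analogy with ordinary 3D-to-2D perspective projection: put the eye at distance d along the w axis and scale each point by how far it is from the eye along w. The types, names, and the choice of conventions are all my own assumptions, not the linked article's:

```cpp
// Hypothetical 4D -> 3D perspective projection sketch.
// Eye at (0, 0, 0, d), projecting onto the w = 0 hyperplane.
#include <cassert>
#include <cmath>

struct Vec4 { float x, y, z, w; };
struct Vec3 { float x, y, z; };

Vec3 projectTo3D(const Vec4& p, float eyeDistance) {
    // Points nearer the eye along w (larger w) get magnified,
    // just as near objects look bigger in a 3D perspective projection.
    float scale = eyeDistance / (eyeDistance - p.w);
    return { p.x * scale, p.y * scale, p.z * scale };
}
```

The resulting 3D points can then be fed to the 3D pipeline, where the depth-buffer and culling ideas above apply; the separate 4D HSR pass the article describes would happen before this projection.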
