Realtime raytracing - Sult (Loonies, 4k intro)
I'd like to know how the effects in Sult (link: http://www.youtube.com/watch?v=FWmv1ykGzis) were achieved. How is it possible that this intro does realtime raytracing?
3 Answers
Actually, this production was written using a really nice algorithm similar to ray tracing: ray marching signed distance fields.
Here's a 4k demo I made using the same techniques:
http://www.youtube.com/watch?v=t7anicyRI3w
There is an excellent presentation on the algorithm here:
http://www.iquilezles.org/www/material/nvscene2008/rwwtt.pdf
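
For the curious, here is a minimal sketch in C of the core sphere-tracing loop that ray marching of signed distance fields relies on: step along the ray by whatever distance the SDF reports, since no surface can be closer than that. The scene_sdf function and the constants are illustrative placeholders of mine, not anything taken from Sult or the demo above.

    #include <math.h>

    #define MAX_STEPS 128
    #define EPSILON   0.001f
    #define MAX_DIST  100.0f

    /* signed distance from point p to a unit sphere at the origin */
    static float scene_sdf(const float p[3])
    {
        return sqrtf(p[0]*p[0] + p[1]*p[1] + p[2]*p[2]) - 1.0f;
    }

    /* march along o + t*d; stepping by the SDF value is always safe
       because no surface lies closer than that distance */
    static float ray_march(const float o[3], const float d[3])
    {
        float t = 0.0f;
        for (int i = 0; i < MAX_STEPS; i++) {
            float p[3] = { o[0] + t*d[0], o[1] + t*d[1], o[2] + t*d[2] };
            float dist = scene_sdf(p);
            if (dist < EPSILON)
                return t;        /* close enough: count it as a hit */
            t += dist;
            if (t > MAX_DIST)
                break;           /* ray escaped the scene */
        }
        return -1.0f;            /* miss */
    }

The appeal for 4k intros is that the whole scene is just one small function; twisting or repeating the geometry is a one-line change to scene_sdf.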
It could be simple reflection cube mapping. Especially in fast-moving scenes it's difficult to spot the inaccuracies of a cube map compared to real raytracing. However, realtime raytracing in an intro is not unheard of: there's a 64k intro called "Heaven 7"
http://www.pouet.net/prod.php?which=5
And an article by the developer on the technical details:
http://www.exceed.hu/h7/subsample.htm
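
For illustration, here is a small sketch in C of what reflection cube mapping boils down to: mirror the view direction about the surface normal and use the dominant axis of the result to pick one of six prerendered environment faces. This is generic textbook math, not code from Heaven 7, and the face numbering is just one common convention; real GPUs do the face selection in hardware.

    #include <math.h>

    /* R = D - 2(D.N)N: reflect incident direction D about unit normal N */
    static void reflect3(const float d[3], const float n[3], float r[3])
    {
        float k = 2.0f * (d[0]*n[0] + d[1]*n[1] + d[2]*n[2]);
        for (int i = 0; i < 3; i++)
            r[i] = d[i] - k * n[i];
    }

    /* pick the cube face from the largest |component| of R
       (0..5 = +X, -X, +Y, -Y, +Z, -Z) */
    static int cube_face(const float r[3])
    {
        float ax = fabsf(r[0]), ay = fabsf(r[1]), az = fabsf(r[2]);
        if (ax >= ay && ax >= az) return r[0] > 0.0f ? 0 : 1;
        if (ay >= az)             return r[1] > 0.0f ? 2 : 3;
        return                           r[2] > 0.0f ? 4 : 5;
    }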
As said before, there were even software-rendered demos achieving realtime raytracing, like Heaven7 or Nature Suxx. I remember running them at decent speeds on a Pentium MMX at 200 MHz back then, in 320*200 resolution. They got that fast by subdividing the screen into tiles and raytracing only the pixels at the corners of each tile; only where the corner values differed significantly would they subdivide further, into 4*4 and then 2*2 tiles, and do more calculations. Otherwise, they interpolated the values in between.
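
A minimal sketch of that adaptive subsampling idea, assuming a power-of-two tile size and hypothetical trace_pixel / put_pixel helpers standing in for the real per-pixel raytracer and framebuffer (the 0.1 threshold, for colors in [0,1], is arbitrary):

    #include <math.h>

    typedef struct { float r, g, b; } Color;

    /* hypothetical helpers: trace the primary ray for one pixel,
       and write a finished pixel to the framebuffer */
    extern Color trace_pixel(int x, int y);
    extern void  put_pixel(int x, int y, Color c);

    /* do two corner colors differ enough to warrant subdividing? */
    static int differ(Color a, Color b)
    {
        return fabsf(a.r - b.r) + fabsf(a.g - b.g) + fabsf(a.b - b.b) > 0.1f;
    }

    static Color lerp(Color a, Color b, float t)
    {
        Color c = { a.r + (b.r - a.r) * t,
                    a.g + (b.g - a.g) * t,
                    a.b + (b.b - a.b) * t };
        return c;
    }

    /* trace only the four tile corners; if they roughly agree,
       interpolate the interior, otherwise recurse into four sub-tiles */
    static void render_tile(int x, int y, int size)   /* size: power of two */
    {
        Color c00 = trace_pixel(x,        y);
        Color c10 = trace_pixel(x + size, y);
        Color c01 = trace_pixel(x,        y + size);
        Color c11 = trace_pixel(x + size, y + size);

        if (size > 1 && (differ(c00, c10) || differ(c00, c01) ||
                         differ(c00, c11) || differ(c10, c11))) {
            int h = size / 2;                 /* corners disagree: subdivide */
            render_tile(x,     y,     h);
            render_tile(x + h, y,     h);
            render_tile(x,     y + h, h);
            render_tile(x + h, y + h, h);
        } else {
            for (int j = 0; j <= size; j++)   /* corners agree: interpolate */
                for (int i = 0; i <= size; i++) {
                    float u = (float)i / size, v = (float)j / size;
                    put_pixel(x + i, y + j,
                              lerp(lerp(c00, c10, u), lerp(c01, c11, u), v));
                }
        }
    }

A real implementation would cache corner samples shared between neighbouring tiles instead of retracing them as this sketch does.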
Nowadays it's much easier to optimize raytracing on the GPU with shader code, and per-pixel raytracing of many spheres in HD resolution at good frame rates is feasible. Sult just uses another technique called raymarching, but essentially it's another way to raytrace (depending on how you define the terms), with its own advantages and disadvantages. For example, it's very well suited to small-size intros, and to specific twisted objects and repetitions of objects in space that are not as easy or fast to achieve with traditional raytracing.
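
That repetition of objects in space is one of raymarching's party tricks and costs only a few lines: wrap the sample point into a single cell before evaluating the distance function, and one sphere becomes an infinite grid of them. A sketch in C of the well-known mod-based construction (the function names are mine):

    #include <math.h>

    /* GLSL-style mod that is always non-negative, unlike C's fmodf */
    static float pmod(float x, float c)
    {
        float m = fmodf(x, c);
        return m < 0.0f ? m + c : m;
    }

    static float sd_sphere(const float p[3], float r)
    {
        return sqrtf(p[0]*p[0] + p[1]*p[1] + p[2]*p[2]) - r;
    }

    /* wrap the sample point into one cell of width `cell`, so a single
       sphere SDF becomes an infinite grid of spheres at almost no cost */
    static float sd_repeated_spheres(const float p[3], float cell, float r)
    {
        float q[3];
        for (int i = 0; i < 3; i++)
            q[i] = pmod(p[i] + 0.5f * cell, cell) - 0.5f * cell;
        return sd_sphere(q, r);
    }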
But based on the phrasing of your original question, I guess you might also be wondering: why do these people do it, while we haven't seen it in the mainstream? We have all heard from game industry developers that raytracing is something for the future, not achievable yet in gaming. So how come some hobbyists from the demoscene make it possible, while industry veterans say the time isn't ripe? Technically we can already raytrace 50 spheres, or raymarch a repetition of the same sphere and make it twist and distort. But games use polygons, hundreds of thousands or millions of them. Checking a single ray against millions of polygons is an entirely different story. I know there are methods like kd-tree subdivision of space, so a ray is only tested against the few polygons near it, but it's still a very hard problem even with powerful GPUs. And maybe there is not much to gain, besides getting perfect shadows and reflections for free (which polygon engines have to achieve in tedious ways), and a lot to lose. Demoscene intros mostly raytrace scenes of abstract geometric shapes, implicit functions or voxel data, all of which are far removed from real-life 3D scenes and video game characters. And most of those scenes are small; they wouldn't easily work in an open sandbox game.
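
To make the kd-tree point concrete, here is a hedged sketch in C of the classic recursive traversal: a ray only descends into the children it actually crosses, so in the end it is tested against just the handful of triangles in the leaves along its path. intersect_tris is a hypothetical leaf-test helper, and the sketch glosses over rays exactly parallel to a splitting plane:

    typedef struct { float t; int tri; } Hit;   /* nearest hit found so far */

    typedef struct KdNode {
        int   axis;                  /* 0, 1, 2 = x, y, z split; -1 = leaf */
        float split;                 /* coordinate of the splitting plane */
        struct KdNode *left, *right; /* axis < split side / axis >= split side */
        const int *tris;             /* leaf only: indices of local triangles */
        int   ntris;
    } KdNode;

    /* hypothetical helper: intersect the ray with a leaf's triangle list,
       recording the nearest hit with parameter in [tmin, tmax] */
    extern int intersect_tris(const int *tris, int n, const float o[3],
                              const float d[3], float tmin, float tmax,
                              Hit *hit);

    static int kd_trace(const KdNode *nd, const float o[3], const float d[3],
                        float tmin, float tmax, Hit *hit)
    {
        if (nd->axis < 0)            /* leaf: test its few local triangles */
            return intersect_tris(nd->tris, nd->ntris, o, d, tmin, tmax, hit);

        /* ray parameter where it crosses the splitting plane */
        float t = (nd->split - o[nd->axis]) / d[nd->axis];
        const KdNode *nearc = o[nd->axis] < nd->split ? nd->left  : nd->right;
        const KdNode *farc  = o[nd->axis] < nd->split ? nd->right : nd->left;

        if (t > tmax || t < 0.0f)    /* plane crossed outside the span: */
            return kd_trace(nearc, o, d, tmin, tmax, hit);
        if (t < tmin)
            return kd_trace(farc, o, d, tmin, tmax, hit);

        if (kd_trace(nearc, o, d, tmin, t, hit))  /* both sides, near first */
            return 1;
        return kd_trace(farc, o, d, t, tmax, hit);
    }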
So, while these intros really do achieve realtime raytracing, and we have done it even on the CPU since 2000, it's not really practical for game development, where polygon engines are still more efficient and useful in the real world. That's why you'll hear professionals claim the hardware isn't ready for raytracing (in their million-polygon scenes), yet see some hobbyists doing it on the GPU, even in 4k-sized intros.