Ray tracing a sphere with a displacement map

Posted 2024-10-08 02:00:17


I'm interested in building a simple "Google Earth"-type app (for overlaying my own information, not the huge quantity of data that Google has). I'd like it to just be a simple X11 app that ray-traces a sphere with displacement (topographic) information. Ray-sphere intersection is pretty simple, but when the displacement mapping is thrown in there, it starts to get muddy in my head.

I was wondering if there's a simple technique to extend basic ray-sphere intersection to include displacement data...
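For reference, the "pretty simple" starting point — analytic ray-sphere intersection — can be sketched in a few lines of Python (names here are illustrative, not from any particular library):

```python
import math

def ray_sphere_intersect(origin, direction, center, radius):
    """Return the nearest non-negative hit distance t along the ray,
    or None on a miss.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic
    in t; `direction` is assumed to be normalized (so a == 1).
    """
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    sqrt_disc = math.sqrt(disc)
    t = (-b - sqrt_disc) / 2.0  # nearer root first
    if t < 0.0:
        t = (-b + sqrt_disc) / 2.0  # origin inside the sphere
    return t if t >= 0.0 else None
```

A ray fired from (0, 0, -5) straight at a unit sphere at the origin hits at t = 4.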

Comments (4)

简单气质女生网名 2024-10-15 02:00:17



I know it's been 12 years since the first post, but I've uploaded a complete solution to this problem here:
https://github.com/ramakarl/just_math

See the math_displace_sphere sample. It ray-traces a displaced sphere terrain at arbitrary detail without any mesh. Pure CPU code that could be ported to a shader/GPU. MIT licensed.

The solution is to march along a ray, projecting each ray sample point back down to lat-long and sampling the terrain texture to determine the displaced height at that sample. Heights are specified relative to the center of the sphere.

Pseudo-code:

    for ( ; ray_hgt >= terrain_hgt && ray_hgt <= shell_hgt; ) {
        sample += ray_dir * dt;
        ray_hgt = (sample - sphere_center).Length();

        // given sample, compute surface_pnt and lat, long
        ComputeSphereUV ( sample, surface_pnt, lat, long );

        pixel_val = terrain_map->GetPixelUV ( lat, long ).x;
        terrain_hgt = sphere_radius + pixel_val * terrain_depth;
    }
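A runnable Python sketch of the same marching loop, with a procedural height function standing in for the terrain texture (function and parameter names here are illustrative, not the just_math API):

```python
import math

def vec_len(v):
    return math.sqrt(sum(x * x for x in v))

def terrain_height(lat, lon):
    # Stand-in for sampling a terrain texture: a smooth procedural
    # height in [0, 1]. Replace with a real lat/long heightmap lookup.
    return 0.5 + 0.5 * math.sin(4.0 * lat) * math.cos(4.0 * lon)

def march_displaced_sphere(origin, ray_dir, sphere_center, sphere_radius,
                           terrain_depth, dt=0.01, max_steps=10000):
    """March along the ray; return the hit point on the displaced
    terrain, or None if the ray leaves the shell without hitting it."""
    shell_hgt = sphere_radius + terrain_depth
    sample = list(origin)
    for _ in range(max_steps):
        sample = [s + d * dt for s, d in zip(sample, ray_dir)]
        rel = [s - c for s, c in zip(sample, sphere_center)]
        ray_hgt = vec_len(rel)
        # Project the sample point back down to lat/long on the sphere.
        lat = math.asin(max(-1.0, min(1.0, rel[1] / ray_hgt)))
        lon = math.atan2(rel[2], rel[0])
        terrain_hgt = sphere_radius + terrain_height(lat, lon) * terrain_depth
        if ray_hgt <= terrain_hgt:
            return tuple(sample)  # dropped below the displaced surface: hit
        moving_outward = sum(r * d for r, d in zip(rel, ray_dir)) > 0.0
        if moving_outward and ray_hgt > shell_hgt:
            return None  # left the terrain shell without a hit
    return None
```

In a real renderer you would first use the analytic ray-shell intersection to start `sample` at the shell boundary, rather than marching all the way from the camera.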
骷髅 2024-10-15 02:00:17


Displacement mapping is pretty easy -- just tessellate the sphere, add offsets to the vertex positions based on the altitude sampled from the maps, and ray-trace all the pieces.

How far away is the camera from the sphere/earth? If you're right near the surface, it's probably not worth making a whole "sphere" at all, just make a "height field." If you're far (viewing the whole planet at once), then even the tallest mountains shouldn't visibly displace the surface, so you should be using simple bump mapping instead. Consider also using a combination -- a coarse tessellation that's truly displaced, and bump mapping on the residual height differences.

But in any case, I can't imagine why you would ray trace, as you've described the problem. Just chop it into triangles and use OpenGL. You probably don't need any ray-traced effects.
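The tessellate-and-displace approach above can be sketched as follows — a minimal Python vertex generator, with a placeholder procedural height function standing in for the altitude maps:

```python
import math

def height_at(lat, lon):
    # Placeholder altitude lookup returning a value in [0, 1];
    # swap in a real topographic map sample here.
    return 0.5 + 0.5 * math.sin(3.0 * lat) * math.sin(2.0 * lon)

def displaced_sphere_vertices(radius, terrain_depth, n_lat=32, n_lon=64):
    """Tessellate a lat/long sphere and push every vertex outward by
    the sampled height, producing real displaced geometry."""
    verts = []
    for i in range(n_lat + 1):
        lat = -math.pi / 2.0 + math.pi * i / n_lat
        for j in range(n_lon):
            lon = -math.pi + 2.0 * math.pi * j / n_lon
            r = radius + height_at(lat, lon) * terrain_depth
            verts.append((r * math.cos(lat) * math.cos(lon),
                          r * math.sin(lat),
                          r * math.cos(lat) * math.sin(lon)))
    return verts
```

From there, index the grid into triangles and hand them to OpenGL (or a ray tracer's acceleration structure).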

半暖夏伤 2024-10-15 02:00:17


I found this paper: http://www.cgl.uwaterloo.ca/~ecdfourq/GI2008/FourquetGI2008.pdf

Thought I'd share as it seems to cover exactly what I want to do, thanks guys!

2024-10-15 02:00:17


Well, the ray-tracing process is the same whether you have a simple 20-poly sphere or a complex displaced 2k-poly sphere. The ray traverses the scene no matter what it contains. But ray tracing is used for achieving visual effects such as transparency, reflections, and refractions, and from what you said in your question I think you could go without those in your project. It is very likely that you can get away with simple, low-cost ray casting here.

So, once you have the rendering engine in place, you can add all the displacement you want to the scene. The two most common ways of modifying geometry are:

  1. Bump mapping, and
  2. Displacement mapping

Displacement mapping adds real polygons to existing geometry, while bump mapping only simulates the visual effect by bending surface normals and thus influencing the shading of the object. And while bending normals is a far quicker and less costly operation than tessellating geometry and adding new polygons, it does not produce precise shadowing results, so watch out for that if it is of any concern to your application.
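The normal-bending side of this can be sketched in a few lines of Python — finite-difference gradients of a bump map perturb the shading normal while the geometry itself never moves (the bump function and parameter names are illustrative):

```python
import math

def bump_height(u, v):
    # Stand-in bump map: a smooth procedural height in [0, 1].
    return 0.5 + 0.5 * math.sin(10.0 * u) * math.cos(10.0 * v)

def perturbed_normal(normal, tangent, bitangent, u, v,
                     strength=0.1, eps=1e-3):
    """Bend a surface normal using finite-difference gradients of the
    bump map; only shading changes, not the surface position."""
    # Height gradient in texture space, by central differences.
    du = (bump_height(u + eps, v) - bump_height(u - eps, v)) / (2.0 * eps)
    dv = (bump_height(u, v + eps) - bump_height(u, v - eps)) / (2.0 * eps)
    # Tilt the normal against the gradient along the tangent frame.
    n = [normal[i] - strength * (du * tangent[i] + dv * bitangent[i])
         for i in range(3)]
    inv_len = 1.0 / math.sqrt(sum(x * x for x in n))
    return tuple(x * inv_len for x in n)
```

The perturbed normal is renormalized, so it can be fed straight into any shading model; with `strength=0` the original normal comes back unchanged.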

Also, consider using adaptive Level of Detail algorithms and data structures, because the further away you are from the geometry the less detail it needs.
