Are there rendering alternatives to rasterization or ray tracing?

Posted on 2024-07-17 03:04:30

Rasterisation (triangles) and ray tracing are the only methods I've ever come across to render a 3D scene. Are there any others? Also, I'd love to know of any other really "out there" ways of doing 3D, such as not using polygons.


Comments (3)

说好的呢 2024-07-24 03:04:30

Aagh! These answers are very uninformed!

Of course, it doesn't help that the question is imprecise.

OK, "rendering" is a really wide topic. One issue within rendering is camera visibility or "hidden surface algorithms" -- figuring out what objects are seen in each pixel. There are various categorizations of visibility algorithms. That's probably what the poster was asking about (given that they thought of it as a dichotomy between "rasterization" and "ray tracing").

A classic (though now somewhat dated) categorization reference is Sutherland et al., "A Characterization of Ten Hidden-Surface Algorithms", ACM Computing Surveys, 1974. It's very outdated, but it's still excellent for providing a framework for thinking about how to categorize such algorithms.

One class of hidden surface algorithms involves "ray casting", which is computing the intersection of the line from the camera through each pixel with objects (which can have various representations, including triangles, algebraic surfaces, NURBS, etc.).
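
To make that concrete, here is a minimal, illustrative ray casting sketch (not from the original answer): a pinhole camera fires one ray per pixel and keeps the nearest sphere intersection. The scene, camera placement, and all names are assumptions made up for the example.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// One ray per pixel, tested against every object; the nearest hit wins.
struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere { Vec3 center; double radius; };

// Nearest positive ray parameter t along origin + t*dir, or -1 on a miss.
static double intersect(Vec3 origin, Vec3 dir, const Sphere& s) {
    Vec3 oc = sub(origin, s.center);
    double a = dot(dir, dir);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - s.radius * s.radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return -1.0;
    double t = (-b - std::sqrt(disc)) / (2.0 * a);
    return t > 0.0 ? t : -1.0;
}

int main() {
    const int W = 32, H = 16;
    std::vector<Sphere> scene = {{{0, 0, -5}, 1.5}};
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // Pinhole camera at the origin, image plane at z = -1.
            Vec3 dir = {(x + 0.5) / W - 0.5, 0.5 - (y + 0.5) / H, -1.0};
            double nearest = 1e30;
            bool hit = false;
            for (const Sphere& s : scene) {
                double t = intersect({0, 0, 0}, dir, s);
                if (t > 0.0 && t < nearest) { nearest = t; hit = true; }
            }
            std::putchar(hit ? '#' : '.');
        }
        std::putchar('\n');
    }
}
```

Note the loop structure: pixels on the outside, objects on the inside. This is the image-order formulation.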

Other classes of hidden surface algorithms include "z-buffer", "scanline techniques", "list priority algorithms", and so on. They were pretty darned creative with algorithms back in the days when there weren't many compute cycles and not enough memory to store a z-buffer.

These days, both compute and memory are cheap, and so three techniques have pretty much won out: (1) dicing everything into triangles and using a z-buffer; (2) ray casting; (3) Reyes-like algorithms that use an extended z-buffer to handle transparency and the like. Modern graphics cards do #1; high-end software rendering usually does #2 or #3 or a combination. Various ray tracing hardware designs have been proposed, and sometimes built, but never caught on; modern GPUs are also now programmable enough to actually ray trace, though at a severe speed disadvantage compared to their hard-coded rasterization techniques. Other more exotic algorithms have mostly fallen by the wayside over the years. (Although various sorting/splatting algorithms can be used for volume rendering or other special purposes.)
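
For contrast with the ray casting sketch above, here is a minimal illustration of technique #1 (everything diced into screen-space triangles plus a z-buffer). This is a toy, not any particular renderer; the depth test is the whole trick, since a fragment only lands in the frame if it is nearer than what is already stored there.

```cpp
#include <cstdio>
#include <vector>

// Screen-space triangles rasterized with edge functions; the depth test
// keeps the nearest fragment per pixel.
struct Vtx { double x, y, z; };          // already projected to screen space
struct Tri { Vtx v[3]; char shade; };    // shade: character "color"

static double edge(const Vtx& a, const Vtx& b, double px, double py) {
    return (px - a.x) * (b.y - a.y) - (py - a.y) * (b.x - a.x);
}

int main() {
    const int W = 24, H = 12;
    std::vector<double> zbuf(W * H, 1e30);     // initialized to "far"
    std::vector<char> frame(W * H, '.');
    std::vector<Tri> tris = {
        {{{2, 1, 5}, {20, 2, 5}, {10, 10, 5}}, '#'},   // nearer triangle
        {{{6, 0, 8}, {23, 6, 8}, {1, 11, 8}}, '+'},    // farther triangle
    };
    for (const Tri& t : tris) {
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x) {
                double px = x + 0.5, py = y + 0.5;
                double w0 = edge(t.v[1], t.v[2], px, py);
                double w1 = edge(t.v[2], t.v[0], px, py);
                double w2 = edge(t.v[0], t.v[1], px, py);
                bool inside = (w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                              (w0 <= 0 && w1 <= 0 && w2 <= 0);
                if (!inside) continue;
                // Flat depth per triangle keeps the sketch short; a real
                // rasterizer interpolates z with barycentric weights.
                double z = t.v[0].z;
                if (z < zbuf[y * W + x]) {     // depth test: keep nearest
                    zbuf[y * W + x] = z;
                    frame[y * W + x] = t.shade;
                }
            }
    }
    for (int y = 0; y < H; ++y) {
        std::fwrite(&frame[y * W], 1, W, stdout);
        std::putchar('\n');
    }
}
```

Here the loops are mirrored: objects on the outside, pixels on the inside. That object-order vs. image-order flip is exactly the rasterization/ray casting distinction discussed below.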

"Rasterizing" really just means "figuring out which pixels an object lies on." Convention dictates that it excludes ray tracing, but this is shaky. I suppose you could justify that rasterization answers "which pixels does this shape overlap" whereas ray tracing answers "which object is behind this pixel", if you see the difference.

Now then, hidden surface removal is not the only problem to be solved in the field of "rendering." Knowing what object is visible in each pixel is only a start; you also need to know what color it is, which means having some method of computing how light propagates around the scene. There are a whole bunch of techniques, usually broken down into dealing with shadows, reflections, and "global illumination" (that which bounces between objects, as opposed to coming directly from lights).

"Ray tracing" means applying the ray casting technique to also determine visibility for shadows, reflections, global illumination, etc. It's possible to use ray tracing for everything, or to use various rasterization methods for camera visibility and ray tracing for shadows, reflections, and GI. "Photon mapping" and "path tracing" are techniques for calculating certain kinds of light propagation (using ray tracing, so it's just wrong to say they are somehow fundamentally a different rendering technique). There are also global illumination techniques that don't use ray tracing, such as "radiosity" methods (which is a finite element approach to solving global light propagation, but in most parts of the field have fallen out of favor lately). But using radiosity or photon mapping for light propagation STILL requires you to make a final picture somehow, generally with one of the standard techniques (ray casting, z buffer/rasterization, etc.).

People who mention specific shape representations (NURBS, volumes, triangles) are also a little confused. That is a problem orthogonal to ray tracing vs. rasterization. For example, you can ray trace NURBS directly, or you can dice the NURBS into triangles and trace those. You can directly rasterize triangles into a z-buffer, but you can also directly rasterize high-order parametric surfaces in scanline order (cf. Lane/Carpenter/etc., CACM 1980).

做个ˇ局外人 2024-07-24 03:04:30

There's a technique called photon mapping that is actually quite similar to ray tracing, but provides various advantages in complex scenes. In fact, it's the only method (at least of which I know) that provides truly realistic (i.e. all the laws of optics are obeyed) rendering if done properly. It's a technique that's used sparingly as far as I know, since its performance is hugely worse than even ray tracing's (given that it effectively does the opposite and simulates the paths taken by photons from the light sources to the camera) - yet this is its only disadvantage. It's certainly an interesting algorithm, though you're not going to see it in widescale use until well after ray tracing (if ever).
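
As a toy illustration of that forward direction (photons flowing out from the light, the reverse of camera rays), the sketch below scatters photons from a point light onto a ground plane and accumulates them in a grid. A real photon mapper would store hits in a kd-tree and estimate radiance from the nearest photons; everything here, including the scene and constants, is an assumption made up for the example.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>

// Toy forward photon pass: photons leave a point light in uniformly random
// downward directions and are deposited where they strike the plane y = 0.
// A coarse grid histogram stands in for the photon map.
int main() {
    const int W = 32, H = 16;                 // grid over x,z in [-4, 4]
    int grid[H][W] = {};
    std::mt19937 rng(42);
    std::normal_distribution<double> n(0.0, 1.0);
    const double ly = 4.0;                    // point light at (0, 4, 0)
    for (int i = 0; i < 200000; ++i) {
        // An isotropic Gaussian gives a uniformly random direction.
        double dx = n(rng), dy = n(rng), dz = n(rng);
        if (dy == 0.0) continue;
        dy = -std::abs(dy);                   // keep the downward hemisphere
        double t = -ly / dy;                  // hit parameter on plane y = 0
        double x = t * dx, z = t * dz;
        int gx = (int)((x + 4.0) / 8.0 * W), gz = (int)((z + 4.0) / 8.0 * H);
        if (x >= -4 && x < 4 && z >= -4 && z < 4) ++grid[gz][gx];
    }
    // ASCII density plot of the "photon map": brightest under the light.
    for (int row = 0; row < H; ++row) {
        for (int col = 0; col < W; ++col)
            std::putchar(" .:*#"[std::min(grid[row][col] / 50, 4)]);
        std::putchar('\n');
    }
}
```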

听,心雨的声音 2024-07-24 03:04:30

The Rendering article on Wikipedia covers various techniques.

Intro paragraph:

Many rendering algorithms have been researched, and software used for rendering may employ a number of different techniques to obtain a final image.

Tracing every ray of light in a scene is impractical and would take an enormous amount of time. Even tracing a portion large enough to produce an image takes an inordinate amount of time if the sampling is not intelligently restricted.

Therefore, four loose families of more-efficient light transport modelling techniques have emerged: rasterisation, including scanline rendering, geometrically projects objects in the scene to an image plane, without advanced optical effects; ray casting considers the scene as observed from a specific point of view, calculating the observed image based only on geometry and very basic optical laws of reflection intensity, and perhaps using Monte Carlo techniques to reduce artifacts; radiosity uses finite element mathematics to simulate diffuse spreading of light from surfaces; and ray tracing is similar to ray casting, but employs more advanced optical simulation, and usually uses Monte Carlo techniques to obtain more realistic results at a speed that is often orders of magnitude slower.

Most advanced software combines two or more of the techniques to obtain good-enough results at reasonable cost.

Another distinction is between image order algorithms, which iterate over pixels of the image plane, and object order algorithms, which iterate over objects in the scene. Generally object order is more efficient, as there are usually fewer objects in a scene than pixels.

From those descriptions, only radiosity seems different in concept to me.
