Advantages of using a Z-buffer vs. prioritising pixels by their depth
This is a bit more of an academic question. Indeed, I am preparing for an exam and I'm just trying to truly understand this concept.
Allow me to explain somewhat the context. The issue at hand is hiding objects (or more specifically polygons) behind each other when drawing to the screen. A calculation needs to be done to decide which one gets drawn last and therefore to the forefront.
In a lecture I was at the other day, my professor stated that prioritising pixels in terms of their depth value was computationally inefficient. He then gave us a short explanation of Z-buffers and how they test the depth values of pixels and compare them with the depth values of pixels in a buffer. How is this any different from 'prioritising pixels in terms of their depth'?
Thanks!
Deciding which polygon a fragment belongs to is computationally expensive, because it would require finding the closest polygon for every single pixel (and having the entire geometry information available during pixel shading!).
It is easy, almost trivial, to sort entire objects, each consisting of many triangles (a polygon is no more than one or several triangles), according to their depth. This, however, is only a rough approximation: nearby objects will overlap and produce artefacts, so something needs to be done to make it pixel-perfect.
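The whole-object sort described above can be sketched as follows (the object names and depth values are made up for illustration; "depth" stands for each object's representative distance from the viewer):

```python
# Rough, painter's-algorithm-style depth sort of whole objects.
# Hypothetical data: each object gets one representative depth value.
objects = [
    {"name": "tree",  "depth": 30.0},
    {"name": "house", "depth": 10.0},
    {"name": "cloud", "depth": 90.0},
]

# Draw the farthest object first, so nearer ones paint over it.
draw_order = sorted(objects, key=lambda o: o["depth"], reverse=True)
```

A single per-object depth cannot capture interpenetrating or overlapping geometry, which is exactly the artefact the z-buffer then fixes per pixel.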
This is where the z-buffer comes in. If it turns out that a fragment's calculated depth is greater than what's already stored in the z-buffer, the fragment is "behind something", so it is discarded. Otherwise, the fragment is written to the color buffer and its depth value to the z-buffer. Of course, that means that when 20 triangles sit behind each other, the same pixel will be shaded 19 times in vain. Alas, bad luck.
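The per-pixel test can be written in a few lines (a minimal sketch with made-up buffer sizes and helper names, not any real graphics API):

```python
# Minimal z-buffer sketch: one depth value per pixel, initialised to
# "infinitely far away"; a fragment survives only if it is closer.
WIDTH, HEIGHT = 4, 4
FAR = float("inf")

z_buffer = [[FAR] * WIDTH for _ in range(HEIGHT)]
color_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

def write_fragment(x, y, depth, color):
    """Keep the fragment only if it is closer than what is stored."""
    if depth >= z_buffer[y][x]:
        return False            # behind something already drawn: discard
    z_buffer[y][x] = depth      # remember the new closest depth
    color_buffer[y][x] = color
    return True

write_fragment(1, 1, 5.0, (255, 0, 0))  # red at depth 5 is kept
write_fragment(1, 1, 9.0, (0, 255, 0))  # green at depth 9 is discarded
write_fragment(1, 1, 2.0, (0, 0, 255))  # blue at depth 2 overwrites red
```

Note that the three fragments arrive in arbitrary order, yet the closest one (blue) wins regardless; that is the entire point of the buffer.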
Modern graphics hardware addresses this by doing the z test before actually shading a pixel, according to the interpolated depth of the triangle's vertices (this optimization is obviously not possible if per-pixel depth is calculated).
Also, they employ conservative (sometimes hierarchical, sometimes just tiled) optimizations which discard entire groups of fragments quickly. For this, the z-buffer holds some additional (opaque to you) information, such as the maximum depth rendered to a 64x64 rectangular area. With this information, it can immediately discard any fragments in this screen area whose depth is greater than that, without actually looking at the stored depths, and it can fully discard any fragments belonging to a triangle all of whose vertices have a greater depth. Because, obviously, there is no way any of it could be visible.
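The coarse tile test can be sketched on top of a per-pixel z-buffer like this (a toy 2x2-tile version with invented names; real hierarchical-z schemes are proprietary and update their summaries incrementally rather than by rescanning):

```python
# Tiled z rejection sketch: per tile, keep the maximum depth of any
# pixel in the tile. A fragment farther than that maximum is behind
# every pixel of the tile and can be rejected without per-pixel reads.
TILE = 2
W = H = 4
FAR = float("inf")

z = [[FAR] * W for _ in range(H)]
tile_max = [[FAR] * (W // TILE) for _ in range(H // TILE)]

def write_fragment(x, y, depth):
    tx, ty = x // TILE, y // TILE
    if depth > tile_max[ty][tx]:
        return False            # coarse reject: behind the whole tile
    if depth >= z[y][x]:
        return False            # fine-grained per-pixel test
    z[y][x] = depth
    # Recompute the tile summary (real hardware avoids this rescan).
    tile_max[ty][tx] = max(z[ty * TILE + j][tx * TILE + i]
                           for j in range(TILE) for i in range(TILE))
    return True

# Fill the top-left tile so its summary becomes meaningful.
for (px, py), d in [((0, 0), 3.0), ((1, 0), 4.0),
                    ((0, 1), 5.0), ((1, 1), 6.0)]:
    write_fragment(px, py, d)
```

Until every pixel of a tile has been written, its maximum stays at "far", so the coarse test never wrongly rejects anything; it only kicks in once the summary is conservative and true.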
Those are implementation details, and very platform specific, though.
EDIT: Though this is probably obvious, I'm not sure if I made that point clear enough: when sorting to exploit z-culling, you would do the exact opposite of what you do with the painter's algorithm. You want the closest things drawn first (roughly; it does not have to be 100% precise), so instead of determining a pixel's final color in the sense of "last man standing", you determine it in the sense of "first come, first served, and only one served".
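The effect of that ordering on shading work can be shown with a toy count of how often one pixel is shaded when a z-test runs before shading (early-z); the depth values are made up:

```python
# Count how many times a single pixel gets shaded, given the depths of
# the triangles covering it in draw order, with an early z-test.
def shade_count(depths_in_draw_order):
    closest = float("inf")
    shaded = 0
    for d in depths_in_draw_order:
        if d < closest:         # z-test passes: this fragment is shaded
            closest = d
            shaded += 1
    return shaded

depths = [1, 2, 3, 4, 5]        # five triangles stacked at one pixel

back_to_front = shade_count(sorted(depths, reverse=True))  # worst case
front_to_back = shade_count(sorted(depths))                # best case
```

Back-to-front order shades the pixel once per triangle, while front-to-back shades it exactly once: the first (closest) fragment is served and every later one fails the z-test.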
The first thing you need to understand is what your professor meant by 'prioritising pixels in terms of their depth'. My guess is that it's about storing all requested fragments for a given screen pixel and then producing the resulting color by choosing the closest fragment. That is inefficient because a Z-buffer allows us to store only a single value instead of all of them.
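The contrast can be made concrete with a toy pixel (this is a hypothetical reconstruction of what the professor may have meant, with invented fragment data):

```python
# One pixel receives three fragments as (depth, color) pairs.
fragments = [(5.0, "red"), (2.0, "blue"), (9.0, "green")]

# (a) "Prioritising by depth": store every fragment, sort at the end,
#     pick the closest -- O(n) memory per pixel plus a sort.
color_a = sorted(fragments)[0][1]

# (b) Z-buffer: keep only the closest-so-far -- O(1) memory per pixel,
#     one comparison per incoming fragment.
best_depth, color_b = float("inf"), None
for depth, color in fragments:
    if depth < best_depth:
        best_depth, color_b = depth, color
```

Both approaches produce the same color for the pixel; the z-buffer simply never needs to remember more than one depth/color pair at a time.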