There are two different things to consider: Do you just want it to look correct, i.e. hidden surface removal? Then simple depth testing will do the job; the overhead is that you process geometry that doesn't make it onto the screen at all. However, if you took the data from a (very) old game, it's very likely that a full map with all its assets has fewer polygons than what modern games show on a single screenful. In that case you won't run into any performance problems.
If you really do run into performance problems, you'll need to find a balance between the time spent determining what is (not) visible and the time spent actually rendering. Ten years ago it was still crucial to be almost pixel perfect to save as much rasterizing time as possible. Modern GPUs have so much spare power that a coarse selection of what to include in rendering suffices.
These calculations are, however, completely outside the scope of OpenGL or any other 3D rasterizing API (e.g. Direct3D): their task is just drawing triangles to the screen using sophisticated rasterization methods. There's no object management and no higher-level functions, so it's up to you to implement this.
The typical approach is to use a spatial subdivision structure. The most popular are Kd-trees, octrees and BSP trees. BSP trees are spatially very efficient but heavier to compute. Personally I prefer a hybrid/combination of Kd-tree and octree, since those are easy to modify to follow dynamic changes in the scene. BSP trees are much heavier to update (an update usually requires a full recomputation).
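To make the octree idea concrete, here is a minimal sketch of a node type and the octant-selection step that drives insertion and lookup. All names and the box/point types are illustrative, not taken from any particular engine:

```cpp
#include <array>
#include <memory>
#include <vector>

// Minimal point and axis-aligned bounding-box types (illustrative only).
struct Vec3 { float x, y, z; };
struct AABB { Vec3 min, max; };

struct OctreeNode {
    AABB bounds;
    std::vector<int> objects;  // indices of objects stored at this node
    std::array<std::unique_ptr<OctreeNode>, 8> children;  // null until subdivided
};

// Pick which of the 8 octants a point falls into, relative to the box center.
// Bit 0 = +x half, bit 1 = +y half, bit 2 = +z half.
int octantIndex(const AABB& b, const Vec3& p) {
    Vec3 c = { (b.min.x + b.max.x) * 0.5f,
               (b.min.y + b.max.y) * 0.5f,
               (b.min.z + b.max.z) * 0.5f };
    return (p.x > c.x ? 1 : 0) | (p.y > c.y ? 2 : 0) | (p.z > c.z ? 4 : 0);
}
```

Insertion then just descends via `octantIndex` until it reaches a leaf, subdividing when a node holds too many objects; because each node owns its children, moving an object after a dynamic scene change only touches the affected branch.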
Given such a spatial structure, it's very easy to determine whether a point lies in a specific region of interest. It is also very simple to select nodes in the tree by geometric constraints such as planes. This makes implementing coarse frustum culling very easy: use the frustum's clipping planes to select all the nodes of the tree that lie within them. To make the GPU's life easier you may then want to sort the nodes near to far; again the tree structure helps you there, as you can sort recursively down the tree, resulting in a nearly optimal O(n log n) complexity.
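A coarse node-versus-frustum test can be sketched as follows: a node's bounding box is culled only if it lies completely behind one of the six frustum planes (the "positive vertex" trick picks the corner farthest along the plane normal). The type names here are my own, not from any library:

```cpp
#include <vector>

// Plane in Hessian normal form: n·p + d = 0; the positive side counts as "inside".
struct Plane { float nx, ny, nz, d; };
struct Box   { float minx, miny, minz, maxx, maxy, maxz; };

// True if the box is completely behind the plane. Only the corner farthest
// along the normal ("positive vertex") needs to be checked.
bool boxOutsidePlane(const Plane& pl, const Box& b) {
    float px = pl.nx >= 0 ? b.maxx : b.minx;
    float py = pl.ny >= 0 ? b.maxy : b.miny;
    float pz = pl.nz >= 0 ? b.maxz : b.minz;
    return pl.nx * px + pl.ny * py + pl.nz * pz + pl.d < 0;
}

// Conservative visibility: a box is kept unless some plane rejects it entirely.
// False positives are possible (box outside, near a frustum corner), which is
// fine for coarse culling.
bool boxInFrustum(const std::vector<Plane>& planes, const Box& b) {
    for (const Plane& pl : planes)
        if (boxOutsidePlane(pl, b)) return false;
    return true;
}
```

When walking the tree, a node whose box passes the test is recursed into; a node that fails rejects its entire subtree at once, which is where the savings come from.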
If you still need to improve rendering performance, you can use the spatial divisions defined by the tree to (invisibly) render test geometry in an occlusion query before recursing into the subtree bounded by the tested volume.
I know I can cull polygons not in the frustum, and that will help alleviate some of the load, but would I be able to, say, choose not to render polygons that are a certain distance from the camera? What is this called?
This is already done by the frustum itself: the far plane sets a camera-distance limit on the objects to be rendered.
Have a look at glFrustum.
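If you want a distance cutoff that is independent of (or tighter than) the projection's far plane, an explicit per-object check against a bounding sphere is a common sketch. Everything below is illustrative; the `maxDistance` would typically match the far-plane value you pass to your projection setup (e.g. glFrustum's far parameter):

```cpp
// Distance cull: skip objects whose bounding sphere lies entirely beyond a
// chosen draw distance from the camera. Squared distances avoid the sqrt.
bool withinDrawDistance(float camX, float camY, float camZ,
                        float objX, float objY, float objZ,
                        float radius, float maxDistance) {
    float dx = objX - camX, dy = objY - camY, dz = objZ - camZ;
    float dist2 = dx * dx + dy * dy + dz * dz;
    float limit = maxDistance + radius;  // let the sphere poke into range
    return dist2 <= limit * limit;
}
```

An object that fails this test is simply never submitted to the GPU, so unlike far-plane clipping, its vertices are never processed at all.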