Design advice for OpenGL ES 2 / iOS GLKit

I'd like to build an app using the new GLKit framework, and I'm in need of some design advice. I'd like to create an app that will present up to a couple thousand "bricks" (objects with very simple geometry). Most will have identical texture, but up to a couple hundred will have unique texture. I'd like the bricks to appear every few seconds, move into place and then stay put (in world coords). I'd like to simulate a camera whose position and orientation are controlled by user gestures.

The advice I need is about how to organize the code. I'd like my model to be a collection of bricks that have a lot more than graphical data associated with them:

  • Does it make sense to associate a view-like object with each brick to handle geometry, texture, etc.?
  • Should every brick have its own vertex buffer?
  • Should each have its own GLKBaseEffect?
  • I'm looking for help organizing which object should do what during setup and then during rendering.

I hope I can stay close to the typical MVC pattern, with my GLKViewController observing model state changes, controlling eye coordinates based on gestures, and so on.

Would be much obliged if you could give some advice or steer me toward a good example. Thanks in advance!

Comments (1)

ぶ宁プ宁ぶ 2024-12-20 02:55:12

With respect to the models, I think an approach analogous to the relationship between UIImage and UIImageView is appropriate. So every type of brick has a single vertex buffer, GLKBaseEffect, texture and whatever else. Each brick may then appear multiple times, just as multiple UIImageViews may use the same UIImage. In terms of keeping multiple reference frames, it's actually a really good idea to build a hierarchy essentially equivalent to UIView's, with each node containing a transform relative to its parent and one sort of node being able to display a model.
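
If it helps make that concrete, here's a rough Swift sketch of the split; BrickType and Brick are hypothetical names, with one shared buffer/effect/texture per kind of brick and a UIView-like node hierarchy whose leaves reference a type:

```swift
import GLKit

// One shared object per *kind* of brick (the UIImage of the analogy):
// a single vertex buffer, GLKBaseEffect and texture for every instance.
final class BrickType {
    let vertexBuffer: GLuint
    let vertexCount: GLsizei
    let effect: GLKBaseEffect
    let texture: GLKTextureInfo?

    init(vertexBuffer: GLuint, vertexCount: GLsizei,
         effect: GLKBaseEffect, texture: GLKTextureInfo?) {
        self.vertexBuffer = vertexBuffer
        self.vertexCount = vertexCount
        self.effect = effect
        self.texture = texture
    }
}

// A node in the UIView-like hierarchy (the UIImageView of the analogy):
// a transform relative to the parent, children, and optionally a type
// to display. Many Bricks can point at the same BrickType.
final class Brick {
    var transform = GLKMatrix4Identity   // relative to parent
    var children: [Brick] = []
    var type: BrickType?                 // nil for pure grouping nodes

    func render(parentTransform: GLKMatrix4) {
        let world = GLKMatrix4Multiply(parentTransform, transform)
        if let type = type {
            // In practice you'd pre-multiply by the inverse camera
            // matrix here; see the note on the camera below.
            type.effect.transform.modelviewMatrix = world
            if let tex = type.texture {
                type.effect.texture2d0.name = tex.name
                type.effect.texture2d0.enabled = GLboolean(GL_TRUE)
            }
            type.effect.prepareToDraw()
            glBindBuffer(GLenum(GL_ARRAY_BUFFER), type.vertexBuffer)
            // Vertex attribute pointer setup omitted for brevity.
            glDrawArrays(GLenum(GL_TRIANGLES), 0, type.vertexCount)
        }
        for child in children { child.render(parentTransform: world) }
    }
}
```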

From the GLKit documentation, I think the best way to keep the sort of camera you want (and indeed the object locations) is to store it directly as a GLKMatrix4 or a GLKQuaternion — so you don't derive the matrix or quaternion (plus location) from some other description of the camera, rather the matrix or quaternion directly is the storage for the camera.

Both of those types have functions built in to apply rotations, and GLKMatrix4 can directly handle translations. So you can directly map the relevant gestures to those functions.
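
A minimal Swift sketch of that, assuming a small hypothetical Camera wrapper whose only state is the matrix itself:

```swift
import GLKit

// The GLKMatrix4 *is* the camera; gesture handlers just mutate it
// with GLKit's built-in transform functions.
final class Camera {
    private(set) var matrix = GLKMatrix4Identity

    // e.g. fed from a UIPanGestureRecognizer's translation
    func pan(dx: Float, dy: Float) {
        matrix = GLKMatrix4Translate(matrix, dx, dy, 0)
    }

    // e.g. fed from a UIRotationGestureRecognizer, in radians
    func rotate(byRadians angle: Float) {
        matrix = GLKMatrix4Rotate(matrix, angle, 0, 0, 1)
    }
}
```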

The only slightly non-obvious thing I can think of when dealing with the camera in that way is that you want to send the inverse to OpenGL rather than the thing itself. Supposing you use a matrix, the reasoning is that if you wanted to draw an object at that location you'd load the matrix directly then draw the object. When you draw an object at the same location as the camera you want it to end up being drawn at the origin. So the matrix you have to load for the camera is the inverse of the matrix you'd load to draw at that location because you want the two multiplied together to be the identity matrix.
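
In Swift that works out to something like this sketch (the function name is made up):

```swift
import GLKit

// modelview = inverse(camera) * model, so that an object whose model
// matrix equals the camera matrix comes out at the identity (origin).
func modelviewMatrix(model: GLKMatrix4, camera: GLKMatrix4) -> GLKMatrix4 {
    var invertible = false
    let view = GLKMatrix4Invert(camera, &invertible)
    assert(invertible, "a rigid camera transform is always invertible")
    return GLKMatrix4Multiply(view, model)
}
```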

I'm not sure how complicated the models for your bricks are but you could hit a performance bottleneck if they're simple and all moving completely independently. The general rule when dealing with OpenGL is that the more geometry you can submit at once, the faster everything goes. So, for example, an entirely static world like that in most games is much easier to draw efficiently than one where everything can move independently. If you're drawing six-sided cubes and moving them all independently then you may see worse performance than you might expect.
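
Since your bricks eventually "stay put" in world coordinates, one way to exploit that rule, sketched below with a positions-only vertex layout for brevity, is to bake the settled bricks' transforms into their vertices and upload them as one static buffer, turning the whole settled wall into a single draw call:

```swift
import GLKit

// Bake per-brick transforms into a single static VBO so that all
// settled bricks can be submitted with one glDrawArrays call.
func buildStaticBatch(brickVertices: [GLKVector3],
                      brickTransforms: [GLKMatrix4]) -> GLuint {
    var batched: [GLfloat] = []
    batched.reserveCapacity(brickVertices.count * brickTransforms.count * 3)
    for transform in brickTransforms {
        for vertex in brickVertices {
            // Apply the full transform, including its translation.
            let p = GLKMatrix4MultiplyVector3WithTranslation(transform, vertex)
            batched += [p.x, p.y, p.z]
        }
    }
    var vbo: GLuint = 0
    glGenBuffers(1, &vbo)
    glBindBuffer(GLenum(GL_ARRAY_BUFFER), vbo)
    glBufferData(GLenum(GL_ARRAY_BUFFER),
                 batched.count * MemoryLayout<GLfloat>.size,
                 batched, GLenum(GL_STATIC_DRAW))
    return vbo
}
```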

If you have any bricks that move in concert then it is more efficient to draw them as a single piece of geometry. If you have any bricks that definitely aren't visible then don't even try to draw them. As of iOS 5, GL_EXT_occlusion_query_boolean is available, which is a way to pass some geometry to OpenGL and ask if any of it is visible. You can use that in realtime scenes by building a hierarchical structure describing your data (which you'll already have if you've directly followed the UIView analogy), calculating or storing some bounding geometry for each view and doing the draw only if the occlusion query suggests that at least some of the bounding geometry would be visible. By following that sort of logic you can often discard large swathes of your geometry long before submitting it.
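
As a rough Swift sketch of that pattern (assuming the EXT entry points from OpenGLES/ES2/glext.h; drawBoundingGeometry and drawContents are stand-ins for your own rendering code):

```swift
import OpenGLES

// Draw cheap bounding geometry inside an occlusion query, then submit
// the real geometry only if any samples passed. A real renderer would
// avoid stalling on the query result the way this simplified sketch does.
func drawIfVisible(drawBoundingGeometry: () -> Void,
                   drawContents: () -> Void) {
    var query: GLuint = 0
    glGenQueriesEXT(1, &query)

    // Render the bounding volume without touching color or depth;
    // we only want to know whether any fragment would be visible.
    glColorMask(0, 0, 0, 0)
    glDepthMask(0)
    glBeginQueryEXT(GLenum(GL_ANY_SAMPLES_PASSED_EXT), query)
    drawBoundingGeometry()
    glEndQueryEXT(GLenum(GL_ANY_SAMPLES_PASSED_EXT))
    glColorMask(1, 1, 1, 1)
    glDepthMask(1)

    var anySamplesPassed: GLuint = 0
    glGetQueryObjectuivEXT(query, GLenum(GL_QUERY_RESULT_EXT), &anySamplesPassed)
    if anySamplesPassed != 0 {
        drawContents()
    }
    glDeleteQueriesEXT(1, &query)
}
```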
