2D OpenGL ES Architecture
The app I'm developing displays a large-ish (about 1M vertices), static 2D image. Due to memory limitations, I have been filling the VBOs with new data every time the user scrolls or zooms, giving the impression that the entire image "exists", even though it doesn't.
There are two problems with this approach: 1) although the responsiveness is "good enough", it would be better if I could make scrolling and zooming faster and less choppy. 2) I've been sticking to the 64k vertex limit for a single draw call, which puts a hard limit on how much of the image can be shown at a time. It would be nice to be able to see more of the image, or even all of it at once. Although the performance is, again, good enough for now, because we are at the prototype stage and have set up the data to work within these limitations, to get to product level we will have to get rid of this limit.
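(For context: on ES 1.x that 64k ceiling typically comes from using GL_UNSIGNED_SHORT indices with glDrawElements. A minimal sketch of drawing past it, with illustrative names not taken from this post, is to split the geometry into sub-64k chunks, each with its own VBO and index buffer:)

```java
import javax.microedition.khronos.opengles.GL11;

// Sketch: drawing more than 64k vertices by splitting the geometry into
// chunks of at most 65535 vertices, each with its own VBO and a
// GL_UNSIGNED_SHORT index buffer (the 64k limit comes from the 16-bit
// index type). Field names are illustrative.
class ChunkRenderer {
    static class Chunk {
        int vboId;       // buffer object holding this chunk's vertices
        int iboId;       // buffer object holding its short indices
        int indexCount;  // number of indices to draw
    }

    static void drawChunks(GL11 gl, Chunk[] chunks) {
        gl.glEnableClientState(GL11.GL_VERTEX_ARRAY);
        for (Chunk c : chunks) {
            gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, c.vboId);
            gl.glVertexPointer(2, GL11.GL_FLOAT, 0, 0);  // 2D positions
            gl.glBindBuffer(GL11.GL_ELEMENT_ARRAY_BUFFER, c.iboId);
            gl.glDrawElements(GL11.GL_TRIANGLES, c.indexCount,
                              GL11.GL_UNSIGNED_SHORT, 0);
        }
        gl.glBindBuffer(GL11.GL_ELEMENT_ARRAY_BUFFER, 0);
        gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, 0);
        gl.glDisableClientState(GL11.GL_VERTEX_ARRAY);
    }
}
```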
Recently I discovered that by using the "android:largeHeap" option I can get 256 MB of heap space on a Motorola Xoom, which means I can store the entire image in VBOs. In my ideal world I would simply hand the OpenGL engine the VBOs and either tell it that the camera has moved, or use glScale/glTranslate to zoom/scroll.
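(For reference, the flag is android:largeHeap="true" on the manifest's <application> element. A small sketch, using the standard ActivityManager API, to check how much heap a given device actually grants:)

```java
import android.app.ActivityManager;
import android.content.Context;
import android.util.Log;

// Sketch: logging the heap limits the device grants. getMemoryClass() is the
// normal per-app limit; getLargeMemoryClass() applies when the manifest sets
// android:largeHeap="true". Both return the limit in megabytes.
class HeapCheck {
    static void logHeapLimits(Context context) {
        ActivityManager am = (ActivityManager)
                context.getSystemService(Context.ACTIVITY_SERVICE);
        Log.d("HeapCheck", "normal heap: " + am.getMemoryClass() + " MB, "
                + "large heap: " + am.getLargeMemoryClass() + " MB");
    }
}
```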
My questions are these: am I on the right track? Should I always "draw" all of the chunks and let OpenGL figure out which will actually be seen, or figure out which chunks are visible myself? Is there any difference between using something like gluLookAt and glScale/glTranslate?
I don't care about aspect-ratio distortion (the image is mathematically generated, not a photo), it is much wider than it is high, and in the future the number of vertices could get much, much larger (e.g. 60M). Thanks for your time.
Comments (1)
Never let OpenGL figure out by itself what's on screen. It won't. All vertices will be transformed, and any that aren't on screen will be clipped; but you have better knowledge of your scene than OpenGL does.
Using a huge 256 MB VBO will make you render the whole scene each time, and transform ALL the vertices each time, which isn't good for performance.
Make a number of small VBOs (e.g., only a 3x3 grid of them should be visible at any given moment), and display only those that are visible. Optionally, pre-fill upcoming VBOs based on movement extrapolation...
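(A rough sketch of that grid idea, assuming an axis-aligned 2D camera rectangle in world units; tile size and field names are illustrative:)

```java
import javax.microedition.khronos.opengles.GL11;

// Sketch: cull a regular grid of per-tile VBOs against the camera rectangle
// and issue draw calls only for the tiles that overlap it.
class TileRenderer {
    static class Tile {
        int vboId;
        int iboId;
        int indexCount;
    }

    static void drawVisibleTiles(GL11 gl, Tile[][] tiles,
                                 float tileW, float tileH,
                                 float camLeft, float camRight,
                                 float camBottom, float camTop) {
        int cols = tiles.length, rows = tiles[0].length;
        // Index range of the tiles overlapping the view, clamped to the grid.
        int c0 = Math.max(0, (int) Math.floor(camLeft / tileW));
        int c1 = Math.min(cols - 1, (int) Math.floor(camRight / tileW));
        int r0 = Math.max(0, (int) Math.floor(camBottom / tileH));
        int r1 = Math.min(rows - 1, (int) Math.floor(camTop / tileH));
        gl.glEnableClientState(GL11.GL_VERTEX_ARRAY);
        for (int c = c0; c <= c1; c++) {
            for (int r = r0; r <= r1; r++) {
                Tile t = tiles[c][r];
                gl.glBindBuffer(GL11.GL_ARRAY_BUFFER, t.vboId);
                gl.glVertexPointer(2, GL11.GL_FLOAT, 0, 0);
                gl.glBindBuffer(GL11.GL_ELEMENT_ARRAY_BUFFER, t.iboId);
                gl.glDrawElements(GL11.GL_TRIANGLES, t.indexCount,
                                  GL11.GL_UNSIGNED_SHORT, 0);
            }
        }
        gl.glDisableClientState(GL11.GL_VERTEX_ARRAY);
    }
}
```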
There is no difference between gluLookAt and glTranslate/glScale: both just compute matrices, that's it.
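(To make that concrete for a 2D camera, a sketch where panX/panY/zoom are illustrative names; both options load essentially the same modelview matrix:)

```java
import javax.microedition.khronos.opengles.GL10;
import android.opengl.GLU;

// Sketch: two equivalent ways to set up a 2D pan/zoom camera on the
// modelview stack. For a flat 2D scene, glTranslatef/glScalef is simpler.
class Camera2D {
    static void apply(GL10 gl, float panX, float panY, float zoom) {
        gl.glMatrixMode(GL10.GL_MODELVIEW);
        gl.glLoadIdentity();

        // Option A: plain transforms.
        gl.glScalef(zoom, zoom, 1f);
        gl.glTranslatef(-panX, -panY, 0f);

        // Option B: gluLookAt from one unit above the same point. This yields
        // the same matrix except for an extra translate of -1 along z, which
        // is harmless as long as the ortho near/far range covers it.
        // gl.glScalef(zoom, zoom, 1f);
        // GLU.gluLookAt(gl, panX, panY, 1f,   // eye
        //                   panX, panY, 0f,   // center
        //                   0f, 1f, 0f);      // up
    }
}
```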
By the way, if your image is static, can't you precompute it (à la Google Maps)? Similarly, does your data offer some way to be "reduced" when zoomed out? E.g., for a point cloud, only display 1 out of every N points...
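(For the point-cloud case, a minimal sketch; the zoom-to-stride mapping is illustrative. Derive a stride from the zoom factor and pre-build one index buffer per stride, so zoomed-out views draw only every Nth point:)

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.ShortBuffer;

// Sketch: crude level of detail for a point cloud. Each halving of the zoom
// doubles the stride, so on-screen point density stays roughly constant.
class PointLod {
    static int strideForZoom(float zoom) {
        int stride = 1;
        while (zoom < 1f && stride < 1024) {
            zoom *= 2f;
            stride *= 2;
        }
        return stride;
    }

    // Build a GL_UNSIGNED_SHORT index buffer referencing every Nth point of
    // one sub-64k chunk; upload it once per stride level and reuse it.
    static ShortBuffer buildStridedIndices(int pointCount, int stride) {
        int n = (pointCount + stride - 1) / stride;  // ceil(pointCount/stride)
        ShortBuffer ib = ByteBuffer.allocateDirect(n * 2)
                .order(ByteOrder.nativeOrder())
                .asShortBuffer();
        for (int i = 0; i < pointCount; i += stride) {
            ib.put((short) i);
        }
        ib.rewind();
        return ib;
    }
}
```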