iOS: playing a frame-by-frame greyscale animation in custom colours

Posted 2024-11-14 22:12:51


I have a 32 frame greyscale animation of a diamond exploding into pieces (ie 32 PNG images @ 1024x1024)

my game consists of 12 separate colours, so I need to perform the animation in any desired colour

this, I believe, rules out any Apple frameworks; it also rules out a lot of public code for animating frame by frame in iOS.

what are my potential solution paths?

these are the best SO links I have found:

that last one just shows it may be possible to load an image into a GL texture each frame (he is doing it from the camera, so if I have everything stored in memory, that should be even faster)

I can see these options (listed laziest first, most optimised last)

option A
each frame (courtesy of CADisplayLink), load the relevant image from file into a texture, and display that texture

I'm pretty sure this is stupid, so onto option B

option B
preload all images into memory
then as per above, only we load from memory rather than from file

I think this is going to be the ideal solution, can anyone give it the thumbs up or thumbs down?

option C
preload all of my PNGs into a single GL texture of the maximum size, creating a texture atlas. Each frame, set the texture coordinates to the rectangle in the atlas for that frame.

while this is potentially a perfect balance between coding efficiency and performance efficiency, the main problem here is losing resolution; on older iOS devices the maximum texture size is 1024x1024. If we are cramming 32 frames into this (really this is the same as cramming 64) we would be at 128x128 for each frame. If the resulting animation is close to full screen on the iPad this isn't going to hack it

option D
instead of loading into a single GL texture, load into a bunch of textures
moreover, we can squeeze 4 images into a single texture using all four channels

I baulk at the sheer amount of fiddly coding required here. My RSI starts to tingle even thinking about this approach

I think I have answered my own question here, but if anyone has actually done this or can see the way through, please answer!


Comments (2)

无尽的现实 2024-11-21 22:12:51


If something higher performance than (B) is needed, it looks like the key is glTexSubImage2D http://www.opengl.org/sdk/docs/man/xhtml/glTexSubImage2D.xml

Rather than pull across one frame at a time from memory, we could arrange say 16 512x512 8-bit greyscale frames contiguously in memory, send this across to GL as a single 1024x1024 32-bit RGBA texture, and then split it within GL using the above function.

This would mean that we are performing one [RAM->VRAM] transfer per 16 frames rather than per one frame.

Of course, for more modern devices we could get 64 instead of 16, since more recent iOS devices can handle 2048x2048 textures.

I will first try technique (B) and leave it at that if it works (I don't want to over-code), and look at this if needed.

I still can't find any way to query how many GL textures it is possible to hold on the graphics chip. I have been told that when you try to allocate memory for a texture, GL just returns 0 when it has run out of memory. However, to implement this properly I would want to make sure that I am not sailing too close to the wind re: resources... I don't want my animation to use up so much VRAM that the rest of my rendering fails...

傲性难收 2024-11-21 22:12:51


You would be able to get this working just fine with CoreGraphics APIs; there is no reason to deep dive into OpenGL for a simple 2D problem like this. For the general approach you should take to creating colored frames from a grayscale frame, see colorizing-image-ignores-alpha-channel-why-and-how-to-fix. Basically, you need to use CGContextClipToMask() and then render a specific color so that what is left is the diamond colored in with the specific color you have selected. You could do this at runtime, or you could do it offline and create 1 video for each of the colors you want to support. It would be easier on your CPU if you do the operation N times offline and save the results into files, but modern iOS hardware is much faster than it used to be. Beware of memory usage issues when writing video processing code; see video-and-memory-usage-on-ios-devices for a primer that describes the problem space. You could code it all up with texture atlases and complex OpenGL stuff, but an approach that makes use of videos would be a lot easier to deal with, and you would not need to worry so much about resource usage; see my library linked in the memory post for more info if you are interested in saving time on the implementation.
