How can I push pixels faster on the iPhone?
I asked before about pixel-pushing, and have now managed to get far enough to get noise to show up on the screen. Here's how I init:
CGDataProviderRef provider;
bitmap = malloc(320*480*4);   // bitmap and ir are declared elsewhere (an unsigned char * and a CGImageRef)
provider = CGDataProviderCreateWithData(NULL, bitmap, 320*480*4, NULL);
CGColorSpaceRef colorSpaceRef;
colorSpaceRef = CGColorSpaceCreateDeviceRGB();
ir = CGImageCreate(
    320,                        // width
    480,                        // height
    8,                          // bitsPerComponent
    32,                         // bitsPerPixel
    4 * 320,                    // bytesPerRow
    colorSpaceRef,
    kCGImageAlphaNoneSkipLast,
    provider,
    NULL,                       // decode array
    NO,                         // shouldInterpolate
    kCGRenderingIntentDefault
);
Here's how I render each frame:
for (int i = 0; i < 320*480*4; i++) {
    bitmap[i] = rand() % 256;
}
CGRect rect = CGRectMake(0, 0, 320, 480);
CGContextDrawImage(context, rect, ir);
The problem is that this is awfully slow, around 5 fps. I think my path for publishing the buffer must be wrong. Is it even possible to do full-screen, pixel-based graphics that I could update at 30 fps, without using the 3D chip?
4 Answers
The slowness is almost certainly in the noise generation. If you run this in Instruments you'll probably see that a ton of time is spent sitting in your loop.
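To make that concrete, here is a minimal sketch of a cheaper fill: instead of one rand() call per byte, write 32 random bits per pixel with a simple xorshift generator. The generator choice is an assumption for illustration, not something this answer prescribes.

static uint32_t noiseState = 0x12345678;          // any nonzero seed
uint32_t *pixels = (uint32_t *)bitmap;
for (int i = 0; i < 320*480; i++) {
    noiseState ^= noiseState << 13;               // xorshift32 step
    noiseState ^= noiseState >> 17;
    noiseState ^= noiseState << 5;
    pixels[i] = noiseState;                       // four random channel bytes at once
}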
Another smaller issue is your colorspace. If you use the screen's colorspace, you'll avoid a colorspace conversion which is potentially expensive.
If you can use CoreGraphics routines for your drawing, you'd be better served by creating a CGLayer for the drawing context instead of creating a new object each time.
The bytesPerRow component is also important for performance. It should be a factor of 32 IIRC. There's some code available link text that shows how to compute it.
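For illustration, here is a sketch of rounding bytesPerRow up to a 16-byte boundary; the exact alignment is an assumption, and for a 320-pixel-wide RGBA row the unpadded 1280 bytes already happens to be aligned.

size_t bytesPerPixel = 4;
size_t alignment     = 16;    // assumed alignment
size_t bytesPerRow   = ((320 * bytesPerPixel) + alignment - 1) & ~(alignment - 1);
// pass bytesPerRow to CGImageCreate instead of hard-coding 4 * 320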
And yeah, for raw performance, OpenGL.
I suspect doing 614400 (320*480*4) memory writes, random number generation, and making a new object each frame is slowing you down. Have you tried just writing a static bitmap to the screen and seeing how fast that is? Have you perhaps tried profiling the code? Do you also need to make a new CGRect each time?
If you just want to give the effect of randomness, there is probably no need to regenerate the entire bitmap each time.
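As a quick experiment along those lines, here is a sketch that fills the buffer once at init time and reuses a single CGRect, so the per-frame cost is only the CGContextDrawImage call; the names mirror the question's code.

// once, at init time:
for (int i = 0; i < 320*480*4; i++) {
    bitmap[i] = rand() % 256;
}
static const CGRect fullScreen = { {0, 0}, {320, 480} };

// per frame:
CGContextDrawImage(context, fullScreen, ir);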
To my knowledge, OpenGL is supposed to be the fastest way to do graphics on the iPhone. This includes 2D and 3D. A UIView is backed by a Core Animation layer, which ends up drawing with OpenGL anyway. So why not skip the middle-man?
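A hedged sketch of what skipping the middle-man could look like: keep the same CPU-side buffer, but upload it into an OpenGL ES texture each frame and draw a textured quad. It assumes an EAGL context and a power-of-two texture (screenTexture) created elsewhere, since OpenGL ES 1.1 on these devices requires power-of-two texture sizes.

// per frame: copy the 320x480 buffer into the corner of the larger texture
glBindTexture(GL_TEXTURE_2D, screenTexture);      // screenTexture is assumed, e.g. 512x512
glTexSubImage2D(GL_TEXTURE_2D, 0,
                0, 0, 320, 480,
                GL_RGBA, GL_UNSIGNED_BYTE, bitmap);
// ... then draw a quad whose texture coordinates cover the 320x480 region and
// call -presentRenderbuffer: on the EAGL context to show the frame.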
You can avoid the trip through CGContextDrawImage by assigning your CGImageRef to -[CALayer setContents:], just be sure not to free bitmap while you're still using it. Yes, I know this is old, I stumbled upon it from Google.
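A minimal sketch of that approach, assuming pre-ARC code and that the question's ir (CGImageRef) is in scope; the view name is a placeholder.

// point the layer's contents directly at the CGImage
view.layer.contents = (id)ir;
// The image still reads from the malloc'd buffer, so don't free(bitmap)
// while ir is in use. If the pixels change every frame, you would likely
// need to recreate the CGImage and set contents again so Core Animation
// picks up the new data.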