CGContextDrawImage is EXTREMELY slow after a large UIImage is drawn into it
It seems that CGContextDrawImage(CGContextRef, CGRect, CGImageRef) performs MUCH WORSE when drawing a CGImage that was created by CoreGraphics (i.e. with CGBitmapContextCreateImage) than it does when drawing the CGImage which backs a UIImage. See this testing method:
-(void)showStrangePerformanceOfCGContextDrawImage
{
    ///Setup : Load an image and start a context:
    UIImage *theImage = [UIImage imageNamed:@"reallyBigImage.png"];
    UIGraphicsBeginImageContext(theImage.size);
    CGContextRef ctxt = UIGraphicsGetCurrentContext();
    CGRect imgRec = CGRectMake(0, 0, theImage.size.width, theImage.size.height);

    ///Why is this SO MUCH faster...
    NSDate *startingTimeForUIImageDrawing = [NSDate date];
    CGContextDrawImage(ctxt, imgRec, theImage.CGImage); // Draw the existing image into the context using the UIImage's backing CGImage
    NSLog(@"Time was %f", [[NSDate date] timeIntervalSinceDate:startingTimeForUIImageDrawing]);

    /// Create a new image from the context to use this time in CGContextDrawImage:
    CGImageRef theImageConverted = CGBitmapContextCreateImage(ctxt);

    ///This is WAY slower, but why?? Using a pure CGImageRef (as opposed to one behind a UIImage) seems like it should be faster, but AT LEAST it should be the same speed!?
    NSDate *startingTimeForNakedCGImageDrawing = [NSDate date];
    CGContextDrawImage(ctxt, imgRec, theImageConverted);
    NSLog(@"Time was %f", [[NSDate date] timeIntervalSinceDate:startingTimeForNakedCGImageDrawing]);

    /// Clean up: release the copied image and end the image context
    CGImageRelease(theImageConverted);
    UIGraphicsEndImageContext();
}
So I guess the question is: #1, what may be causing this, and #2, is there a way around it, i.e. other ways to create a CGImageRef that may be faster? I realize I could convert everything to UIImages first, but that is such an ugly solution. I already have the CGContextRef sitting there.
UPDATE : This seems to not necessarily be true when drawing small images? That may be a clue: the problem is amplified when large images (i.e. full-size camera pics) are used. At 640x480 the execution time is pretty similar with either method.
UPDATE 2 : Ok, so I've discovered something new... it's actually NOT the backing of the CGImage that changes the performance. I can flip-flop the order of the 2 steps and make the UIImage method behave slowly, whereas the "naked" CGImage will be super fast. It seems whichever you perform second suffers from terrible performance. This seems to be the case UNLESS I free memory by calling CGImageRelease on the image I created with CGBitmapContextCreateImage. Then the UIImage-backed method will be fast subsequently. The inverse is not true. What gives? "Crowded" memory shouldn't affect performance like this, should it?
UPDATE 3 : Spoke too soon. The previous update holds true for images at size 2048x2048, but stepping up to 1936x2592 (camera size) the naked CGImage method is still way slower, regardless of order of operations or memory situation. Maybe there are some CG internal limits that make a 16MB image efficient whereas the 21MB image can't be handled efficiently. It's literally 20 times slower to draw the camera size than a 2048x2048. Somehow UIImage provides its CGImage data much faster than a pure CGImage object does. o.O
UPDATE 4 : I thought this might have to do with some memory caching, but the results are the same whether the UIImage is loaded with the non-caching [UIImage imageWithContentsOfFile:] or with [UIImage imageNamed:].
UPDATE 5 (Day 2) : After creating more questions than I answered yesterday, I have something solid today. What I can say for sure is the following:
- The CGImages behind a UIImage don't use alpha (kCGImageAlphaNoneSkipLast). I thought that maybe they were faster to draw because my context WAS using alpha. So I changed the context to use kCGImageAlphaNoneSkipLast. This makes the drawing MUCH faster, UNLESS:
- Drawing into a CGContextRef with a UIImage FIRST makes ALL subsequent image drawing slow
I proved this by 1) first creating a non-alpha context (1936x2592), 2) filling it with randomly colored 2x2 squares, and 3) full-frame drawing a CGImage into that context, which was FAST (0.17 seconds). 4) I then repeated the experiment, but filled the context by drawing the CGImage backing a UIImage. Subsequent full-frame image drawing was 6+ seconds. SLOWWWWW.
Somehow drawing into a context with a (Large) UIImage drastically slows all subsequent drawing into that context.
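For anyone who wants to verify the pixel-format observation themselves, here is a minimal sketch (reusing the placeholder image name from the test method above) that logs the alpha/bitmap info of both kinds of CGImage:

-(void)logPixelFormats
{
    UIImage *theImage = [UIImage imageNamed:@"reallyBigImage.png"];
    UIGraphicsBeginImageContext(theImage.size);
    CGContextRef ctxt = UIGraphicsGetCurrentContext();
    CGImageRef fromContext = CGBitmapContextCreateImage(ctxt);

    // Compare the pixel formats: differing values here mean Quartz must convert on every draw.
    NSLog(@"UIImage-backed : alpha=%u bitmapInfo=%u",
          (unsigned)CGImageGetAlphaInfo(theImage.CGImage),
          (unsigned)CGImageGetBitmapInfo(theImage.CGImage));
    NSLog(@"Context-created: alpha=%u bitmapInfo=%u",
          (unsigned)CGImageGetAlphaInfo(fromContext),
          (unsigned)CGImageGetBitmapInfo(fromContext));

    CGImageRelease(fromContext);
    UIGraphicsEndImageContext();
}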
2 Answers
Well, after a TON of experimentation I think I have found the fastest way to handle situations like this. The drawing operation above, which was taking 6+ seconds, now takes 0.1 seconds. YES. Here's what I discovered:
Homogenize your contexts & images with a pixel format! The root of the question I asked boiled down to the fact that the CGImages inside a UIImage were using THE SAME PIXEL FORMAT as my context. Therefore fast. The context-created CGImages were a different format and therefore slow. Inspect your images with CGImageGetAlphaInfo to see which pixel format they use. I'm using kCGImageAlphaNoneSkipLast EVERYWHERE now, as I don't need to work with alpha. If you don't use the same pixel format everywhere, Quartz will be forced to perform expensive pixel conversions for EACH pixel when drawing an image into a context. = SLOW
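As an illustration of this point, a sketch of creating an offscreen context that matches a kCGImageAlphaNoneSkipLast source (the helper name and parameter choices are mine, not from the answer):

// Hypothetical helper: draw `source` into a freshly created bitmap context that
// uses the same pixel format, so Quartz can blit without per-pixel conversion.
static CGContextRef CreateMatchingContextAndDraw(CGImageRef source)
{
    size_t width  = CGImageGetWidth(source);
    size_t height = CGImageGetHeight(source);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL,          // let CG allocate the pixel buffer
                                             width, height,
                                             8,             // bits per component
                                             width * 4,     // bytes per row (RGBX, 4 bytes per pixel)
                                             colorSpace,
                                             kCGImageAlphaNoneSkipLast);
    CGColorSpaceRelease(colorSpace);

    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), source);
    return ctx; // caller releases with CGContextRelease
}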
USE CGLayers! These make offscreen-drawing performance much better. How this works is basically as follows. 1) Create a CGLayer from the context using CGLayerCreateWithContext. 2) Do any drawing/setting of drawing properties on THIS LAYER's CONTEXT, which is obtained with CGLayerGetContext. READ any pixels or information from the ORIGINAL context. 3) When done, "stamp" this CGLayer back onto the original context using CGContextDrawLayerAtPoint (see the sketch after the list below). This is FAST as long as you keep in mind:
1) Release any CGImages created from a context (i.e. those created with CGBitmapContextCreateImage) BEFORE "stamping" your layer back into the CGContextRef using CGContextDrawLayerAtPoint. This gives a 3-4x speed increase when drawing that layer. 2) Keep your pixel format the same everywhere!! 3) Clean up CG objects AS SOON as you can. Things hanging around in memory seem to create strange slowdowns, probably because there are callbacks or checks associated with those strong references. Just a guess, but I can say that CLEANING UP MEMORY ASAP helps performance immensely.
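A minimal sketch of that CGLayer workflow (the fill is just illustrative drawing; the function name and sizes are mine):

// Sketch: draw offscreen into a CGLayer, then "stamp" it back onto the destination.
static void DrawViaLayer(CGContextRef destCtx, CGSize size)
{
    // 1) Create a layer matched to the destination context.
    CGLayerRef layer = CGLayerCreateWithContext(destCtx, size, NULL);
    CGContextRef layerCtx = CGLayerGetContext(layer);

    // 2) Do all drawing / property setting on the LAYER's context.
    CGContextSetRGBFillColor(layerCtx, 1.0, 0.0, 0.0, 1.0);
    CGContextFillRect(layerCtx, CGRectMake(0, 0, size.width, size.height));

    // (Per point 1 above: release any images made with CGBitmapContextCreateImage
    // BEFORE stamping the layer back.)

    // 3) Stamp the finished layer back onto the original context.
    CGContextDrawLayerAtPoint(destCtx, CGPointMake(0, 0), layer);
    CGLayerRelease(layer);
}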
I had a similar problem. My application has to redraw a picture almost as large as the screen size. The problem came down to drawing two images of the same resolution as fast as possible, neither rotated nor flipped, but scaled and positioned in different places of the screen each time. In the end, I was able to get ~15-20 FPS on an iPad 1 and ~20-25 FPS on an iPad 4. So... hope this helps someone:
const CGBitmapInfo g_bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast; // shared bitmap info for every context and image
(IS_RETINA_DISPLAY ? kCGInterpolationNone : kCGInterpolationLow) // interpolation quality, presumably passed to CGContextSetInterpolationQuality
The IS_RETINA_DISPLAY macro is taken from here.
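For context, a sketch of how these two settings would plausibly be used together (the surrounding context-creation call and helper are my assumptions, not part of the original answer):

// Hypothetical usage: create a context with the shared bitmap info, then draw
// with cheap interpolation (IS_RETINA_DISPLAY and g_bitmapInfo as defined above).
static void DrawScaled(CGImageRef sourceImage, size_t width, size_t height)
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height,
                                             8, 0,   // 0 lets CG choose bytes per row
                                             colorSpace, g_bitmapInfo);
    CGColorSpaceRelease(colorSpace);

    CGContextSetInterpolationQuality(ctx, IS_RETINA_DISPLAY ? kCGInterpolationNone
                                                            : kCGInterpolationLow);
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), sourceImage);
    CGContextRelease(ctx);
}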