Why is resizing a UIImage from the camera so slow?

Resizing a camera UIImage returned by UIImagePickerController takes a ridiculously long time if you do it the usual way, as in this post.
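(For reference, a minimal sketch of the "usual way"; the helper name and the pixel format here are my assumptions, not necessarily the linked post's exact code:)

// Hypothetical helper showing the usual CGBitmapContextCreate +
// CGContextDrawImage resize path this question is about.
static UIImage *ResizeImageTheUsualWay(UIImage *source, CGSize newSize) {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL,
                                             (size_t)newSize.width,
                                             (size_t)newSize.height,
                                             8,   // bits per component
                                             0,   // bytes per row: let CG decide
                                             colorSpace,
                                             kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    if (!ctx) return nil;

    // This draw is where the ~1.5 s goes for a 2048x1536 camera image.
    CGContextDrawImage(ctx, CGRectMake(0, 0, newSize.width, newSize.height),
                       [source CGImage]);

    CGImageRef scaledRef = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    UIImage *scaled = [UIImage imageWithCGImage:scaledRef];
    CGImageRelease(scaledRef);
    return scaled;
}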

[Update: last call for creative ideas here! My next option is to go ask Apple, I guess.]

Yes, it's a lot of pixels, but the graphics hardware on the iPhone is perfectly capable of drawing lots of 1024x1024 textured quads onto the screen in 1/60th of a second, so there really should be a way of resizing a 2048x1536 image down to 640x480 in a lot less than 1.5 seconds.

So why is it so slow? Is the underlying image data the OS returns from the picker somehow not ready to be drawn, so that it has to be swizzled in some fashion that the GPU can't help with?

My best guess is that it needs to be converted from RGBA to ABGR or something like that; can anybody think of a way that it might be possible to convince the system to give me the data quickly, even if it's in the wrong format, and I'll deal with it myself later?

As far as I know, the iPhone doesn't have any dedicated "graphics" memory, so there shouldn't be a question of moving the image data from one place to another.

So, the question: is there some alternative drawing method besides just using CGBitmapContextCreate and CGContextDrawImage that takes more advantage of the GPU?

Something to investigate: if I start with a UIImage of the same size that's not from the image picker, is it just as slow? Apparently not...

Update: Matt Long found that it only takes 30 ms to resize the image you get back from the picker in [info objectForKey:@"UIImagePickerControllerEditedImage"], if you've enabled cropping with the manual camera controls. That isn't helpful for the case I care about, where I'm using takePicture to take pictures programmatically. I see that the edited image is kCGImageAlphaPremultipliedFirst but the original image is kCGImageAlphaNoneSkipFirst.
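(For anyone who wants to reproduce that check, a quick sketch; info is the dictionary passed to the picker delegate:)

// In imagePickerController:didFinishPickingMediaWithInfo:
UIImage *original = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
UIImage *edited   = [info objectForKey:@"UIImagePickerControllerEditedImage"];

// CGImageGetAlphaInfo reveals the pixel-format difference noted above.
NSLog(@"original alpha info: %d", (int)CGImageGetAlphaInfo([original CGImage]));
if (edited) NSLog(@"edited alpha info: %d", (int)CGImageGetAlphaInfo([edited CGImage]));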

Further update: Jason Crawford suggested CGContextSetInterpolationQuality(context, kCGInterpolationLow), which does in fact cut the time from about 1.5 sec to 1.3 sec, at a cost in image quality--but that's still far from the speed the GPU should be capable of!

Last update before the week runs out: user refulgentis did some profiling which seems to indicate that the 1.5 seconds is spent writing the captured camera image out to disk as a JPEG and then reading it back in. If true, very bizarre.

Comments (5)

后知后觉 2024-08-20 11:28:53

Seems that you have made several assumptions here that may or may not be true. My experience is different from yours. This method seems to take only 20-30 ms on my 3GS when scaling a photo snapped from the camera to 0.31 of the original size, with a call to:

CGImageRef scaled = CreateScaledCGImageFromCGImage([image CGImage], 0.31);

(I get 0.31 by taking the width scale, 640.0/2048.0, by the way)
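(For context, here's roughly what such a helper looks like; this is my reconstruction of the idea, not necessarily the blog post's exact code:)

// Returns a new CGImage scaled by `scale`; per the Create rule,
// the caller is responsible for releasing the result.
CGImageRef CreateScaledCGImageFromCGImage(CGImageRef image, float scale) {
    size_t width  = (size_t)(CGImageGetWidth(image)  * scale);
    size_t height = (size_t)(CGImageGetHeight(image) * scale);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedFirst);
    CGColorSpaceRelease(colorSpace);
    if (!context) return NULL;

    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGImageRef scaled = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    return scaled;
}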

I've checked to make sure the image is the same size you're working with. Here's my NSLog output:

2009-12-07 16:32:12.941 ImagePickerThing[8709:207] Info: {
    UIImagePickerControllerCropRect = NSRect: {{0, 0}, {2048, 1536}};
    UIImagePickerControllerEditedImage = <UIImage: 0x16c1e0>;
    UIImagePickerControllerMediaType = "public.image";
    UIImagePickerControllerOriginalImage = <UIImage: 0x184ca0>;
}

I'm not sure why there's a difference, and I can't answer your question as far as it relates to the GPU; however, I would consider 1.5 seconds versus 30 ms a very significant difference. Maybe compare the code in that blog post to what you are using?

Best Regards.

若无相欠,怎会相见 2024-08-20 11:28:53

Use Shark, profile it, figure out what's taking so long.

I have to work a lot with MediaPlayer.framework, and when you get properties for songs on the iPod, the first property request is insanely slow compared to subsequent requests, because in the first property request MobileMediaPlayer packages up a dictionary with all the properties and passes it to my app.

I'd be willing to bet that there is a similar situation occurring here.

EDIT: I was able to do a time profile in Shark of both Matt Long's UIImagePickerControllerEditedImage situation and the generic UIImagePickerControllerOriginalImage situation.

In both cases, a majority of the time is taken up by CGContextDrawImage. In Matt Long's case, the UIImagePickerController takes care of this in between the user capturing the image and the image entering 'edit' mode.

Scaling the percentages so that CGContextDrawImage = 100% of the time, the stack breaks down as follows: CGContextDelegateDrawImage takes 100%, then ripc_DrawImage (from libRIP.A.dylib) takes 100%, and then ripc_AcquireImage takes 93% of the time. ripc_AcquireImage appears to decompress the JPEG, spending most of its time in _cg_jpeg_idct_islow, vec_ycc_bgrx_convert, decompress_onepass, and sep_upsample. Only 7% of the time is actually spent in ripc_RenderImage, which I assume is the actual drawing.

瞄了个咪的 2024-08-20 11:28:53

I have had the same problem and banged my head against it for a long time. As far as I can tell, the first time you access the UIImage returned by the image picker, it's just slow. As an experiment, try timing any two operations with the UIImage--e.g., your scale-down, and then UIImageJPEGRepresentation or something. Then switch the order. When I've done this in the past, the first operation gets a time penalty. My best hypothesis is that the memory is still on the CCD somehow, and transferring it into main memory to do anything with it is slow.
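(A rough sketch of that timing experiment; resizeImage: stands in for whatever scale-down routine you're using:)

// 'image' is the UIImage straight from the picker.
CFAbsoluteTime t0 = CFAbsoluteTimeGetCurrent();
UIImage *small = [self resizeImage:image];             // operation A (your scale-down)
CFAbsoluteTime t1 = CFAbsoluteTimeGetCurrent();
NSData *jpeg = UIImageJPEGRepresentation(image, 0.8);  // operation B
CFAbsoluteTime t2 = CFAbsoluteTimeGetCurrent();
NSLog(@"A: %.3f s  B: %.3f s", t1 - t0, t2 - t1);
// Swap A and B: whichever touches the image first pays the decode penalty.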

When you set allowsImageEditing=YES, the image you get back is resized and cropped down to about 320x320. That makes it faster, but it's probably not what you want.

The best speedup I've found is:

CGContextSetInterpolationQuality(context, kCGInterpolationLow)

on the context you get back from CGBitmapContextCreate, before you do CGContextDrawImage.

The problem is that your scaled-down images might not look as good. However, if you're scaling down by an integer factor--e.g., 1600x1200 to 800x600--then it looks OK.
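(Concretely, a sketch of where the call sits, using the 800x600 case from above:)

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, 800, 600, 8, 0, colorSpace,
                                             kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);

// Must be set BEFORE the draw; low quality skips the expensive filtering.
CGContextSetInterpolationQuality(context, kCGInterpolationLow);
CGContextDrawImage(context, CGRectMake(0, 0, 800, 600), [image CGImage]);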

无可置疑 2024-08-20 11:28:53

Here's a git project that I've used and it seems to work well. The usage is pretty clean as well - one line of code.

https://github.com/AliSoftware/UIImage-Resize
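(Usage looks roughly like this; the method name is from my memory of the project's README, so verify it against the repo before relying on it:)

#import "UIImage+Resize.h"

// One line, per the category; name assumed from the README.
UIImage *small = [bigImage resizedImageToFitInSize:CGSizeMake(640, 480)
                                    scaleIfSmaller:NO];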

紅太極 2024-08-20 11:28:53

DO NOT USE CGBitmapContextCreate in this case! I spent almost a week in the same situation you are in. Performance will be absolutely terrible and it will eat up memory like crazy. Use UIGraphicsBeginImageContext instead:

// create a new image context of the desired size
UIGraphicsBeginImageContext(desiredImageSize);
CGContextRef c = UIGraphicsGetCurrentContext();

// clear the new image
CGContextClearRect(c, CGRectMake(0,0,desiredImageSize.width, desiredImageSize.height));

// draw the image into the (smaller) destination rect
CGRect rect = CGRectMake(0, 0, desiredImageSize.width, desiredImageSize.height);
CGContextDrawImage(c, rect, [image CGImage]);
// (CGContextDrawImage draws flipped in a UIKit context; use
// [image drawInRect:rect] instead if an upside-down result matters)

// return the result to our parent controller
UIImage * result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

In the above example (from my own image resize code), "rect" is significantly smaller than the image. The code above runs very fast, and should do exactly what you need.

I'm not entirely sure why UIGraphicsBeginImageContext is so much faster, but I believe it has something to do with memory allocation. I've noticed that this approach requires significantly less memory, implying that the OS has already allocated space for an image context somewhere.
