Most efficient way to draw part of an image in iOS



Given a UIImage and a CGRect, what is the most efficient way (in memory and time) to draw the part of the image corresponding to the CGRect (without scaling)?

For reference, this is how I currently do it:

- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Offset the dirty rect by the view's origin within the source image
    CGRect frameRect = CGRectMake(frameOrigin.x + rect.origin.x, frameOrigin.y + rect.origin.y, rect.size.width, rect.size.height);
    // Crop the backing CGImage to just the portion we need
    CGImageRef imageRef = CGImageCreateWithImageInRect(image_.CGImage, frameRect);
    // Flip the coordinate system, since CGContextDrawImage draws bottom-up
    CGContextTranslateCTM(context, 0, rect.size.height);
    CGContextScaleCTM(context, 1.0, -1.0);
    CGContextDrawImage(context, rect, imageRef);
    CGImageRelease(imageRef);
}

Unfortunately this seems extremely slow with medium-sized images and a high setNeedsDisplay frequency. Playing with UIImageView's frame and clipsToBounds produces better results (with less flexibility).

6 Answers

指尖上得阳光 2024-12-21 02:44:01


The newer API is better and easier to use:

    func crop(img: UIImage, with rect: CGRect) -> UIImage? {
        guard let cgImg = img.cgImage else { return nil }
        // Create a cropped bitmap image using the rect
        guard let imageRef = cgImg.cropping(to: rect) else { return nil }
        // Build a new UIImage, preserving the original scale and orientation
        return UIImage(cgImage: imageRef, scale: img.scale, orientation: img.imageOrientation)
    }

P.S. I translated @Scott Lahteine's answer into Swift, and the result was really weird.
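
One caveat worth adding: CGImage.cropping(to:) works in pixel coordinates, while UIImage sizes are in points, so for a Retina (@2x, @3x) image the rect should be scaled first. A minimal usage sketch, with a made-up asset name and point rect:

    // Hypothetical usage: crop a 100x100-point square out of a scaled image
    let source = UIImage(named: "photo")!   // assumed to exist in the bundle
    let pointRect = CGRect(x: 10, y: 10, width: 100, height: 100)

    // cropping(to:) is pixel-based, so convert the point rect using the image scale
    let pixelRect = pointRect.applying(CGAffineTransform(scaleX: source.scale, y: source.scale))
    let cropped = crop(img: source, with: pixelRect)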

束缚m 2024-12-21 02:44:00


I guess you are doing this to display part of an image on the screen, since you mentioned UIImageView. And optimization problems always need to be defined specifically.


Trust Apple for Regular UI stuff

Actually, UIImageView with clipsToBounds is one of the fastest/simplest ways to achieve your goal if your goal is just clipping a rectangular region of an image (not too big). Also, you don't need to send the setNeedsDisplay message.

Or you can try putting the UIImageView inside an empty UIView and setting the clipping on the container view. With this technique, you can transform your image freely in 2D (scaling, rotation, translation) by setting the transform property, as in the sketch below.
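
A minimal sketch of the container-view technique, assuming a 100x100 visible window onto a larger image (the sizes and the asset name are made up):

    import UIKit

    // The container is the fixed "window"; it clips everything outside its bounds
    let container = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
    container.clipsToBounds = true

    // The image view is larger than the container and can be moved freely
    let imageView = UIImageView(image: UIImage(named: "bigImage"))  // assumed asset
    imageView.frame.origin = CGPoint(x: -50, y: -120)  // pan to expose the wanted region
    container.addSubview(imageView)

    // 2D transforms also work, with no custom drawing and no setNeedsDisplay
    imageView.transform = CGAffineTransform(rotationAngle: .pi / 8)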

If you need 3D transformation, you can still use CALayer with the masksToBounds property, but using CALayer directly will usually give you only a small, insignificant amount of extra performance.

Anyway, you need to know all of the low-level details to use them properly for optimization.


Why is that one of the fastest ways?

UIView is just a thin layer on top of CALayer, which is implemented on top of OpenGL, which is virtually a direct interface to the GPU. This means UIKit is being accelerated by the GPU.

So if you use them properly (I mean, within the designed limitations), they will perform as well as a plain OpenGL implementation. If you use just a few images to display, you'll get acceptable performance with a UIView implementation because it gets the full acceleration of the underlying OpenGL (which means GPU acceleration).

Anyway, if you need extreme optimization for hundreds of animated sprites with finely tuned pixel shaders, as in a game app, you should use OpenGL directly, because CALayer lacks many options for optimization at the lower levels. At least for optimization of UI stuff, it's incredibly hard to do better than Apple.


Why is your method slower than UIImageView?

What you should know about is GPU acceleration. On all recent computers, fast graphics performance is achieved only with the GPU. The point, then, is whether the method you're using is implemented on top of the GPU or not.

IMO, CGImage drawing methods are not implemented with the GPU.
I think I read a mention of this in Apple's documentation, but I can't remember where, so I'm not sure. In any case, I believe CGImage is implemented on the CPU, because:

  1. Its API looks like it was designed for the CPU, with things like the bitmap-editing interface and text drawing, which don't fit a GPU interface very well.
  2. The bitmap context interface allows direct memory access, which means its backing storage is located in CPU memory. This may differ on a unified memory architecture (and with the Metal API), but in any case the initial design intention of CGImage must have been for the CPU.
  3. Many other, more recently released Apple APIs mention GPU acceleration explicitly, which implies the older APIs were not accelerated. If there's no special mention, it's usually done on the CPU by default.

So it seems to be done on the CPU. Graphics operations done on the CPU are a lot slower than on the GPU.

Simply clipping an image and compositing image layers are very simple and cheap operations for the GPU (compared to the CPU), so you can expect the UIKit library to take advantage of this, because the whole of UIKit is implemented on top of OpenGL.


About Limitations

Because optimization is a kind of micro-management work, specific numbers and small facts are very important. What counts as medium-sized? OpenGL on iOS usually limits the maximum texture size to 1024x1024 pixels (maybe larger in recent releases). If your image is larger than this, it will not work, or performance will degrade greatly (I think UIImageView is optimized for images within the limits).

If you need to display huge images with clipping, you have to use another optimization like CATiledLayer, and that's a totally different story (a minimal sketch follows).
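
A minimal sketch of the CATiledLayer route, using a hypothetical TiledImageView; only the layerClass override and the per-tile draw(_:) are essential:

    import UIKit

    final class TiledImageView: UIView {
        var image: UIImage?

        // Back the view with CATiledLayer instead of a plain CALayer
        override class var layerClass: AnyClass { CATiledLayer.self }

        override func draw(_ rect: CGRect) {
            // CATiledLayer calls this once per visible tile (on background
            // threads), so only the on-screen parts of a huge image get drawn
            image?.draw(at: .zero)
        }
    }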

And don't go to OpenGL unless you want to know every detail of OpenGL. It needs a full understanding of low-level graphics and at least 100 times more code.


About the Future

Though it is not very likely to happen, CGImage stuff (or anything else) doesn't need to be stuck on the CPU only. Don't forget to check the base technology of the API you're using. Still, GPU stuff is a very different monster from the CPU, so the API people usually mention it explicitly and clearly.

枯寂 2024-12-21 02:44:00


It would ultimately be faster, with a lot less image creation from sprite atlases, if you could set not only the image for a UIImageView, but also the top-left offset to display within that UIImage. Maybe this is possible.

Meanwhile, I created these useful functions in a utility class that I use in my apps. They create a UIImage from part of another UIImage, with options to rotate, scale, and flip, specified using the standard UIImageOrientation values.

My app creates a lot of UIImages during initialization, and this necessarily takes time. But some images aren't needed until a certain tab is selected. To give the appearance of a quicker load, I can create them on a separate thread spawned at startup and then just wait until it's done if that tab is selected (a sketch of this pattern follows the code below).

+ (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)aperture {
    return [ChordCalcController imageByCropping:imageToCrop toRect:aperture withOrientation:UIImageOrientationUp];
}

// Draw a full image into a crop-sized area and offset to produce a cropped, rotated image
+ (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)aperture withOrientation:(UIImageOrientation)orientation {

            // convert y coordinate to origin bottom-left
    CGFloat orgY = aperture.origin.y + aperture.size.height - imageToCrop.size.height,
            orgX = -aperture.origin.x,
            scaleX = 1.0,
            scaleY = 1.0,
            rot = 0.0;
    CGSize size;

    switch (orientation) {
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
            size = CGSizeMake(aperture.size.height, aperture.size.width);
            break;
        case UIImageOrientationDown:
        case UIImageOrientationDownMirrored:
        case UIImageOrientationUp:
        case UIImageOrientationUpMirrored:
            size = aperture.size;
            break;
        default:
            assert(NO);
            return nil;
    }


    switch (orientation) {
        case UIImageOrientationRight:
            rot = 1.0 * M_PI / 2.0;
            orgY -= aperture.size.height;
            break;
        case UIImageOrientationRightMirrored:
            rot = 1.0 * M_PI / 2.0;
            scaleY = -1.0;
            break;
        case UIImageOrientationDown:
            scaleX = scaleY = -1.0;
            orgX -= aperture.size.width;
            orgY -= aperture.size.height;
            break;
        case UIImageOrientationDownMirrored:
            orgY -= aperture.size.height;
            scaleY = -1.0;
            break;
        case UIImageOrientationLeft:
            rot = 3.0 * M_PI / 2.0;
            orgX -= aperture.size.height;
            break;
        case UIImageOrientationLeftMirrored:
            rot = 3.0 * M_PI / 2.0;
            orgY -= aperture.size.height;
            orgX -= aperture.size.width;
            scaleY = -1.0;
            break;
        case UIImageOrientationUp:
            break;
        case UIImageOrientationUpMirrored:
            orgX -= aperture.size.width;
            scaleX = -1.0;
            break;
    }

    // set the draw rect to pan the image to the right spot
    CGRect drawRect = CGRectMake(orgX, orgY, imageToCrop.size.width, imageToCrop.size.height);

    // create a context for the new image
    UIGraphicsBeginImageContextWithOptions(size, NO, imageToCrop.scale);
    CGContextRef gc = UIGraphicsGetCurrentContext();

    // apply rotation and scaling
    CGContextRotateCTM(gc, rot);
    CGContextScaleCTM(gc, scaleX, scaleY);

    // draw the image to our clipped context using the offset rect
    CGContextDrawImage(gc, drawRect, imageToCrop.CGImage);

    // pull the image from our cropped context
    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();

    // pop the context to get back to the default
    UIGraphicsEndImageContext();

    // Note: this is autoreleased
    return cropped;
}
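
As a footnote to the loading strategy above, here is a hedged sketch of the spawn-at-startup idea, assuming a hypothetical makeTabImages() helper that does the expensive image creation:

    import UIKit

    // Hypothetical helper that builds the expensive images up front
    func makeTabImages() -> [UIImage] { /* heavy cropping work */ return [] }

    let imagesReady = DispatchGroup()
    var tabImages: [UIImage] = []

    // Spawn the work at startup so it overlaps the rest of initialization
    imagesReady.enter()
    DispatchQueue.global(qos: .utility).async {
        let images = makeTabImages()
        DispatchQueue.main.async {
            tabImages = images
            imagesReady.leave()
        }
    }

    // When the tab is selected, continue as soon as the images are ready
    imagesReady.notify(queue: .main) {
        // install tabImages into the tab's views
    }
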
躲猫猫 2024-12-21 02:44:00


A very simple way to move a big image inside a UIImageView is as follows.

Suppose we have an image of size (100, 400) representing 4 states of some picture, one below another. We want to show the 2nd picture, which has offsetY = 100, in a square UIImageView of size (100, 100).
The solution is:

UIImageView *iView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
CGRect contentFrame = CGRectMake(0, 0.25, 1, 0.25);
iView.layer.contentsRect = contentFrame;
iView.image = [UIImage imageNamed:@"NAME"];

Here contentFrame is a normalized rect relative to the real UIImage size.
So, "0" means that we start the visible part of the image from the left border,
"0.25" means that we have a vertical offset of 100,
"1" means that we want to show the full width of the image,
and finally, "0.25" means that we want to show only 1/4 of the image's height.

Thus, in local image coordinates, we show the following frame:

CGRect visibleAbsoluteFrame = CGRectMake(0*100, 0.25*400, 1*100, 0.25*400);
// i.e. CGRectMake(0, 100, 100, 100)
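
A hedged Swift sketch of the same technique, assuming the same 100x400 sprite sheet with 4 stacked states (the asset name is made up); switching states is just a contentsRect update, with no new image allocation:

    import UIKit

    let iView = UIImageView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
    iView.image = UIImage(named: "NAME")  // assumed 100x400 sprite sheet

    // Show state n (0...3): each state occupies 1/4 of the image's height
    func showState(_ n: Int) {
        iView.layer.contentsRect = CGRect(x: 0, y: CGFloat(n) * 0.25, width: 1, height: 0.25)
    }

    showState(1)  // the 2nd picture, offsetY = 100 in image coordinates
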
跨年 2024-12-21 02:44:00

Rather than creating a new image (which is costly because it allocates memory), how about using CGContextClipToRect?
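
A minimal Swift sketch of that idea applied to the question's drawRect:, assuming image and frameOrigin properties analogous to those in the question:

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }

        // Clip so compositing is limited to the pixels that actually need updating
        context.clip(to: rect)

        // Draw the whole image shifted so the wanted portion lands at the origin;
        // UIImage.draw(at:) handles the vertical flip that CGContextDrawImage doesn't
        image.draw(at: CGPoint(x: -frameOrigin.x, y: -frameOrigin.y))
    }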

一曲琵琶半遮面シ 2024-12-21 02:44:00


The quickest way is to use an image mask: an image that is the same size as the image to mask, but with a certain pixel pattern indicating which portion of the image to mask out when rendering:

// maskImage is used to block off the portion that you do not want rendered
// note that rect is not actually used because the image mask defines the rect that is rendered
-(void) drawRect:(CGRect)rect maskImage:(UIImage*)maskImage {

    // Redraw the mask at the target image's size so the dimensions match image_
    // (note: UIImage has no bounds property, so a rect is built from its size)
    UIGraphicsBeginImageContext(image_.size);
    [maskImage drawInRect:CGRectMake(0, 0, image_.size.width, image_.size.height)];
    maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                    CGImageGetHeight(maskRef),
                                    CGImageGetBitsPerComponent(maskRef),
                                    CGImageGetBitsPerPixel(maskRef),
                                    CGImageGetBytesPerRow(maskRef),
                                    CGImageGetDataProvider(maskRef), NULL, false);

    CGImageRef maskedImageRef = CGImageCreateWithMask([image_ CGImage], mask);
    image_ = [UIImage imageWithCGImage:maskedImageRef scale:1.0f orientation:image_.imageOrientation];

    CGImageRelease(mask);
    CGImageRelease(maskedImageRef); 
}