Most efficient way to draw part of an image in iOS
Given a UIImage and a CGRect, what is the most efficient way (in memory and time) to draw the part of the image corresponding to the CGRect (without scaling)?

For reference, this is how I currently do it:
    - (void)drawRect:(CGRect)rect {
        CGContextRef context = UIGraphicsGetCurrentContext();

        // Offset the dirty rect by the view's origin within the source image.
        CGRect frameRect = CGRectMake(frameOrigin.x + rect.origin.x,
                                      frameOrigin.y + rect.origin.y,
                                      rect.size.width,
                                      rect.size.height);

        // Crop the backing CGImage to just the region being redrawn.
        CGImageRef imageRef = CGImageCreateWithImageInRect(image_.CGImage, frameRect);

        // Flip the context vertically; CGContextDrawImage uses a bottom-left origin.
        CGContextTranslateCTM(context, 0, rect.size.height);
        CGContextScaleCTM(context, 1.0, -1.0);

        CGContextDrawImage(context, rect, imageRef);
        CGImageRelease(imageRef);
    }
Unfortunately this seems extremely slow with medium-sized images and a high setNeedsDisplay frequency. Playing with UIImageView's frame and clipsToBounds produces better results (with less flexibility).
6 Answers
The newer framework is better and easier to use.

P.S. I translated @Scott Lahteine's answer into Swift, and the result is really weird.
I guessed you are doing this to display part of an image on the screen, because you mentioned UIImageView. And optimization problems always need to be defined specifically.

Trust Apple for Regular UI stuff

Actually, UIImageView with clipsToBounds is one of the fastest and simplest ways to achieve your goal, if your goal is just clipping a rectangular region of an image (not too big). Also, you don't need to send a setNeedsDisplay message.

Or you can try putting the UIImageView inside an empty UIView and setting clipping on the container view, as sketched below. With this technique, you can transform your image freely by setting the transform property in 2D (scaling, rotation, translation).

If you need 3D transformation, you can still use a CALayer with the masksToBounds property, but CALayer will usually give you very little extra performance. Either way, you need to know all of the low-level details to use them properly for optimization.
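Here is a minimal sketch of the container-clipping technique (containerView and subRectOrigin are illustrative names, not from the original answer):

    // The container clips; the image view inside it is simply repositioned.
    UIView *containerView = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
    containerView.clipsToBounds = YES;

    UIImageView *imageView = [[UIImageView alloc] initWithImage:image];
    // Shift the image view so the desired part of the image shows through.
    imageView.frame = CGRectMake(-subRectOrigin.x, -subRectOrigin.y,
                                 image.size.width, image.size.height);
    [containerView addSubview:imageView];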
Why is that one of the fastest ways?

UIView is just a thin layer on top of CALayer, which is implemented on top of OpenGL, which is a virtually direct interface to the GPU. This means UIKit is accelerated by the GPU.

So if you use them properly (I mean, within the designed limitations), they will perform as well as a plain OpenGL implementation. If you use just a few images to display, you'll get acceptable performance with a UIView implementation, because it gets the full acceleration of the underlying OpenGL (which means GPU acceleration). Anyway, if you need extreme optimization for hundreds of animated sprites with finely tuned pixel shaders, as in a game app, you should use OpenGL directly, because CALayer lacks many options for optimization at the lower levels. In any case, at least for optimization of UI stuff, it's incredibly hard to do better than Apple.

Why is your method slower than UIImageView?

What you should know about is GPU acceleration. In all recent computers, fast graphics performance is achieved only with the GPU. The point, then, is whether the method you're using is implemented on top of the GPU or not.
IMO, CGImage drawing methods are not implemented with the GPU. I think I read a mention of this in Apple's documentation, but I can't remember where, so I'm not sure. Anyway, I believe CGImage is implemented on the CPU because its API, with bitmap editing and direct memory access to the backing storage, is designed around CPU memory; the initial design intention of CGImage should be for the CPU.

So it seems to be done on the CPU, and graphics operations done on the CPU are a lot slower than on the GPU. Simply clipping an image and compositing image layers are very simple and cheap operations for a GPU (compared to a CPU), so you can expect the UIKit library to take advantage of this, since the whole of UIKit is implemented on top of OpenGL.
About Limitations
Because optimization is a kind of micromanagement work, specific numbers and small facts are very important. What counts as "medium-sized"? OpenGL on iOS usually limits the maximum texture size to 1024x1024 pixels (maybe larger in recent releases). If your image is larger than this, it will not work, or performance will degrade greatly (I think UIImageView is optimized for images within the limits).
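If you want to check the actual limit on a given device, one illustrative way (not part of the original answer) is to query OpenGL ES directly:

    #import <OpenGLES/EAGL.h>
    #import <OpenGLES/ES2/gl.h>

    // A context must be current before GL queries return valid data.
    EAGLContext *glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    [EAGLContext setCurrentContext:glContext];

    GLint maxTextureSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize); // e.g. 2048 or 4096 on newer devices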
If you need to display huge images with clipping, you have to use another optimization, such as CATiledLayer, and that's a totally different story.

And don't go to OpenGL unless you want to know every detail of OpenGL. It needs a full understanding of low-level graphics and at least 100 times more code.
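For reference, the usual CATiledLayer setup looks roughly like this (a sketch assuming a custom UIView subclass; not code from the original answer):

    #import <QuartzCore/QuartzCore.h>

    @interface TiledImageView : UIView
    @end

    @implementation TiledImageView

    // Back this view with a CATiledLayer instead of a plain CALayer.
    + (Class)layerClass {
        return [CATiledLayer class];
    }

    // Called once per tile, possibly on background threads, with `rect`
    // covering just the tile being requested.
    - (void)drawRect:(CGRect)rect {
        // Draw only the portion of the huge image intersecting `rect` here.
    }

    @end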
About the Future

Though it is not very likely to happen, CGImage stuff (or anything else) doesn't have to stay stuck on the CPU forever. Don't forget to check the base technology of the API you're using. Still, GPU stuff is a very different beast from the CPU, so API people usually mention it explicitly and clearly.
It would ultimately be faster, with a lot less image creation from sprite atlases, if you could set not only the image for a UIImageView, but also the top-left offset to display within that UIImage. Maybe this is possible.
Meanwhile, I created these useful functions in a utility class that I use in my apps. They create a UIImage from part of another UIImage, with rotate, scale, and flip options specified using the standard UIImageOrientation values.
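The original code was not carried over; here is a minimal sketch of what such a helper might look like (the method name and signature are assumptions):

    // Crop `source` to `rect` (in points) and tag the result with an
    // orientation so it is rotated/flipped when drawn.
    + (UIImage *)imageFromImage:(UIImage *)source
                         inRect:(CGRect)rect
                    orientation:(UIImageOrientation)orientation {
        // CGImageCreateWithImageInRect works in pixels, so convert first.
        CGFloat scale = source.scale;
        CGRect pixelRect = CGRectMake(rect.origin.x * scale, rect.origin.y * scale,
                                      rect.size.width * scale, rect.size.height * scale);
        CGImageRef croppedRef = CGImageCreateWithImageInRect(source.CGImage, pixelRect);
        UIImage *cropped = [UIImage imageWithCGImage:croppedRef
                                               scale:scale
                                         orientation:orientation];
        CGImageRelease(croppedRef);
        return cropped;
    }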
My app creates a lot of UIImages during initialization, and this necessarily takes time. But some images aren't needed until a certain tab is selected. To give the appearance of a quicker load, I create them on a separate thread spawned at startup, then just wait until it's done if that tab is selected.
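A sketch of that load-then-wait pattern using GCD (the structure is illustrative; the original answer didn't show its threading code):

    // Spawned at startup: build the tab's images off the main thread.
    dispatch_group_t imageGroup = dispatch_group_create();
    dispatch_group_async(imageGroup,
                         dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // ... create the UIImages needed by the tab here ...
    });

    // Later, when the tab is selected: block only if work is still in flight.
    dispatch_group_wait(imageGroup, DISPATCH_TIME_FOREVER);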
A very simple way to move a big image inside a UIImageView is as follows.

Say we have an image of size (100, 400) representing 4 states of some picture, one below another. We want to show the 2nd picture, which has offsetY = 100, in a square UIImageView of size (100, 100).
The solution is:
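The snippet itself was lost in this copy; given the explanation below, it presumably set the layer's contentsRect, roughly like this (variable names assumed):

    // contentsRect is in normalized unit coordinates (0..1 on both axes).
    CGRect contentFrame = CGRectMake(0.0, 0.25, 1.0, 0.25);
    imageView.layer.contentsRect = contentFrame;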
Here contentFrame is a normalized frame relative to the real UIImage size.
So, "0" means that the visible part of the image starts at the left border,
"0.25" means that we have a vertical offset of 100,
"1" means that we want to show the full width of the image,
and finally, "0.25" means that we want to show only a 1/4 part of the image's height.
Thus, in local image coordinates, we show the frame (0, 100, 100, 100).
Rather than creating a new image (which is costly because it allocates memory), how about using CGContextClipToRect?
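A brief sketch of that suggestion inside drawRect:, reusing the image_ and frameOrigin ivars from the question (the exact layout is an assumption):

    - (void)drawRect:(CGRect)rect {
        CGContextRef context = UIGraphicsGetCurrentContext();

        // Clip first: pixels outside `rect` are discarded at render time,
        // and no intermediate CGImage is ever allocated.
        CGContextClipToRect(context, rect);

        // drawAtPoint: already handles the flipped CG coordinate system;
        // shift the whole image so the wanted part lines up with the view.
        [image_ drawAtPoint:CGPointMake(-frameOrigin.x, -frameOrigin.y)];
    }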
The quickest way is to use an image mask: an image that is the same size as the image to mask, but with a certain pixel pattern indicating which portion of the image to mask out when rendering:
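The answer originally pointed to an example; the standard Core Graphics masking pattern looks roughly like this (a sketch assuming a grayscale mask UIImage called maskImage):

    // Build a CGImage mask from the mask image's bitmap data.
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef),
                                        NULL,    // no decode array
                                        false);  // don't interpolate

    // In an image mask, black samples show the underlying pixels and
    // white samples hide them.
    CGImageRef maskedRef = CGImageCreateWithMask(image_.CGImage, mask);
    UIImage *maskedImage = [UIImage imageWithCGImage:maskedRef];
    CGImageRelease(mask);
    CGImageRelease(maskedRef);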