UIGraphicsBeginImageContext vs CGBitmapContextCreate on iOS
This may be a really stupid question, but can someone tell me the difference between creating a CGContextRef using UIGraphicsBeginImageContext and using CGBitmapContextCreate for drawing to images? Especially now, since UIKit drawing is thread safe, I was wondering whether there is any reason to use CGBitmapContextCreate over UIGraphicsBeginImageContext.
1 Answer
According to Apple's UIKit Function Reference, which carries a cover date of 15 November 2010, UIGraphicsBeginImageContext and related functions should still be called on the main thread only. The same text is repeated in the developer documentation that ships with the latest Xcode, 3.2.5. However, it reports the same for UIGraphicsGetCurrentContext, which I explicitly understood to be thread safe now. My understanding was that only UIGraphicsGetCurrentContext and the UIImage, UIColor and UIFont classes are now thread safe, rather than the entirety of UIKit, but I'm unable to find a definitive reference.

Regardless, UIGraphicsBeginImageContext is a UIKit wrapper that sits on top of CGBitmapContextCreate and reduces its functionality. In particular, you're limited to RGBA colour-space images with a fixed component order (though it varies according to the iOS version), and you cannot specify your own target buffer for drawing. So, for example, it's useless for doing a bunch of CoreGraphics composition and then posting the result off to OpenGL, and unhelpful for piping graphics you've already got in some array form into CoreGraphics. However, where the UIKit method supports the functionality you need and is safe to use, there is no inherent advantage to the CoreGraphics methods.
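To make the contrast concrete, here is a minimal sketch of both approaches producing the same solid-colour image. The function names (ImageViaUIKit, ImageViaCoreGraphics) are hypothetical; the point is that the Core Graphics version lets you pick the colour space, pixel format, and, crucially, supply your own backing buffer, which the UIKit wrapper does not.

```objc
#import <UIKit/UIKit.h>

// 1. UIKit wrapper: RGBA, fixed component order, UIKit owns the buffer.
//    Per the documentation quoted above, call this on the main thread.
UIImage *ImageViaUIKit(CGSize size) {
    UIGraphicsBeginImageContext(size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetRGBFillColor(ctx, 1, 0, 0, 1);
    CGContextFillRect(ctx, CGRectMake(0, 0, size.width, size.height));
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}

// 2. Core Graphics directly: you choose the colour space, byte layout and
//    the backing buffer -- e.g. one you later hand to glTexImage2D.
UIImage *ImageViaCoreGraphics(CGSize size) {
    size_t width  = (size_t)size.width;
    size_t height = (size_t)size.height;
    void *buffer = calloc(width * height, 4);      // caller-owned pixel store
    CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(buffer, width, height,
                                             8,           // bits per component
                                             width * 4,   // bytes per row
                                             colourSpace,
                                             kCGImageAlphaPremultipliedLast);
    CGContextSetRGBFillColor(ctx, 1, 0, 0, 1);
    CGContextFillRect(ctx, CGRectMake(0, 0, width, height));
    // `buffer` now holds the raw RGBA pixels, readable without going
    // through UIImage at all.
    CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(ctx);
    CGColorSpaceRelease(colourSpace);
    free(buffer);
    return image;
}
```

Note the bookkeeping the UIKit wrapper hides: with CGBitmapContextCreate you are responsible for the buffer, the colour space, and releasing the context, in exchange for full control over the pixel format.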