Crop a UIImage to alpha
I have a rather large, almost full screen image that I'm going to be displaying on an iPad. The image is about 80% transparent. I need to, on the client, determine the bounding box of the opaque pixels, and then crop to that bounding box.
Scanning other questions here on StackOverflow and reading some of the CoreGraphics docs, I think I could accomplish this by:
CGBitmapContextCreate(...)   // use this to render the image to a byte array
// iterate through this byte array to find the bounding box
CGImageCreateWithImageInRect(image, boundingRect);
That just seems very inefficient and clunky. Is there something clever I can do with CGImage masks or something which makes use of the device's graphics acceleration to do this?
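For reference, the first step of that outline could look something like the sketch below. Rendering into an alpha-only context (kCGImageAlphaOnly) keeps the buffer to one byte per pixel, a quarter the memory of an RGBA render. The helper name RenderAlphaBytes is illustrative, not an existing API:

#import <UIKit/UIKit.h>

// Sketch: render a UIImage's alpha channel into a malloc'd buffer,
// one byte per pixel, ready to be scanned for the opaque bounding box.
// The caller must free() the returned buffer.
static unsigned char *RenderAlphaBytes(UIImage *image, size_t *outWidth, size_t *outHeight)
{
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    unsigned char *alpha = calloc(width * height, 1);
    // An alpha-only context needs no color space; bytesPerRow == width.
    CGContextRef context = CGBitmapContextCreate(alpha, width, height, 8, width,
                                                 NULL, (CGBitmapInfo)kCGImageAlphaOnly);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(context);

    *outWidth = width;
    *outHeight = height;
    return alpha;
}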
Thanks to user404709 for doing all the hard work.
The code below also handles retina images and frees the CFDataRef.
I created a category on UIImage which does this, if anyone needs it...
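A sketch of what such a category might look like (the method name imageByCroppingToOpaqueBoundingBox is illustrative; this version renders into an alpha-only bitmap context rather than copying the data provider's bytes, so no CFDataRef needs releasing, and it rebuilds the result with the source image's scale so retina images come out right):

#import <UIKit/UIKit.h>

@interface UIImage (TrimAlpha)
// Returns the image cropped to the bounding box of its non-transparent
// pixels, or nil if the image is fully transparent.
- (UIImage *)imageByCroppingToOpaqueBoundingBox;
@end

@implementation UIImage (TrimAlpha)

- (UIImage *)imageByCroppingToOpaqueBoundingBox
{
    CGImageRef cgImage = self.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Render the alpha channel into a one-byte-per-pixel buffer.
    unsigned char *alpha = calloc(width * height, 1);
    CGContextRef context = CGBitmapContextCreate(alpha, width, height, 8, width,
                                                 NULL, (CGBitmapInfo)kCGImageAlphaOnly);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(context);

    // Scan for the smallest rectangle containing every opaque pixel.
    size_t minX = width, minY = height, maxX = 0, maxY = 0;
    BOOL foundOpaque = NO;
    for (size_t y = 0; y < height; y++) {
        for (size_t x = 0; x < width; x++) {
            if (alpha[y * width + x] != 0) {
                foundOpaque = YES;
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }
    free(alpha);

    if (!foundOpaque) return nil;

    // Crop in pixel coordinates, then restore the original scale and
    // orientation so @2x/@3x (retina) images keep their point size.
    CGRect box = CGRectMake(minX, minY, maxX - minX + 1, maxY - minY + 1);
    CGImageRef cropped = CGImageCreateWithImageInRect(cgImage, box);
    UIImage *result = [UIImage imageWithCGImage:cropped
                                          scale:self.scale
                                    orientation:self.imageOrientation];
    CGImageRelease(cropped);
    return result;
}

@end

Usage is then a one-liner: UIImage *trimmed = [bigImage imageByCroppingToOpaqueBoundingBox];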
There is no clever cheat to get around having the device do the work, but there are some ways to accelerate the task, or minimize the impact on the user interface.
First, consider whether this task needs accelerating at all. A simple iteration through the byte array may well be fast enough. There is little reason to invest in optimizing it if the app calculates the bounding box only once per run, or in response to a user choice that occurs at most once every few seconds.
If the bounding box is not needed for some time after the image becomes available, this iteration may be launched in a separate thread. That way the calculation doesn't block the main interface thread. Grand Central Dispatch may make using a separate thread for this task easier.
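A minimal sketch of that idea with GCD, assuming a cropping method like the category above and hypothetical largeImage/imageView variables:

// Do the scan-and-crop on a background queue, then update the UI
// on the main queue once the result is ready.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *cropped = [largeImage imageByCroppingToOpaqueBoundingBox];
    dispatch_async(dispatch_get_main_queue(), ^{
        imageView.image = cropped;
    });
});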
If the task must be accelerated, for example for real-time processing of video frames, then parallel processing of the data may help. The Accelerate framework can help in setting up SIMD calculations on the data. Or, to really squeeze performance out of this iteration, ARM assembly code using the NEON SIMD operations could get great results, at the cost of significant development effort.
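To illustrate the Accelerate idea, vDSP can reduce a whole row of alpha values in a couple of vectorized calls. This sketch assumes RGBA8888 data with alpha as the last byte of each pixel, and a caller-provided float scratch buffer of width elements:

#include <Accelerate/Accelerate.h>
#include <stdbool.h>

// Returns true if any pixel in the given row has nonzero alpha.
static bool RowHasOpaquePixels(const unsigned char *pixels, size_t width,
                               size_t row, size_t bytesPerRow, float *scratch)
{
    const unsigned char *alphaBytes = pixels + row * bytesPerRow + 3;
    // Gather the row's alpha bytes (stride 4) into the float scratch buffer...
    vDSP_vfltu8(alphaBytes, 4, scratch, 1, width);
    // ...then take the row maximum in one vectorized pass.
    float maxAlpha = 0.0f;
    vDSP_maxv(scratch, 1, &maxAlpha, width);
    return maxAlpha > 0.0f;
}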
The last choice is to investigate a better algorithm. There is a huge body of work on detecting features in images; an edge detection algorithm may be faster than a simple iteration through the byte array. Perhaps Apple will add edge detection to Core Graphics in the future in a form that applies to this case. An Apple-implemented image processing capability might not match this case exactly, but Apple's implementation should be optimized to use the SIMD or GPU capabilities of the iPad, giving better overall performance.
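Short of full feature detection, even a smarter scan order can beat the naive loop: searching inward from each border stops at the first opaque pixel on that side, so only the transparent margins are ever scanned in full. A sketch for the top edge, reusing the one-byte-per-pixel alpha buffer from the earlier sketches (the other three edges are symmetric):

// Returns the index of the first row, scanning top-down, that contains an
// opaque pixel; returns height if the image is fully transparent.
static size_t FirstOpaqueRow(const unsigned char *alpha, size_t width, size_t height)
{
    for (size_t y = 0; y < height; y++) {
        const unsigned char *row = alpha + y * width;
        for (size_t x = 0; x < width; x++) {
            if (row[x] != 0) return y;
        }
    }
    return height;
}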