How do I convert the alpha channel of a UIImage/CGImageRef into a mask?
How can I extract the alpha channel of a UIImage or CGImageRef and convert it into a mask that I can use with CGImageMaskCreate?
For example:
Essentially, given any image, I don't care about the colors inside the image. All I want is to create a grayscale image that represents the alpha channel. This image can then be used to mask other images.
An example behavior of this is in the UIBarButtonItem when you supply it an icon image. According to the Apple docs it states:
The images displayed on the bar are derived from this image. If this image is too large to fit on the bar, it is scaled to fit. Typically, the size of a toolbar and navigation bar image is 20 x 20 points. The alpha values in the source image are used to create the images—opaque values are ignored.
The UIBarButtonItem takes any image and looks only at the alpha, not the colors of the image.
3 Answers
To color icons the way the bar button items do, you don't want the traditional mask; you want the inverse of a mask: one where the opaque pixels in the original image take on your final coloring, rather than the other way around.

Here's one way to accomplish this. Take your original RGBA image, and process it by:

E.g.

Now you can use finalMaskImage as the mask in CGContextClipToMask, etc.
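The answer's processing steps and code listing are not reproduced in the page text. As a rough illustration of the "inverse mask" idea it describes, here is a pure-Swift sketch that tints raw RGBA bytes while preserving their alpha, the way UIBarButtonItem renders an icon; the byte layout and function name are assumptions of this sketch, not the answer's original CoreGraphics code:

```swift
// Sketch: replace every pixel's color with a single tint color while
// keeping the source alpha, so opaque pixels show the tint and
// transparent pixels stay transparent (UIBarButtonItem-style coloring).
// Assumes non-premultiplied 8-bit RGBA byte layout.
func tintUsingAlpha(rgba pixels: [UInt8], tint: (r: UInt8, g: UInt8, b: UInt8)) -> [UInt8] {
    var out = pixels
    for i in stride(from: 0, to: out.count, by: 4) {
        out[i]     = tint.r
        out[i + 1] = tint.g
        out[i + 2] = tint.b
        // out[i + 3] (alpha) is deliberately left untouched.
    }
    return out
}
```

For example, tinting an opaque gray pixel followed by a transparent pixel with pure blue keeps the first pixel opaque blue and the second fully transparent.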
The solution by Ben Zotto is correct, but there is a way to do this with no math or local complexity by relying on CGImage to do the work for us.

The following solution uses Swift (v3) to create a mask from an image by inverting the alpha channel of an existing image. Transparent pixels in the source image will become opaque, and partially transparent pixels will be inverted to be proportionally more or less transparent.
The only requirement for this solution is a CGImage base image. One can be obtained from UIImage.cgImage for most UIImages. If you're rendering the base image yourself in a CGContext, use CGContext.makeImage() to generate a new CGImage.

The code
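The answer's Swift listing is missing from the page text. As a stand-in, here is a pure-Swift sketch of the inversion it describes, operating on raw RGBA bytes rather than a CGImage; the byte layout and function name are assumptions of this sketch, not the answer's code:

```swift
// Sketch of the alpha inversion described above: transparent pixels
// (alpha 0) become opaque (255) and vice versa, with partial alpha
// inverted proportionally. Assumes 8-bit RGBA byte layout; the answer
// itself performs the equivalent on a CGImage via CoreGraphics.
func invertAlpha(rgba pixels: [UInt8]) -> [UInt8] {
    var out = pixels
    for i in stride(from: 3, to: out.count, by: 4) {
        out[i] = 255 - out[i]   // invert only the alpha byte
    }
    return out
}
```

For example, a fully transparent black pixel [0, 0, 0, 0] becomes the opaque pixel [0, 0, 0, 255], and an alpha of 64 becomes 191.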
That's it! The mask CGImage is now ready to be used with context.clip(to: rect, mask: mask!).

Demo
Here is my base image with "Mask Image" in opaque red on a transparent background:
To demonstrate what happens when running it through the above algorithm, here is an example which simply renders the resulting image over a green background.
Now we can use that image to mask any rendered content. Here's an example where we render a masked gradient on top of the green from the previous example.
(Note: You could also also swap the
CGImage
code to use Accelerate Framework'svImage
, possibly benefiting from the vector processing optimizations in that library. I haven't tried it.)我尝试了 quixoto 提供的代码,但它对我不起作用,所以我对其进行了一些更改。
I tried the code provided by quixoto but it didn't work for me so I changed it a little bit.
The problem was that drawing only the alpha channel wasn't working for me, so I did that manually by first obtaining the data of the original image and working on the alpha channel.
You can call that function like this:
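This answer's modified code and the call site it refers to are not included in the page text. As an illustration of "obtaining the data of the original image and working on the alpha channel" manually, here is a pure-Swift sketch; the byte layout and function name are assumptions, and the answer's actual code may differ:

```swift
// Sketch of the manual approach: walk the raw RGBA bytes of an image
// and copy out each pixel's alpha as one grayscale byte, producing a
// buffer suitable for building a grayscale mask image.
// Assumes 8-bit RGBA byte layout.
func alphaMaskBytes(fromRGBA pixels: [UInt8]) -> [UInt8] {
    var mask = [UInt8]()
    mask.reserveCapacity(pixels.count / 4)
    for i in stride(from: 3, to: pixels.count, by: 4) {
        mask.append(pixels[i])   // one gray byte per pixel, equal to its alpha
    }
    return mask
}
```

For example, `alphaMaskBytes(fromRGBA: [255, 0, 0, 255, 0, 0, 0, 0])` (an opaque red pixel followed by a transparent pixel) yields `[255, 0]`.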