Retrieving the pixel alpha value for a UIImage
I am currently trying to obtain the alpha value of a pixel in a UIImageView. I have obtained the CGImage from [UIImageView image] and created an RGBA byte array from it. Alpha is premultiplied.
CGImageRef image = uiImage.CGImage;
NSUInteger width = CGImageGetWidth(image);
NSUInteger height = CGImageGetHeight(image);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
unsigned char *rawData = malloc(height * bytesPerRow);
CGContextRef context = CGBitmapContextCreate(
    rawData, width, height, bitsPerComponent, bytesPerRow, colorSpace,
    (CGBitmapInfo)(kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big)
);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGContextRelease(context);
I then calculate the array index of the alpha channel for the given coordinates from the UIImageView.
int byteIndex = (int)(bytesPerRow * uiViewPoint.y) + (int)uiViewPoint.x * bytesPerPixel;
unsigned char alpha = rawData[byteIndex + 3];
However, I don't get the values I expect. For a completely black, transparent area of the image I get non-zero values for the alpha channel. Do I need to translate the coordinates between UIKit and Core Graphics, i.e. is the y-axis inverted? Or have I misunderstood premultiplied alpha values?
Update:
@Nikolai Ruhe's suggestion was key to this. I did not in fact need to translate between UIKit coordinates and Core Graphics coordinates. However, after setting the blend mode my alpha values were what I expected:
CGContextSetBlendMode(context, kCGBlendModeCopy);
4 Answers
If all you want is the alpha value of a single point, all you need is an alpha-only single-point buffer. I believe this should suffice:
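(The code block from this answer did not survive the page conversion. A minimal sketch of the idea, assuming a UIImage *image and a CGPoint point giving the pixel of interest; both names are assumptions:)

```objc
// Sketch: read the alpha of `image` at `point` through a 1x1,
// alpha-only bitmap context. `image` and `point` are assumed names.
unsigned char alphaByte = 0;
CGContextRef ctx = CGBitmapContextCreate(&alphaByte, 1, 1, 8, 1,
                                         NULL, (CGBitmapInfo)kCGImageAlphaOnly);
CGContextSetBlendMode(ctx, kCGBlendModeCopy);
// Shift the context so that `point` lands on the single pixel, then draw.
CGContextTranslateCTM(ctx, -point.x, -point.y);
CGContextDrawImage(ctx,
                   CGRectMake(0, 0, CGImageGetWidth(image.CGImage),
                              CGImageGetHeight(image.CGImage)),
                   image.CGImage);
CGContextRelease(ctx);
CGFloat alpha = alphaByte / 255.0;
```

Because the context is alpha-only and one pixel wide, the buffer is a single byte and no color space is needed.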
If the UIImage doesn't have to be recreated every time, this is very efficient.
EDIT December 8 2011:
A commenter points out that under certain circumstances the image may be flipped. I've been thinking about this, and I'm a little sorry that I didn't write the code using the UIImage directly (I think the reason is that at the time I didn't understand UIGraphicsPushContext); that would have solved the flipping issue.
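That UIImage-based variant might look like this (a sketch; `image` and `point` are assumed names). Drawing the UIImage itself lets UIKit apply the image's orientation for you:

```objc
// Sketch: 1x1 alpha-only buffer, but drawn via UIKit so the image's
// orientation (flip) is handled automatically.
unsigned char alphaByte = 0;
CGContextRef ctx = CGBitmapContextCreate(&alphaByte, 1, 1, 8, 1,
                                         NULL, (CGBitmapInfo)kCGImageAlphaOnly);
// Make `ctx` the current UIKit context so -drawAtPoint: targets it.
UIGraphicsPushContext(ctx);
[image drawAtPoint:CGPointMake(-point.x, -point.y)];
UIGraphicsPopContext();
CGContextRelease(ctx);
CGFloat alpha = alphaByte / 255.0;
```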
Yes, CGContexts have their y-axis pointing up while in UIKit it points down. See the docs.
Edit after reading code:
You also want to set the blend mode to replace (copy) before drawing the image, since you want the image's alpha value, not the one that was in the context's buffer before:
CGContextSetBlendMode(context, kCGBlendModeCopy);
Edit after thinking:
You could do the lookup much more efficiently by building the smallest possible CGBitmapContext (1x1 pixel? maybe 8x8? have a try) and translating the context to your desired position before drawing:
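(The code for this suggestion was lost in conversion. A sketch, reusing the question's variables — image, width, height, colorSpace — plus an assumed target point p; p must already be in Core Graphics coordinates, i.e. flipped if it came from UIKit:)

```objc
// Sketch: 1x1 RGBA buffer; draw only the pixel we care about.
unsigned char pixel[4] = {0};
CGContextRef small = CGBitmapContextCreate(pixel, 1, 1, 8, 4, colorSpace,
    (CGBitmapInfo)(kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big));
CGContextSetBlendMode(small, kCGBlendModeCopy);
// Translate so that image pixel (p.x, p.y) lands on the context's
// single pixel at (0, 0).
CGContextTranslateCTM(small, -p.x, -p.y);
CGContextDrawImage(small, CGRectMake(0, 0, width, height), image);
unsigned char alpha = pixel[3];
CGContextRelease(small);
```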
I found this question/answer while researching how to do collision detection between sprites using the alpha value of the image data, rather than a rectangular bounding box. The context is an iPhone app... I am trying to do the above suggested 1 pixel draw and I am still having problems getting this to work, but I found an easier way of creating a CGContextRef using data from the image itself, and the helper functions here:
This bypasses all the ugly hardcoding in the sample above. The last value can be retrieved by calling CGImageGetBitmapInfo(), but in my case it returned a value from the image that caused an error in the CGBitmapContextCreate function. Only certain combinations are valid, as documented here: http://developer.apple.com/qa/qa2001/qa1037.html
Hope this is helpful!
It's possible. In CGImage, the pixel data is in English reading order: left-to-right, top-to-bottom. So, the first pixel in the array is the top-left; the second pixel is one from the left on the top row; etc.
Assuming you have that right, you should also make sure you're looking at the correct component within a pixel. Perhaps you're expecting RGBA but asking for ARGB, or vice versa. Or, maybe you have the byte order wrong (I don't know what the iPhone's endianness is).
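The layout arithmetic for the two common component orders can be written out as plain C (an illustration, not code from either answer):

```c
#include <stddef.h>

/* Offset of the alpha byte for pixel (x, y) in an 8-bit-per-component,
 * 4-byte-per-pixel buffer, for the two common component orders. */
size_t alpha_offset_rgba(size_t x, size_t y, size_t bytesPerRow) {
    return y * bytesPerRow + x * 4 + 3;  /* RGBA: alpha is the 4th byte */
}

size_t alpha_offset_argb(size_t x, size_t y, size_t bytesPerRow) {
    return y * bytesPerRow + x * 4;      /* ARGB: alpha is the 1st byte */
}
```

For a 10-pixel-wide image (bytesPerRow = 40), the alpha of pixel (2, 1) sits at offset 51 under RGBA but offset 48 under ARGB, so confusing the two orders reads a color component instead.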
It doesn't sound like it.
For those who don't know: Premultiplied means that the color components are premultiplied by the alpha; the alpha component is the same whether the color components are premultiplied by it or not. You can reverse this (unpremultiply) by dividing the color components by the alpha.
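As a small numeric illustration of un-premultiplying with integer 0-255 components (my own sketch, not code from the thread):

```c
/* Recover an original color component from its premultiplied value.
 * Premultiplication stores premul = color * alpha / 255, so the
 * inverse is color = premul * 255 / alpha (rounded here).
 * When alpha is 0 the color is unrecoverable; return 0 by convention. */
unsigned char unpremultiply(unsigned char premul, unsigned char alpha) {
    if (alpha == 0) return 0;
    return (unsigned char)((premul * 255 + alpha / 2) / alpha);
}
```

For example, a fully red pixel at 50% alpha is stored with a red component of about 128; unpremultiply(128, 128) recovers 255. The alpha byte itself needs no such conversion, which is why reading it directly (as in the question) is fine.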