Detecting the alpha channel of a transformed image in a UIImageView
I'm using this function to detect if the touched point of a UIImageView corresponds to a transparent pixel or not.
-(BOOL)isTransparency:(CGPoint)point
{
    CGImageRef cgim = self.image.CGImage;
    unsigned char pixel[1] = {0};

    // Scale the touch point from view coordinates to image pixel coordinates.
    point.x = point.x / (self.frame.size.width / CGImageGetWidth(cgim));
    point.y = point.y / (self.frame.size.height / CGImageGetHeight(cgim));

    // 1x1, 8-bit, alpha-only context backed by the single-byte buffer above.
    CGContextRef context = CGBitmapContextCreate(pixel,
                                                 1, 1, 8, 1, NULL,
                                                 kCGImageAlphaOnly);
    CGContextSetBlendMode(context, kCGBlendModeCopy);

    // Draw the image offset so the touched pixel lands on the context's
    // single pixel (CoreGraphics' origin is bottom-left, hence the y flip).
    CGContextDrawImage(context, CGRectMake(-point.x,
                                           -(self.image.size.height - point.y),
                                           CGImageGetWidth(cgim),
                                           CGImageGetHeight(cgim)),
                       cgim);
    CGContextRelease(context);

    CGFloat alpha = pixel[0] / 255.0;
    return alpha < 0.01;
}
This works fine as long as the UIImageView's transform is untouched. But once the user rotates the image, the transform matrix changes: this code still samples the image in its original position, which no longer corresponds to the transformed position, so I get false results.
My question is how can I detect if the touched pixel of the image of the UIImageView is transparent when it is rotated?
Thanks.
EDIT: -Version 1.1-
Hi. Thanks for the answer. I've modified the code, but this version still behaves the same as the previous one:
-(BOOL)isTransparency:(CGPoint)point
{
    CGImageRef cgim = self.image.CGImage;
    NSLog(@"Image Size W:%lu H:%lu", CGImageGetWidth(cgim), CGImageGetHeight(cgim));
    unsigned char pixel[1] = {0};

    // Scale the touch point from view coordinates to image pixel coordinates.
    point.x = point.x / (self.frame.size.width / CGImageGetWidth(cgim));
    point.y = point.y / (self.frame.size.height / CGImageGetHeight(cgim));

    CGContextRef context = CGBitmapContextCreate(pixel,
                                                 1, 1, 8, 1, NULL,
                                                 kCGImageAlphaOnly);
    CGContextSetBlendMode(context, kCGBlendModeCopy);

    // Attempt to replicate the view's transform around the image center.
    CGContextTranslateCTM(context, self.image.size.width * 0.5f, self.image.size.height * 0.5f);
    CGContextConcatCTM(context, self.transform);
    //CGContextScaleCTM(context, [self currentPercent]/100, [self currentPercent]/100);
    //CGContextRotateCTM(context, atan2f(self.transform.b, self.transform.a));
    CGContextTranslateCTM(context, -self.image.size.width * 0.5f, -self.image.size.height * 0.5f);

    CGContextDrawImage(context, CGRectMake(-point.x,
                                           -(self.image.size.height - point.y),
                                           self.image.size.width,
                                           self.image.size.height),
                       cgim);
    CGContextRelease(context);

    CGFloat alpha = pixel[0] / 255.0;
    NSLog(@"Is Transparent: %d", alpha < 0.01);
    return alpha < 0.01;
}
I tried several functions to rotate and scale the original image to the current rotation and size, without success. :(
Any help please?
EDIT: Version 1.2
I found a workaround for this problem. I decided to create a context the size of the screen and draw only the touched object into it, applying the current transform matrix.
After that I create a bitmap from this context, in which the image is drawn in its on-screen position, and pass that image to the function that only works with untransformed images.
It works; I just don't know if it is the optimal way to do it.
Here is the code:
-(BOOL)isTransparency:(CGPoint)point
{
    // Render the transformed view into an offscreen context the size of
    // the superview, so the image ends up in its on-screen position.
    UIGraphicsBeginImageContext(self.superview.bounds.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);
    CGContextSetBlendMode(context, kCGBlendModeCopy);

    // Replicate the view's placement: move to its center, apply its
    // transform, then offset back by the anchor point.
    CGContextTranslateCTM(context, self.center.x, self.center.y);
    CGContextConcatCTM(context, self.transform);
    CGContextTranslateCTM(context,
                          -self.bounds.size.width * self.layer.anchorPoint.x,
                          -self.bounds.size.height * self.layer.anchorPoint.y);
    [self.image drawInRect:self.bounds];
    CGContextRestoreGState(context);

    CGImageRef viewImage = CGBitmapContextCreateImage(context);
    UIGraphicsEndImageContext();

    BOOL transparent = [self isTransparentPixel:point image:viewImage];
    CGImageRelease(viewImage);
    return transparent;
}
This function renders an image the size of the superview (in my case always full screen) and receives as a parameter the touched point, in the parent view's UIKit coordinate system.
-(BOOL)isTransparentPixel:(CGPoint)point image:(CGImageRef)cgim
{
    NSLog(@"Image Size W:%lu H:%lu", CGImageGetWidth(cgim), CGImageGetHeight(cgim));
    unsigned char pixel[1] = {0};

    // 1x1, 8-bit, alpha-only context backed by a single-byte buffer.
    CGContextRef context = CGBitmapContextCreate(pixel,
                                                 1, 1, 8, 1, NULL,
                                                 kCGImageAlphaOnly);
    CGContextSetBlendMode(context, kCGBlendModeCopy);

    // Offset so the touched pixel lands on the context's single pixel
    // (CoreGraphics' origin is bottom-left, hence the y flip).
    CGContextDrawImage(context, CGRectMake(-point.x,
                                           -(self.superview.frame.size.height - point.y),
                                           CGImageGetWidth(cgim),
                                           CGImageGetHeight(cgim)),
                       cgim);
    CGContextRelease(context);

    CGFloat alpha = pixel[0] / 255.0;
    return alpha < 0.01;
}
This function samples the alpha channel of the given pixel in the given image.
Thanks to all.
Answer:
Apply the transform from your image view to your CGContext with a function such as CGContextConcatCTM() before drawing your image.