How to get the RGB values for a pixel on an image on the iPhone


I am writing an iPhone application and need to essentially implement something equivalent to the 'eyedropper' tool in photoshop, where you can touch a point on the image and capture the RGB values for the pixel in question to determine and match its color. Getting the UIImage is the easy part, but is there a way to convert the UIImage data into a bitmap representation in which I could extract this information for a given pixel? A working code sample would be most appreciated, and note that I am not concerned with the alpha value.


不必你懂 2024-07-13 11:03:53


A little more detail...

Earlier this evening I posted a consolidation of, and a small addition to, what had been said on this page; that can be found at the bottom of this post. I am now editing the post, however, to add what I propose is (at least for my requirements, which include modifying pixel data) a better method, since it provides writable data (whereas, as I understand it, the method described in earlier posts and at the bottom of this post gives only a read-only reference to the data).

Method 1: Writable Pixel Information

  1. I defined constants

    #define RGBA        4
    #define RGBA_8_BIT  8
    
  2. In my UIImage subclass I declared instance variables:

    size_t bytesPerRow;
    size_t byteCount;
    size_t pixelCount;
    
    CGContextRef context;
    CGColorSpaceRef colorSpace;
    
    UInt8 *pixelByteData;
    // A pointer to an array of RGBA bytes in memory
    RGBAPixel *pixelData;
    
  3. The pixel struct (with alpha in this version)

    typedef unsigned char byte;  // 8-bit channel value (method 2 below declares the same type)

    typedef struct RGBAPixel {
        byte red;
        byte green;
        byte blue;
        byte alpha;
    } RGBAPixel;
    
  4. Bitmap function (returns premultiplied RGBA; divide the RGB components by the alpha to recover the unmultiplied RGB; a short usage sketch follows after this listing):

    -(RGBAPixel*) bitmap {
        NSLog( @"Returning bitmap representation of UIImage." );
        // 8 bits each of red, green, blue, and alpha.
        [self setBytesPerRow:self.size.width * RGBA];
        [self setByteCount:bytesPerRow * self.size.height];
        [self setPixelCount:self.size.width * self.size.height];
    
        // Create RGB color space
        [self setColorSpace:CGColorSpaceCreateDeviceRGB()];
    
        if (!colorSpace)
        {
            NSLog(@"Error allocating color space.");
            return nil;
        }
    
        [self setPixelData:malloc(byteCount)];
    
        if (!pixelData)
        {
            NSLog(@"Error allocating bitmap memory. Releasing color space.");
            CGColorSpaceRelease(colorSpace);
    
            return nil;
        }
    
        // Create the bitmap context. 
        // Pre-multiplied RGBA, 8-bits per component. 
        // The source image format will be converted to the format specified here by CGBitmapContextCreate.
        [self setContext:CGBitmapContextCreate(
                                               (void*)pixelData,
                                               self.size.width,
                                               self.size.height,
                                               RGBA_8_BIT,
                                               bytesPerRow,
                                               colorSpace,
                                               kCGImageAlphaPremultipliedLast
                                               )];
    
        // Make sure we have our context
        if (!context)   {
            free(pixelData);
            NSLog(@"Context not created!");
            return nil;
        }
    
        // Draw the image to the bitmap context. 
        // The memory allocated for the context for rendering will then contain the raw image pixelData in the specified color space.
        CGRect rect = { { 0 , 0 }, { self.size.width, self.size.height } };
    
        CGContextDrawImage( context, rect, self.CGImage );
    
        // Now we can get a pointer to the image pixelData associated with the bitmap context.
        pixelData = (RGBAPixel*) CGBitmapContextGetData(context);
    
        return pixelData;
    }
    

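For reference, a minimal usage sketch for method 1, assuming image is an instance of the UIImage subclass above and is at 1x scale (so size.width matches the pixel width); it un-premultiplies the stored components, as the note on the bitmap function suggests:

// Hypothetical caller for the -bitmap method above.
RGBAPixel *pixels = [image bitmap];
if (pixels) {
    size_t width = (size_t)image.size.width;
    size_t x = 10, y = 20;                      // pixel of interest
    RGBAPixel p = pixels[y * width + x];        // row-major: index = y * width + x
    CGFloat a = p.alpha / 255.0;
    // The buffer is premultiplied, so divide each component by the alpha to recover plain RGB.
    CGFloat r = (a > 0.0) ? (p.red   / 255.0) / a : 0.0;
    CGFloat g = (a > 0.0) ? (p.green / 255.0) / a : 0.0;
    CGFloat b = (a > 0.0) ? (p.blue  / 255.0) / a : 0.0;
    UIColor *color = [UIColor colorWithRed:r green:g blue:b alpha:a];
    NSLog(@"Sampled color: %@", color);
}
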
Method 2: Read-Only Data (the information posted previously)


Step 1. I declared a type for byte:

 typedef unsigned char byte;

Step 2. I declared a struct to correspond to a pixel:

 typedef struct RGBPixel {
     byte red;
     byte green;
     byte blue;
 } RGBPixel;

Step 3. I subclassed UIImageView and declared (with corresponding synthesized properties):

//  Reference to Quartz CGImage for receiver (self)  
CFDataRef bitmapData;   

//  Buffer holding raw pixel data copied from Quartz CGImage held in receiver (self)    
UInt8* pixelByteData;

//  A pointer to the first pixel element in an array    
RGBPixel* pixelData;

Step 4. I put the following subclass code in a method named bitmap (to return the bitmap pixel data):

//Get the bitmap data from the receiver's CGImage (see UIImage docs)  
[self setBitmapData: CGDataProviderCopyData(CGImageGetDataProvider([self CGImage]))];

//Create a buffer to store bitmap data (uninitialized memory as long as the data)
[self setPixelByteData:malloc(CFDataGetLength(bitmapData))];

//Copy image data into allocated buffer    
CFDataGetBytes(bitmapData,CFRangeMake(0,CFDataGetLength(bitmapData)),pixelByteData);

//Cast a pointer to the first element of pixelByteData    
//Essentially what we're doing is making a second pointer that divides the byteData's units differently - instead of dividing each unit as 1 byte we will divide each unit as 3 bytes (1 pixel).    
pixelData = (RGBPixel*) pixelByteData;

//Now you can access pixels by index: pixelData[ index ]    
NSLog(@"Pixel data one red (%i), green (%i), blue (%i).", pixelData[0].red, pixelData[0].green, pixelData[0].blue);

//You can determine the desired index as (row * width) + column.
return pixelData;

Step 5. I made an accessor method:

-(RGBPixel*)pixelDataForRow:(int)row column:(int)column{
    //Return a pointer to the pixel data.
    //The index is row * (pixels per row) + column; this assumes a 1x image,
    //so self.image.size.width matches the pixel width.
    int width = (int)self.image.size.width;
    return &pixelData[row * width + column];
}
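A minimal usage sketch for method 2, assuming the receiver is the UIImageView subclass above and that the backing CGImage really is tightly packed 24-bit RGB with no row padding (many images are 32-bit RGBA, in which case the 3-byte RGBPixel cast will not line up and a 4-byte layout, as in the other answers, is needed):

// Hypothetical caller inside the UIImageView subclass above.
[self bitmap];   // fills bitmapData, pixelByteData, and pixelData
RGBPixel p = *[self pixelDataForRow:10 column:25];
NSLog(@"Pixel at row 10, column 25: red (%i), green (%i), blue (%i).", p.red, p.green, p.blue);

// The instance variables keep owning the memory; when the pixel data is no longer needed:
// CFRelease(bitmapData); free(pixelByteData);
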
静若繁花 2024-07-13 11:03:53


Here is my solution for sampling the color of a UIImage.

This approach renders the requested pixel into a 1x1 pixel RGBA buffer and returns the resulting color values as a UIColor object. This is much faster than most other approaches I've seen, and it uses very little memory.

This should work well for something like a color picker, where you typically only need the value of one specific pixel at any given time.

UIImage+Picker.h

#import <UIKit/UIKit.h>


@interface UIImage (Picker)

- (UIColor *)colorAtPosition:(CGPoint)position;

@end

UIImage+Picker.m

#import "UIImage+Picker.h"


@implementation UIImage (Picker)

- (UIColor *)colorAtPosition:(CGPoint)position {

    CGRect sourceRect = CGRectMake(position.x, position.y, 1.f, 1.f);
    CGImageRef imageRef = CGImageCreateWithImageInRect(self.CGImage, sourceRect);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *buffer = malloc(4);
    CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
    CGContextRef context = CGBitmapContextCreate(buffer, 1, 1, 8, 4, colorSpace, bitmapInfo);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0.f, 0.f, 1.f, 1.f), imageRef);
    CGImageRelease(imageRef);
    CGContextRelease(context);

    CGFloat r = buffer[0] / 255.f;
    CGFloat g = buffer[1] / 255.f;
    CGFloat b = buffer[2] / 255.f;
    CGFloat a = buffer[3] / 255.f;

    free(buffer);

    return [UIColor colorWithRed:r green:g blue:b alpha:a];
}

@end 
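A possible call site for the category, assuming a view controller with a hypothetical imageView outlet showing the image (Retina scale is ignored here, as it is in the code above):

// Hypothetical touch handler in a view controller that shows the image in self.imageView.
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    CGPoint viewPoint = [[touches anyObject] locationInView:self.imageView];
    UIImage *image = self.imageView.image;
    // Map the touch point from view coordinates into the image's own coordinate space.
    CGPoint imagePoint = CGPointMake(viewPoint.x * image.size.width  / self.imageView.bounds.size.width,
                                     viewPoint.y * image.size.height / self.imageView.bounds.size.height);
    UIColor *picked = [image colorAtPosition:imagePoint];
    NSLog(@"Picked color: %@", picked);
}
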
咋地 2024-07-13 11:03:53


You can't access the bitmap data of a UIImage directly.

You need to get the CGImage representation of the UIImage. Then get the CGImage's data provider, from that a CFData representation of the bitmap. Make sure to release the CFData when done.

CGImageRef cgImage = [image CGImage];
CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
CFDataRef bitmapData = CGDataProviderCopyData(provider);

You will probably want to look at the bitmap info of the CGImage to get pixel order, image dimensions, etc.
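For example, these are typical CGImage accessors to consult, building on the three lines above:

size_t width        = CGImageGetWidth(cgImage);
size_t height       = CGImageGetHeight(cgImage);
size_t bitsPerPixel = CGImageGetBitsPerPixel(cgImage);    // commonly 32 for RGBA/ARGB images
size_t bytesPerRow  = CGImageGetBytesPerRow(cgImage);     // may include padding beyond width * bytes-per-pixel
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(cgImage);  // byte order and alpha position
CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(cgImage);

// ...read pixels out of bitmapData using the values above...
CFRelease(bitmapData);   // release the CFData when done, as noted above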

墨小墨 2024-07-13 11:03:53


Lajos's answer worked for me. To get the pixel data as an array of bytes, I did this:

const UInt8* data = CFDataGetBytePtr(bitmapData);

More info: CFDataRef documentation.

Also, remember to include CoreGraphics.framework

远山浅 2024-07-13 11:03:53


Thanks everyone! Putting a few of these answers together I get:

- (UIColor*)colorFromImage:(UIImage*)image sampledAtPoint:(CGPoint)p {
    CGImageRef cgImage = [image CGImage];
    CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
    CFDataRef bitmapData = CGDataProviderCopyData(provider);
    const UInt8* data = CFDataGetBytePtr(bitmapData);
    size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    int col = p.x*(width-1);
    int row = p.y*(height-1);
    const UInt8* pixel = data + row*bytesPerRow+col*4;
    UIColor* returnColor = [UIColor colorWithRed:pixel[0]/255. green:pixel[1]/255. blue:pixel[2]/255. alpha:1.0];
    CFRelease(bitmapData);
    return returnColor;
}

This just takes a point range 0.0-1.0 for both x and y. Example:

UIColor* sampledColor = [self colorFromImage:image
         sampledAtPoint:CGPointMake(p.x/imageView.frame.size.width,
                                    p.y/imageView.frame.size.height)];

This works great for me. I am making a couple of assumptions, such as the bits per pixel and an RGBA color space, but this should work for most cases.

Another note - it works for me on both the Simulator and the device. I have had problems with that in the past because of the PNG optimization that happens when an image is bundled for the device.

南街九尾狐 2024-07-13 11:03:53


To do something similar in my application, I created a small off-screen bitmap context (via CGBitmapContextCreate) and then rendered the UIImage into it. This gave me a fast way to extract a number of pixels at once. It means you can set up the target bitmap in a format you find easy to parse and let CoreGraphics do the hard work of converting between color models or bitmap formats.
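A sketch of one way to do that, assuming a UIImage named image: draw it into a bitmap context whose pixel format you choose, then read the buffer directly in a known RGBA layout.

// Render `image` into an RGBA8888 buffer we control, so the layout is known
// regardless of the source image's own format.
CGImageRef cgImage = image.CGImage;
size_t width  = CGImageGetWidth(cgImage);
size_t height = CGImageGetHeight(cgImage);
size_t bytesPerRow = width * 4;

unsigned char *buffer = calloc(height, bytesPerRow);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(buffer, width, height, 8, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
CGContextRelease(context);

// buffer now holds width * height RGBA pixels; for example, the pixel at (x, y) starts at:
// unsigned char *p = buffer + y * bytesPerRow + x * 4;   // p[0]=R, p[1]=G, p[2]=B, p[3]=A
free(buffer);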

南汐寒笙箫 2024-07-13 11:03:53


I don't know how to index into the image data correctly based on a given X,Y coordinate. Does anyone know?

pixelPosition = (x+(y*((imagewidth)*BytesPerPixel)));

// pitch isn't an issue with this device as far as I know and can be left at zero...
// ( or pulled out of the math ).
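For reference, the conventional indexing into a packed pixel buffer looks like this; using bytesPerRow (rather than width * BytesPerPixel) also covers the case where rows are padded. The names data, bytesPerRow, and bytesPerPixel are assumed to be available from the surrounding code:

// Byte offset of pixel (x, y):
size_t offset = y * bytesPerRow + x * bytesPerPixel;
UInt8 red   = data[offset];       // assuming RGBA component order
UInt8 green = data[offset + 1];
UInt8 blue  = data[offset + 2];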

丑丑阿 2024-07-13 11:03:53


Use ANImageBitmapRep which gives pixel-level access (read/write).
