How to get the RGB value of a pixel in an image on the iPhone
I am writing an iPhone application and need to essentially implement something equivalent to the 'eyedropper' tool in photoshop, where you can touch a point on the image and capture the RGB values for the pixel in question to determine and match its color. Getting the UIImage is the easy part, but is there a way to convert the UIImage data into a bitmap representation in which I could extract this information for a given pixel? A working code sample would be most appreciated, and note that I am not concerned with the alpha value.
A little more detail...
I posted earlier this evening with a consolidation and small addition to what had been said on this page - that can be found at the bottom of this post. I am editing the post at this point, however, to post what I propose is (at least for my requirements, which include modifying pixel data) a better method, as it provides writable data (whereas, as I understand it, the method provided by previous posts and at the bottom of this post provides a read-only reference to data).
Method 1: Writable Pixel Information
I defined constants, declared instance variables in my UIImage subclass, defined the pixel struct (with alpha in this version), and wrote a bitmap function that returns premultiplied RGBA (divide RGB by A to get the unmodified RGB). A consolidated sketch of those pieces follows.
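The two #define values are the ones given in the answer; the subclass, struct, and method names are my own guesses, so treat this as a minimal sketch of one way to get a writable RGBA buffer rather than the original listing:

#import <UIKit/UIKit.h>
#include <stdlib.h>

#define RGBA 4
#define RGBA_8_BIT 8

// Pixel struct (with alpha in this version).
typedef struct RGBAPixel {
    UInt8 red;
    UInt8 green;
    UInt8 blue;
    UInt8 alpha;
} RGBAPixel;

// Hypothetical UIImage subclass holding the writable pixel buffer as an instance variable.
@interface EditableUIImage : UIImage
{
    RGBAPixel *pixelData;
}
- (RGBAPixel *)bitmap;
@end

@implementation EditableUIImage

// Draws the image into a writable RGBA bitmap context and returns the backing buffer.
// The context uses premultiplied alpha, so divide each channel by alpha to recover the
// unmodified RGB values, as noted above.
- (RGBAPixel *)bitmap
{
    if (pixelData != NULL) {
        return pixelData;
    }

    CGImageRef cgImage = self.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    pixelData = calloc(width * height, sizeof(RGBAPixel));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixelData,
                                                 width,
                                                 height,
                                                 RGBA_8_BIT,      // bits per component
                                                 width * RGBA,    // bytes per row
                                                 colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(context);

    return pixelData;    // writable; free() it when the image is deallocated
}

@end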
Read-Only Data (previous information), Method 2 (a consolidated sketch follows the steps below):
Step 1. I declared a type for byte:
Step 2. I declared a struct to correspond to a pixel:
Step 3. I subclassed UIImageView and declared (with corresponding synthesized properties):
Step 4. I put the subclass code in a method named bitmap (to return the bitmap pixel data):
Step 5. I made an accessor method:
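Pulled together, the five steps could look roughly like this; the class and method names are assumptions of mine, and the accessor assumes the channel order reported by the image is RGB(A):

#import <UIKit/UIKit.h>

// Step 1: a type for byte.
typedef unsigned char byte;

// Step 2: a struct corresponding to a pixel.
typedef struct RGBPixel {
    byte red;
    byte green;
    byte blue;
} RGBPixel;

// Step 3: a UIImageView subclass holding the (read-only) copied bitmap data.
@interface PixelSamplingImageView : UIImageView
{
    CFDataRef bitmapData;    // owned copy of the CGImage's backing bytes
}
- (const byte *)bitmap;                                // Step 4
- (RGBPixel)pixelAtX:(NSUInteger)x y:(NSUInteger)y;    // Step 5
@end

@implementation PixelSamplingImageView

// Step 4: copy the CGImage's data; this buffer is read-only.
- (const byte *)bitmap
{
    if (bitmapData == NULL) {
        CGImageRef cgImage = self.image.CGImage;
        bitmapData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
    }
    return CFDataGetBytePtr(bitmapData);
}

// Step 5: accessor for one pixel, using the CGImage's own layout information.
- (RGBPixel)pixelAtX:(NSUInteger)x y:(NSUInteger)y
{
    CGImageRef cgImage = self.image.CGImage;
    size_t bytesPerRow   = CGImageGetBytesPerRow(cgImage);
    size_t bytesPerPixel = CGImageGetBitsPerPixel(cgImage) / 8;

    const byte *p = [self bitmap] + y * bytesPerRow + x * bytesPerPixel;
    RGBPixel pixel = { p[0], p[1], p[2] };    // channel order assumed to be RGB(A)
    return pixel;
}

- (void)dealloc
{
    if (bitmapData != NULL) {
        CFRelease(bitmapData);
    }
    // under manual reference counting, also call [super dealloc] here
}

@end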
Here is my solution for sampling the color of a UIImage.
This approach renders the requested pixel into a one-pixel RGBA buffer and returns the resulting color values as a UIColor object. This is much faster than most other approaches I've seen and uses only very little memory.
This should work pretty well for something like a color picker, where you typically only need the value of one specific pixel at any given time.
UIImage+Picker.h and UIImage+Picker.m:
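A sketch of what such a category could look like; colorAtPosition: is an assumed method name, and for images with transparency the buffer values come back premultiplied by alpha:

// UIImage+Picker.h
#import <UIKit/UIKit.h>

@interface UIImage (Picker)
- (UIColor *)colorAtPosition:(CGPoint)position;
@end

// UIImage+Picker.m
#import "UIImage+Picker.h"

@implementation UIImage (Picker)

- (UIColor *)colorAtPosition:(CGPoint)position
{
    // Crop a 1x1 region around the requested pixel...
    CGRect sourceRect = CGRectMake(position.x, position.y, 1.0f, 1.0f);
    CGImageRef imageRef = CGImageCreateWithImageInRect(self.CGImage, sourceRect);

    // ...and render it into a 4-byte RGBA buffer.
    unsigned char buffer[4] = {0, 0, 0, 0};
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(buffer, 1, 1, 8, 4, colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, 1, 1), imageRef);
    CGContextRelease(context);
    CGImageRelease(imageRef);

    return [UIColor colorWithRed:buffer[0] / 255.0f
                           green:buffer[1] / 255.0f
                            blue:buffer[2] / 255.0f
                           alpha:buffer[3] / 255.0f];
}

@end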
You can't access the bitmap data of a UIImage directly.
You need to get the CGImage representation of the UIImage. Then get the CGImage's data provider, and from that a CFData representation of the bitmap. Make sure to release the CFData when done.
You will probably want to look at the bitmap info of the CGImage to get pixel order, image dimensions, etc.
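That sequence could look something like this minimal sketch; the RGB channel order at the end is an assumption, so check the CGImage's bitmap info in practice:

#import <UIKit/UIKit.h>

// Logs the color of one pixel by walking the CGImage's backing data.
static void LogPixelColor(UIImage *image, size_t x, size_t y)
{
    CGImageRef cgImage = image.CGImage;

    // Get the data provider, and from it a CFData copy of the bitmap.
    CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
    CFDataRef bitmapData = CGDataProviderCopyData(provider);

    // Use the image's own layout information for row stride and pixel size.
    size_t bytesPerRow   = CGImageGetBytesPerRow(cgImage);
    size_t bytesPerPixel = CGImageGetBitsPerPixel(cgImage) / 8;

    const UInt8 *bytes = CFDataGetBytePtr(bitmapData);
    const UInt8 *pixel = bytes + y * bytesPerRow + x * bytesPerPixel;
    NSLog(@"pixel (%zu, %zu): %d %d %d", x, y, pixel[0], pixel[1], pixel[2]);

    // Release the CFData when done.
    CFRelease(bitmapData);
}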
Lajos's answer worked for me. To get the pixel data as an array of bytes, I did this:
const UInt8 *data = CFDataGetBytePtr(bitmapData);
More info: CFDataRef documentation.
Also, remember to include CoreGraphics.framework.
Thanks everyone! Putting a few of these answers together I get:
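A sketch of such a combined helper; the function name and the 4-bytes-per-pixel RGBA assumption are mine:

#import <UIKit/UIKit.h>

// Returns the color at a relative position, where x and y are each in 0.0-1.0.
static UIColor *ColorAtRelativePoint(UIImage *image, CGPoint point)
{
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    CFDataRef bitmapData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
    const UInt8 *bytes = CFDataGetBytePtr(bitmapData);
    size_t bytesPerRow = CGImageGetBytesPerRow(cgImage);

    // Map the 0.0-1.0 point onto pixel coordinates.
    size_t x = (size_t)(point.x * (width  - 1));
    size_t y = (size_t)(point.y * (height - 1));
    const UInt8 *pixel = bytes + y * bytesPerRow + x * 4;   // assumes 4 bytes per pixel, RGBA

    UIColor *color = [UIColor colorWithRed:pixel[0] / 255.0f
                                     green:pixel[1] / 255.0f
                                      blue:pixel[2] / 255.0f
                                     alpha:pixel[3] / 255.0f];
    CFRelease(bitmapData);
    return color;
}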
This just takes a point with x and y each in the 0.0-1.0 range. Example:
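For instance, sampling the center of an image (myImage is an assumed UIImage):

UIColor *center = ColorAtRelativePoint(myImage, CGPointMake(0.5f, 0.5f));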
This works great for me. I am making a couple assumptions like bits per pixel and RGBA colorspace, but this should work for most cases.
Another note - it is working on both Simulator and device for me - I have had problems with that in the past because of the PNG optimization that happened when it went on the device.
To do something similar in my application, I created a small off-screen bitmap context (via CGBitmapContextCreate) and then rendered the UIImage into it. This gave me a fast way to extract a number of pixels at once. It also means you can set up the target bitmap in a format you find easy to parse, and let CoreGraphics do the hard work of converting between color models or bitmap formats.
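A sketch of that idea, assuming a 32-bit RGBA destination format chosen purely because it is easy to parse (the function name is hypothetical):

#import <UIKit/UIKit.h>
#include <stdlib.h>

// Copies the whole image into a tightly packed RGBA byte buffer.
static UInt8 *CopyImageAsRGBA(UIImage *image, size_t *outWidth, size_t *outHeight)
{
    CGImageRef cgImage = image.CGImage;
    size_t width  = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);

    // Off-screen bitmap context in a known format: 8 bits per channel, 4 bytes per pixel.
    UInt8 *buffer = calloc(width * height * 4, sizeof(UInt8));
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(buffer, width, height, 8, width * 4,
                                                 colorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    // CoreGraphics converts whatever the source format is into this RGBA layout here.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
    CGContextRelease(context);

    if (outWidth)  *outWidth  = width;
    if (outHeight) *outHeight = height;
    return buffer;   // caller free()s; pixel (x, y) starts at buffer[(y * width + x) * 4]
}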
// Byte offset of the pixel at (x, y), assuming BytesPerPixel bytes per pixel.
// Pitch (row padding) isn't an issue with this device as far as I know and can be
// left out of the math.
pixelPosition = ((y * imagewidth) + x) * BytesPerPixel;
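Assuming data is the byte pointer from the earlier answers and an RGBA layout, the channels at that offset are then:

UInt8 red   = data[pixelPosition];
UInt8 green = data[pixelPosition + 1];
UInt8 blue  = data[pixelPosition + 2];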
Use ANImageBitmapRep which gives pixel-level access (read/write).