32 bits/component image created with CGImageCreate is actually only 8 bits/component

Posted 2024-10-28 04:38:47


For the past 4 to 5 hours I've been wrestling with this very bizarre issue. I have an array of bytes containing pixel values from which I'd like to make an image. The array holds 32-bit-per-component values. There is no alpha channel, so the image is 96 bits/pixel.

I have specified all of this to the CGImageCreate function as follows:

  CGImageRef img = CGImageCreate(width, height, 32, 96, bytesPerRow, space, kCGImageAlphaNone, provider, NULL, NO, kCGRenderingIntentDefault);

bytesPerRow is 3*width*4: there are 3 components per pixel, and each component takes 4 bytes (32 bits), so the total number of bytes per row is 3*4*width. The data provider is defined as follows:

     CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,bitmapData,3*4*width*height,NULL);
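
(As an aside, CGDataProviderCreateWithData does not copy the buffer, and its fourth argument, NULL above, is a release callback that Quartz invokes once it is finished with the data. A minimal sketch of such a callback, using releaseBitmapData as a hypothetical name, might look like this:)

    // Sketch: free the malloc'd buffer once Quartz no longer needs it
    // (the provider does not copy the data it is given).
    static void releaseBitmapData(void *info, const void *data, size_t size)
    {
        free((void *)data);
    }

    CGDataProviderRef provider =
        CGDataProviderCreateWithData(NULL, bitmapData, 3*4*width*height, releaseBitmapData);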

This is where things get bizarre. In my array I am explicitly setting the values to 0x000000FF (for all 3 channels), and yet the image comes out completely white. If I set the value to 0xFFFFFF00, the image comes out black. This tells me that the program is, for some reason, not reading all 4 bytes of each component and is instead reading only the least significant byte. I have tried all sorts of combinations, even including an alpha channel, but it has made no difference.

The program is blind to 0xAAAAAA00: it simply reads it as 0. When I explicitly specify that there are 32 bits per component, shouldn't the function take this into account and actually read 4 bytes per component from the array?

The byte array is defined as bitmapData = (char*)malloc(bytesPerRow*height); and I assign values to it as follows:

 for(i=0;i<width*height;i++)
{
    *((unsigned int *)(bitmapData + 12*i + 0)) = 0xFFFFFF00;
    *((unsigned int *)(bitmapData + 12*i + 4)) = 0xFFFFFF00;
    *((unsigned int *)(bitmapData + 12*i + 8)) = 0xFFFFFF00;
}

Note that I address the array as an unsigned int in order to write 4 bytes of memory at a time. i is multiplied by 12 because there are 12 bytes per pixel, and the offsets of 4 and 8 let the loop address the green and blue channels. I have inspected the array's memory in the debugger and it looks perfectly fine: the loop is writing 4 bytes per component. Any sort of pointers on this would be most helpful. My ultimate goal is to be able to read 32-bit FITS files, for which I already have the program written; I am only testing the code above with this test array.
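
For reference, the same fill can be written with a typed pointer; this sketch should be behaviour-identical to the loop above:

    // Equivalent fill: view the buffer as an array of 32-bit components,
    // three components per pixel (uint32_t comes from <stdint.h>).
    uint32_t *components = (uint32_t *)bitmapData;
    for (i = 0; i < width*height; i++)
    {
        components[3*i + 0] = 0xFFFFFF00;   // red
        components[3*i + 1] = 0xFFFFFF00;   // green
        components[3*i + 2] = 0xFFFFFF00;   // blue
    }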

Here is the code in its entirety, if it matters. This is in the drawRect:(NSRect)dirtyRect method of my custom view:

int width, height, bytesPerRow;
int i;

width = 256;
height = 256;
bytesPerRow = 3*width*4;    // 3 components per pixel, 4 bytes per component

char *bitmapData;
bitmapData = (char*)malloc(bytesPerRow*height);
for(i=0;i<width*height;i++)
{
    // 12 bytes per pixel; offsets 0, 4 and 8 are the red, green and blue components
    *((unsigned int *)(bitmapData + 12*i + 0)) = 0xFFFFFF00;
    *((unsigned int *)(bitmapData + 12*i + 4)) = 0xFFFFFF00;
    *((unsigned int *)(bitmapData + 12*i + 8)) = 0xFFFFFF00;
}
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,bitmapData,3*4*width*height,NULL);
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();

// 32 bits per component, 96 bits per pixel, no alpha
CGImageRef img = CGImageCreate(width, height, 32, 96, bytesPerRow, space, kCGImageAlphaNone, provider, NULL, NO, kCGRenderingIntentDefault);

CGColorSpaceRelease(space);
CGDataProviderRelease(provider);

CGContextRef theContext = [[NSGraphicsContext currentContext] graphicsPort];
CGContextDrawImage(theContext, CGRectMake(0,0,width,height), img);

Comments (1)

醉城メ夜风 2024-11-04 04:38:47


I see a few things worth pointing out:

First, the Quartz 2D Programming Guide doesn't list 96-bpp RGB as a supported format. You might try 128-bpp RGB.
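
For example, here is a minimal sketch of one 128-bpp layout the guide does list (32 bits per component, float components, premultiplied alpha last); the buffer is filled with floats in [0, 1], and I haven't verified it against your exact setup:

    size_t w = 256, h = 256;
    size_t bpr = w * 4 * sizeof(float);              // RGBA: 4 float components per pixel
    float *buf = (float *)malloc(bpr * h);
    for (size_t i = 0; i < w * h; i++) {
        buf[4*i + 0] = 1.0f;                         // R
        buf[4*i + 1] = 1.0f;                         // G
        buf[4*i + 2] = 1.0f;                         // B
        buf[4*i + 3] = 1.0f;                         // A (premultiplied)
    }
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buf, bpr * h, NULL);
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGImageRef img = CGImageCreate(w, h, 32, 128, bpr, space,
                                   kCGImageAlphaPremultipliedLast | kCGBitmapFloatComponents,
                                   provider, NULL, NO, kCGRenderingIntentDefault);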

Second, you're working on a little-endian system*, which means the LSB comes first. Change the value to which you set each component to 0x330000EE and you will see a light grey (EE), not a dark grey (33).
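
A quick standalone way to see the byte order (just a sketch):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint32_t v = 0x000000FF;
        const unsigned char *b = (const unsigned char *)&v;
        /* On a little-endian machine this prints "ff 00 00 00": the least
           significant byte sits first in memory, which is why a reader that
           only looks at the leading byte of each component sees 0xFF (white). */
        printf("%02x %02x %02x %02x\n", b[0], b[1], b[2], b[3]);
        return 0;
    }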

Most importantly, bbum is absolutely right when he points out that your display can't render that range of color**. It's getting squashed down to 8-bpc just for display. If it's correct in memory, then it's correct in memory.


*: More's the pity. R.I.P PPC.

**: Maybe NASA has one that can?
