How do I load an OpenGL texture from an ARGB NSImage without swizzling?

I'm writing an app for Mac OS >= 10.6 that creates OpenGL textures from images loaded from disk.

First, I load the image into an NSImage. Then I get the NSBitmapImageRep from the image and load the pixel data into a texture using glTexImage2D.

For RGB or RGBA images, it works perfectly. I can pass in either 3 bytes/pixel of RGB, or 4 bytes of RGBA, and create a 4-byte/pixel RGBA texture.

However, I just had a tester send me a JPEG image (shot on a Canon EOS 50D, not sure how it was imported) that seems to have ARGB byte ordering.

I found a post in this thread (http://www.cocoabuilder.com/archive/cocoa/12782-coregraphics-over-opengl.html) that suggests I specify a format parameter of GL_BGRA and a type of GL_UNSIGNED_INT_8_8_8_8_REV to glTexImage2D.

That seems logical, and seems like it should work, but it doesn't. I get different, but still wrong, color values.

I wrote "swizzling" (manual byte-swapping) code that shuffles the ARGB image data into a new RGBA buffer, but this byte-by-byte swizzling is going to be slow for large images.
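For reference, a minimal sketch of the kind of per-pixel repacking described here (the argb and rgba buffer names are placeholders, not from the original code):

/* Repack each 4-byte A,R,G,B pixel as R,G,B,A into a second buffer. */
for (size_t i = 0; i < (size_t)width * height; i++) {
    const unsigned char *src = argb + 4 * i;
    unsigned char *dst = rgba + 4 * i;
    dst[0] = src[1];  /* R */
    dst[1] = src[2];  /* G */
    dst[2] = src[3];  /* B */
    dst[3] = src[0];  /* A */
}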

I would also like to understand how to make this work "the right way".

What is the trick to loading ARGB data into an RGBA OpenGL texture?

My current call to glTexImage2D looks like this:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, format, GL_UNSIGNED_BYTE, pixelBuffer);

where format is either GL_RGB or GL_RGBA.

I tried using:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixelBuffer);

when my image rep reports that it is in "alpha first" order.
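One possible explanation for the wrong colors, offered as a hedged guess rather than from the original thread: the packed GL_UNSIGNED_INT_8_8_8_8 / _REV types read each pixel as a 32-bit integer, so which variant matches byte-order ARGB depends on host endianness. A sketch using Foundation's NSHostByteOrder(), with the variable names from the calls above:

/* In memory each pixel is the bytes A,R,G,B. On a little-endian host
 * that 32-bit int is B<<24|G<<16|R<<8|A, which GL_BGRA +
 * GL_UNSIGNED_INT_8_8_8_8 unpacks correctly; on a big-endian host the
 * _REV variant is the one that matches. */
GLenum type = (NSHostByteOrder() == NS_LittleEndian)
            ? GL_UNSIGNED_INT_8_8_8_8        /* Intel Macs */
            : GL_UNSIGNED_INT_8_8_8_8_REV;   /* PowerPC Macs */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0,
             GL_BGRA, type, pixelBuffer);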

As a second question, I've also read that most graphics cards' "native" format is GL_BGRA, so creating a texture in that format results in faster texture drawing. The speed of texture drawing is more important than the speed of loading the texture, so "swizzling" the data to BGRA format up front would be worth it. I tried asking OpenGL to create a BGRA texture by specifying an "internalformat" of GL_BGRA, but that results in a completely black image. My interpretation of the docs makes me expect that glTexImage2D would byte-swap the data as it reads it if the source and internal formats are different, but instead I get an OpenGL error 0x500 (GL_INVALID_ENUM) when I try to specify an "internalformat" of GL_BGRA. What am I missing?
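A note that may answer the GL_INVALID_ENUM part, stated with the caveat that it comes from the OpenGL spec rather than this thread: internalformat describes only how the GPU stores the texels (GL_RGBA, GL_RGBA8, ...), while GL_BGRA is legal only as the client-side format argument, which is why passing it as internalformat is rejected. The commonly cited fast upload path keeps internalformat as GL_RGBA and lets the driver reorder components during upload:

/* internalformat stays GL_RGBA; GL_BGRA only describes the source data. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, newWidth, newHeight, 0,
             GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, pixelBuffer);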


哆啦不做梦 2024-10-24 05:05:05

I'm not aware of a way to load ARGB data directly into the texture, but there is a better workaround than just doing the swizzle on the CPU. You can do it very efficiently on the GPU instead:

  1. Load the ARGB data into a temporary RGBA texture.
  2. Draw a full-screen quad with this texture while rendering into the target texture, using a simple pixel shader (a sketch of the framebuffer setup follows the shader below).
  3. Continue loading other resources; there is no need to stall the GPU pipeline.

Example pixel shader:

#version 130
uniform sampler2DRect unit_in;
void main() {
    /* The channels hold A,R,G,B, so .gbar reorders them to R,G,B,A. */
    gl_FragColor = texture( unit_in, gl_FragCoord.xy ).gbar;
}
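A minimal sketch of the render-to-texture step this answer assumes, using framebuffer objects; temp_texture, target_texture, and swizzle_program are hypothetical names, and the quad-drawing code is elided:

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, target_texture, 0);
glViewport(0, 0, width, height);
glUseProgram(swizzle_program);                 /* the shader above */
glBindTexture(GL_TEXTURE_RECTANGLE, temp_texture);
/* ... draw a full-screen quad here ... */
glBindFramebuffer(GL_FRAMEBUFFER, 0);          /* back to the default framebuffer */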
月下客 2024-10-24 05:05:05

You're rendering it with OpenGL, right?
If you want to do it the easy way, you can have your pixel shader swizzle the colors in real time. This is no problem at all for the graphics card; they're made to do far more complicated stuff :).

You can use a shader like this:

uniform sampler2D image;
void main()
{
    /* sampler2D takes normalized coordinates, so sample with the
       interpolated texture coordinate, not gl_FragCoord (window space). */
    gl_FragColor = texture2D(image, gl_TexCoord[0].xy).gbar;
}

If you don't know about shaders, read this tutorial: http://www.lighthouse3d.com/opengl/glsl/
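As a hedged illustration of how this shader might be hooked up (swizzle_program and argb_texture are placeholder names, not from the answer):

glUseProgram(swizzle_program);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, argb_texture);
/* Point the "image" sampler at texture unit 0, then draw as usual. */
glUniform1i(glGetUniformLocation(swizzle_program, "image"), 0);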

清醇 2024-10-24 05:05:05

This question is old, but in case anyone else is looking for this, I found a not strictly safe but effective solution. The problem is that each 32-bit RGBA value has A as the first byte rather than the last.

NSBitmapImageRep's bitmapData gives you a pointer to that first byte, which you give to OpenGL as the pointer to its pixels. Simply add 1 to that pointer and you point at the RGB values in the right order, with the A of the next pixel at the end.

The problems with this are that the last pixel will take its A value from one byte beyond the end of the image, and the A values are all one pixel off. But like the asker, I hit this while loading a JPG, so alpha is irrelevant anyway. It doesn't appear to cause a problem, but I wouldn't claim that it's 'safe'.
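A sketch of what that looks like in code, assuming bitmapRep is the NSBitmapImageRep holding 32-bit ARGB data (names are illustrative):

/* Reading from byte 1 makes each pixel look like R,G,B followed by the
 * next pixel's A; the last pixel's alpha reads one byte past the buffer. */
unsigned char *argbBytes = [bitmapRep bitmapData];
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, argbBytes + 1);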

日久见人心 2024-10-24 05:05:05

The name of a texture whose data is in ARGB format.

GLuint argb_texture;

An array of tokens to set ARGB swizzle in one function call.

static const GLenum argb_swizzle[] =
{
   GL_GREEN, GL_BLUE, GL_ALPHA, GL_RED
};

Bind the ARGB texture

glBindTexture(GL_TEXTURE_2D, argb_texture);

Set all four swizzle parameters in one call to glTexParameteriv

glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, argb_swizzle);

I know this works, but I am not sure the argb_swizzle order is right. Please correct me if it is not. I am not very clear on how GL_GREEN, GL_BLUE, GL_ALPHA, and GL_RED are determined in argb_swizzle.

As the OpenGL Programming Guide suggests:

...which is a mechanism that allows you to rearrange the component
order of texture data on the fly as it is read by the graphics
hardware.
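To hazard an answer to the ordering question above: the upload stored the file's A,R,G,B bytes into the texture's R,G,B,A channels, and GL_TEXTURE_SWIZZLE_RGBA lists, for the output components R,G,B,A in that order, which stored channel each one should read. (Note that this parameter needs OpenGL 3.3, or the ARB/EXT_texture_swizzle extensions.) The same array, annotated:

/* Stored channels actually contain: R<-A, G<-R, B<-G, A<-B. */
static const GLenum argb_swizzle[] =
{
   GL_GREEN,  /* output R reads the stored G channel, which holds R */
   GL_BLUE,   /* output G reads the stored B channel, which holds G */
   GL_ALPHA,  /* output B reads the stored A channel, which holds B */
   GL_RED     /* output A reads the stored R channel, which holds A */
};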
