Rendering YpCbCr iPhone 4 camera frames to an OpenGL ES 2.0 texture in iOS 4.3
I'm trying to render a native planar image to an OpenGL ES 2.0 texture in iOS 4.3 on an iPhone 4. The texture, however, winds up all black. My camera is configured like this:
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange]
forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
and I'm passing the pixel data to my texture like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_RGB_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, CVPixelBufferGetBaseAddress(cameraFrame));
My fragment shader is:
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;
void main() {
lowp vec4 color;
color = texture2D(videoFrame, textureCoordinate);
lowp vec3 convertedColor = vec3(-0.87075, 0.52975, -1.08175);
convertedColor += 1.164 * color.g; // Y
convertedColor += vec3(0.0, -0.391, 2.018) * color.b; // U
convertedColor += vec3(1.596, -0.813, 0.0) * color.r; // V
gl_FragColor = vec4(convertedColor, 1.0);
}
and my vertex shader is
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main()
{
gl_Position = position;
textureCoordinate = inputTextureCoordinate.xy;
}
This works just fine when I'm working with a BGRA image and my fragment shader only does
gl_FragColor = texture2D(videoFrame, textureCoordinate);
What, if anything, am I missing here? Thanks!
2 Answers
OK, we have a working result here. The key was passing the Y and the UV as two separate textures to the fragment shader. Here is the final shader:
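The original code block is not preserved here, so what follows is a minimal sketch of such a two-texture shader, reusing the conversion coefficients from the question's shader; the uniform names videoFrame and videoFrameUV are illustrative assumptions, and it assumes the Y plane was uploaded as GL_LUMINANCE and the interleaved CbCr plane as GL_LUMINANCE_ALPHA:

varying highp vec2 textureCoordinate;

uniform sampler2D videoFrame;   // Y plane, uploaded as GL_LUMINANCE
uniform sampler2D videoFrameUV; // interleaved CbCr plane, uploaded as GL_LUMINANCE_ALPHA

void main()
{
    mediump vec3 yuv;
    // Video-range Y sits in [16, 235]; rescale it toward [0, 1].
    yuv.x = 1.1643 * (texture2D(videoFrame, textureCoordinate).r - 0.0625);
    // In a GL_LUMINANCE_ALPHA texture, Cb lands in .r and Cr in .a; recenter both.
    yuv.y = texture2D(videoFrameUV, textureCoordinate).r - 0.5; // U (Cb)
    yuv.z = texture2D(videoFrameUV, textureCoordinate).a - 0.5; // V (Cr)

    // BT.601 YUV -> RGB, same coefficients as the single-texture shader above
    mediump vec3 rgb = mat3(1.0,    1.0,    1.0,
                            0.0,   -0.391,  2.018,
                            1.596, -0.813,  0.0) * yuv;

    gl_FragColor = vec4(rgb, 1.0);
}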
You'll need to create your textures along these lines:
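That snippet also didn't survive; a sketch of the setup, with luminanceTexture and chrominanceTexture as placeholder names, would look like this:

GLuint luminanceTexture, chrominanceTexture; // one texture object per plane

glGenTextures(1, &luminanceTexture);
glBindTexture(GL_TEXTURE_2D, luminanceTexture);
// Camera frames are non-power-of-two, so iOS requires clamp-to-edge and no mipmaps.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

glGenTextures(1, &chrominanceTexture);
glBindTexture(GL_TEXTURE_2D, chrominanceTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);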
and then pass them like this:
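Again a sketch rather than the author's exact code: upload each plane from its own base address and point each sampler uniform at its texture unit. videoFrameUniform and videoFrameUVUniform stand in for locations fetched earlier with glGetUniformLocation.

CVPixelBufferLockBaseAddress(cameraFrame, 0);

GLsizei width  = (GLsizei)CVPixelBufferGetWidthOfPlane(cameraFrame, 0);
GLsizei height = (GLsizei)CVPixelBufferGetHeightOfPlane(cameraFrame, 0);

// Plane 0: full-resolution Y, one byte per pixel.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, luminanceTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE,
             CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0));
glUniform1i(videoFrameUniform, 0);   // sampler "videoFrame" -> texture unit 0

// Plane 1: half-resolution interleaved CbCr, two bytes per sample pair.
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, chrominanceTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, width / 2, height / 2, 0,
             GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE,
             CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 1));
glUniform1i(videoFrameUVUniform, 1); // sampler "videoFrameUV" -> texture unit 1

CVPixelBufferUnlockBaseAddress(cameraFrame, 0);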
Boy am I relieved!
P.S. The values for the yuv2rgb matrix are from here: http://en.wikipedia.org/wiki/YUV, and I copied code from here: http://www.ogre3d.org/forums/viewtopic.php?f=5&t=25877 to figure out how to get the correct YUV values to begin with.
Your code appears to attempt to convert a 32-bit colour in 444-plus-unused-byte to RGBA. That's not going to work too well. I don't know of anything that outputs "YUVA", for one.
Also, I think the returned alpha channel is 0 for BGRA camera output, not 1, so I'm not sure why it works (IIRC to convert it to a CGImage you need to use AlphaNoneSkipLast).
The 420 "bi planar" output is structued something like this:
CVPixelBufferGetBaseAddressOfPlane()
and friends)bytes_per_row_1
is approximatelywidth
andbytes_per_row_2
is approximatelywidth/2
, but you'll want to use CVPixelBufferGetBytesPerRowOfPlane() for robustness (you also might want to check the results of ..GetHeightOfPlane and ...GetWidthOfPlane).You might have luck treating it as a 1-component width*height texture and a 2-component width/2*height/2 texture. You'll probably want to check bytes-per-row and handle the case where it isn't simply width*number-of-components (although this is probably true for most of the video modes). AIUI, you'll also want to flush the GL context before calling CVPixelBufferUnlockBaseAddress().
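Putting those pieces together as a sketch (none of this is from the original answer; pixelBuffer and the stride check are illustrative):

CVPixelBufferLockBaseAddress(pixelBuffer, 0);

// Plane 0 is Y, plane 1 is the interleaved CbCr plane.
uint8_t *yBase    = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
size_t   yWidth   = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0);
size_t   yHeight  = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0);
size_t   yStride  = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);

uint8_t *uvBase   = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1);
size_t   uvWidth  = CVPixelBufferGetWidthOfPlane(pixelBuffer, 1);
size_t   uvStride = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1);

if (yStride == yWidth && uvStride == uvWidth * 2) {
    // Rows are tightly packed: each plane can go straight to glTexImage2D.
} else {
    // Rows are padded: upload row by row, or copy into packed buffers first.
}

glFlush(); // let GL consume the data before the buffer's memory goes away
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);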
Alternatively, you can copy it all into memory in your expected format (optimizing this loop might be a bit tricky). Copying has the advantage that you don't need to worry about things accessing the memory after you've unlocked the pixel buffer.
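For the copy route, a minimal sketch of the per-plane loop (copyPlanePacked is a hypothetical helper, not anything from Core Video):

#include <stdint.h>
#include <string.h>

// Copy one plane into a tightly packed destination, dropping any row padding.
// For the CbCr plane, widthInBytes is width-of-plane * 2 (two bytes per sample pair).
static void copyPlanePacked(uint8_t *dst, const uint8_t *src,
                            size_t widthInBytes, size_t height, size_t srcStride)
{
    for (size_t row = 0; row < height; row++) {
        memcpy(dst + row * widthInBytes, src + row * srcStride, widthInBytes);
    }
}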