Compressing video from pixel buffers created with kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange
I have bi-planar raw video buffer data in YCbCr format, which I'm using as the source to compress a new mp4/mov video in H.264 format on iPhone and iPad. To do this, I create a new pixel buffer as a CVPixelBufferRef and then append it to the video writer through an AVAssetWriterInputPixelBufferAdaptor.
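For context, the writer/adaptor setup is roughly the following (a simplified sketch; the output settings and variable names such as videoWriter_ are illustrative, not my exact code):

// Simplified sketch of the writer / adaptor setup (illustrative names and settings).
NSDictionary *outputSettings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                  AVVideoWidthKey  : @(videoWidth_),
                                  AVVideoHeightKey : @(videoHeight_) };
AVAssetWriterInput *writerInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:outputSettings];

// Tell the adaptor which pixel format the source buffers will use.
NSDictionary *sourceAttributes = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange),
    (id)kCVPixelBufferWidthKey           : @(videoWidth_),
    (id)kCVPixelBufferHeightKey          : @(videoHeight_)
};
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor
        assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                   sourcePixelBufferAttributes:sourceAttributes];

[videoWriter_ addInput:writerInput];   // videoWriter_ is the AVAssetWriter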
However, when I call appendPixelBuffer with a new pixel buffer containing YCbCr data, appendPixelBuffer returns YES only for the first frame appended. All the other frames are rejected by the writer (it returns NO). The problem is that if I use BGRA32-format raw video data instead, everything works fine, so I wonder whether I'm creating the YCbCr pixel buffer incorrectly.
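The append itself is essentially this loop (again a simplified sketch; frameCount, the 30 fps timescale, and the error logging are illustrative):

// Simplified append loop; adaptor, videoWriter_ and pixelBuffer come from the setup above,
// frameCount is assumed to be an int64_t frame counter.
CMTime presentationTime = CMTimeMake(frameCount, 30);          // illustrative 30 fps timeline
if (adaptor.assetWriterInput.readyForMoreMediaData) {
    BOOL ok = [adaptor appendPixelBuffer:pixelBuffer
                    withPresentationTime:presentationTime];
    if (!ok) {
        NSLog(@"appendPixelBuffer failed at frame %lld: status %ld, error %@",
              frameCount, (long)videoWriter_.status, videoWriter_.error);
    }
}
CVPixelBufferRelease(pixelBuffer);
frameCount++;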
I have two methods to create the pixel buffer:
1) Use CVPixelBufferCreateWithBytes:
cvErr = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                     videoWidth_,
                                     videoHeight_,
                                     kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                     [videoFrame getBaseAddressOfPlane:0],   // base address of plane 0 (Y)
                                     [videoFrame getBytesPerRowOfPlane:0],   // bytes per row of plane 0
                                     NULL,                                   // release callback
                                     NULL,                                   // release refcon
                                     NULL,                                   // pixel buffer attributes
                                     &pixelBuffer);
2) Use CVPixelBufferCreateWithPlanarBytes:
cvErr = CVPixelBufferCreateWithPlanarBytes(kCFAllocatorDefault,
                                           videoWidth_,
                                           videoHeight_,
                                           kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,
                                           NULL,                    // dataPtr: no single contiguous block
                                           0,                       // dataSize
                                           videoFrame.planeCount,   // number of planes (2)
                                           planeBaseAddress,
                                           planeWidth,
                                           planeHeight,
                                           planeStride,             // bytes per row of each plane
                                           NULL,                    // release callback
                                           NULL,                    // release refcon
                                           NULL,                    // pixel buffer attributes
                                           &pixelBuffer);
planeBaseAddress, planeWidth, planeHeight, and planeStride are arrays holding the base address, width, height, and stride for each of the two planes (Y and CbCr).
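They are populated roughly like this (a sketch; the getBaseAddressOfPlane:/getBytesPerRowOfPlane: accessors stand in for however the frame object exposes its planes):

// Sketch of how the per-plane arrays are filled; accessor names are illustrative.
size_t planeCount = videoFrame.planeCount;                 // 2 for a bi-planar buffer
void  *planeBaseAddress[2];
size_t planeWidth[2], planeHeight[2], planeStride[2];

for (size_t i = 0; i < planeCount; i++) {
    planeBaseAddress[i] = [videoFrame getBaseAddressOfPlane:i];
    planeStride[i]      = [videoFrame getBytesPerRowOfPlane:i];
    planeWidth[i]       = (i == 0) ? videoWidth_  : videoWidth_  / 2;   // CbCr plane is half width
    planeHeight[i]      = (i == 0) ? videoHeight_ : videoHeight_ / 2;   // ...and half height
}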
So can you tell me where I'm going wrong, point me to some sample code I can refer to, or tell me whether this is an issue in the iPhone SDK?