Getting PIX_FMT_YUYV422 from libswscale



I'm trying to learn to use the different ffmpeg libs with Cocoa, and I'm trying to get frames to display with help of Core Video. It seems I have gotten the CV callbacks to work, and it gets frames which I try to put in a CVImageBufferRef that I later draw with Core Image.

The problem is I'm trying to get PIX_FMT_YUYV422 to work with libswscale, but as soon as I change the pixel format to anything other than PIX_FMT_YUV420P it crashes with EXC_BAD_ACCESS.

As long as I use YUV420P the program runs, although it doesn't display properly. I suspected that the pixel format isn't supported, so I wanted to try PIX_FMT_YUYV422.

I had it running before and successfully wrote PPM files with PIX_FMT_RGB24. For some reason it just crashes on me now, and I don't see what might be wrong.

I'm a bit in over my head here, but that is how I prefer to learn. :)

Here's how I allocate the AVFrames:

inFrame = avcodec_alloc_frame();
outFrame = avcodec_alloc_frame();

int frameBytes = avpicture_get_size(PIX_FMT_YUYV422, cdcCtx->width, cdcCtx->height);
uint8_t *frameBuffer = malloc(frameBytes);
avpicture_fill((AVPicture *)outFrame, frameBuffer, PIX_FMT_YUYV422, cdcCtx->width, cdcCtx->height);

Then I try to run it through swscale like so:

static struct SwsContext *convertContext;

if (convertContext == NULL) {
    int w = cdcCtx->width;
    int h = cdcCtx->height;
    convertContext = sws_getContext(w, h, cdcCtx->pix_fmt, outWidth, outHeight, PIX_FMT_YUYV422, SWS_BICUBIC, NULL, NULL, NULL);
    if (convertContext == NULL) {
        NSLog(@"Cannot initialize the conversion context!");
        return NO;
    }
}

sws_scale(convertContext, inFrame->data, inFrame->linesize, 0, outHeight, outFrame->data, outFrame->linesize);

And finally I try to write it to a pixel buffer for use with Core Image:

int ret = CVPixelBufferCreateWithBytes(0, outWidth, outHeight, kYUVSPixelFormat, outFrame->data[0], outFrame->linesize[0], 0, 0, 0, &currentFrame);

With 420P it runs, but it doesn't match up with the kYUVSPixelFormat for the pixel buffer, which as I understand it doesn't accept YUV420.
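
For reference, here is a small sketch of the format pairing I am assuming (I have not verified this): kYUVSPixelFormat ('yuvs') is packed 4:2:2 ordered Y'0 Cb Y'1 Cr, which should correspond to FFmpeg's packed PIX_FMT_YUYV422, so a single base address and a single bytes-per-row value should describe the whole image:

// My assumption of how the two format constants line up (untested):
// kYUVSPixelFormat ('yuvs') = packed 4:2:2, Y'0 Cb Y'1 Cr per pixel pair,
// which should match FFmpeg's packed PIX_FMT_YUYV422 (Y0 U Y1 V).
enum PixelFormat swsDstFormat = PIX_FMT_YUYV422;  // destination format for sws_getContext
OSType cvPixelFormat = kYUVSPixelFormat;          // pixel format passed to Core Video
size_t bytesPerRow = outWidth * 2;                // packed 4:2:2 is 2 bytes per pixel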

I would really appreciate any help, no matter how small, as it might help me struggle on. :)


Comments (2)

薄凉少年不暖心 2024-08-12 13:36:09

This certainly isn't a complete code sample, since you never decode anything into the input frame. If you were to do that, it looks correct.

You also don't need to fill the output picture, or even allocate an AVFrame for it, really.
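
As a rough illustration of that (my own sketch, reusing the asker's variable names; untested): sws_scale only needs destination pointers and line sizes, so for a packed format like PIX_FMT_YUYV422 a plain buffer is enough:

// Rough sketch (untested): skip the output AVFrame / avpicture_fill entirely
// and let sws_scale write into a plain buffer. Packed YUYV 4:2:2 has a single
// plane with 2 bytes per pixel.
uint8_t *yuyvBuffer = malloc(outWidth * outHeight * 2);
uint8_t *dstData[4] = { yuyvBuffer, NULL, NULL, NULL };
int dstLinesize[4] = { outWidth * 2, 0, 0, 0 };

// srcSliceH is the number of rows in the source frame being converted.
sws_scale(convertContext, inFrame->data, inFrame->linesize,
          0, cdcCtx->height, dstData, dstLinesize);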

别低头,皇冠会掉 2024-08-12 13:36:09

YUV420P is a planar format. Therefore, AVFrame.data[0] is not the whole story. I see a mistake in

int ret = CVPixelBufferCreateWithBytes(0, outWidth, outHeight, kYUVSPixelFormat, outFrame->data[0], outFrame->linesize[0], 0, 0, 0, &currentFrame);

For planar formats, you will have to read data blocks from AVFrame.data[0] up to AVFrame.data[3].
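
As a rough sketch of what that means in practice (my own illustration, not part of the original answer; the planar pixel format constant and plane sizes are assumptions): each of the three YUV420P planes has to be handed to Core Video, for example through CVPixelBufferCreateWithPlanes:

// Rough sketch (untested): pass all three YUV420P planes instead of only data[0].
// kCVPixelFormatType_420YpCbCr8Planar is assumed to be the matching planar constant.
void *planeAddress[3] = { outFrame->data[0], outFrame->data[1], outFrame->data[2] };
size_t planeWidth[3] = { outWidth, outWidth / 2, outWidth / 2 };
size_t planeHeight[3] = { outHeight, outHeight / 2, outHeight / 2 };
size_t planeBytesPerRow[3] = { outFrame->linesize[0], outFrame->linesize[1], outFrame->linesize[2] };

CVPixelBufferRef planarBuffer = NULL;
CVReturn err = CVPixelBufferCreateWithPlanes(kCFAllocatorDefault, outWidth, outHeight,
                                             kCVPixelFormatType_420YpCbCr8Planar,
                                             NULL, 0,      // no single contiguous backing block
                                             3, planeAddress,
                                             planeWidth, planeHeight, planeBytesPerRow,
                                             NULL, NULL,   // no release callback
                                             NULL, &planarBuffer);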
