Raw H264 frames in mpegts container using libavcodec
I would really appreciate some help with the following issue:
I have a gadget with a camera that produces H264-compressed video frames, which are sent to my application. These frames are not in a container, just raw data.
I want to use ffmpeg and libav functions to create a video file that can be used later.
If I decode the frames and then encode them, everything works fine and I get a valid video file. (The decode/encode steps are the usual libav calls, nothing fancy; I took them from the almighty internet and they are rock solid.) However, I waste a lot of time decoding and encoding, so I would like to skip this step and put the frames directly into the output stream. Now, the problems come.
Here is the code I came up with for the encoding:
AVFrame* picture = avcodec_alloc_frame(); // the frame must be allocated first;
                                          // an uninitialized pointer crashes here
avpicture_fill((AVPicture*) picture, (uint8_t*)frameData,
               codecContext->pix_fmt, codecContext->width,
               codecContext->height);

// videoOutBuf must be a real array for sizeof() to yield its capacity
int outSize = avcodec_encode_video(codecContext, videoOutBuf,
                                   sizeof(videoOutBuf), picture);
if (outSize > 0)
{
    AVPacket packet;
    av_init_packet(&packet);
    // rescale the codec's PTS into the stream's time base
    packet.pts = av_rescale_q(codecContext->coded_frame->pts,
                              codecContext->time_base, videoStream->time_base);
    if (codecContext->coded_frame->key_frame)
    {
        packet.flags |= PKT_FLAG_KEY; // AV_PKT_FLAG_KEY in newer ffmpeg
    }
    packet.stream_index = videoStream->index;
    packet.data = videoOutBuf;
    packet.size = outSize;
    av_interleaved_write_frame(context, &packet);
    put_flush_packet(context->pb);
}
Where the variables are as follows: frameData is the decoded frame data that came from the camera (it was decoded in a previous step), and videoOutBuf is a plain uint8_t buffer for holding the encoded output.
I have modified the application so that it does not decode the frames, but simply passes the data through, like this:
AVPacket packet;
av_init_packet(&packet);
packet.stream_index = videoStream->index;
packet.data = (uint8_t*)frameData;
packet.size = currentFrameSize;
av_interleaved_write_frame(context, &packet);
put_flush_packet(context->pb);
where frameData is the raw H264 frame, and currentFrameSize is the size of that frame, i.e. the number of bytes I get from the gadget for every frame.
And suddenly the application no longer works correctly; the produced video is unplayable. This is obvious, since I was not setting a correct PTS for the packet. What I did was the following (I'm desperate, you can see it from this approach :) )

packet.pts = timestamps[timestamp_counter++];

where timestamps is actually a list of PTS values produced by the working code above and written to a file (yes, you read that right: I logged all the PTSs of a 10-minute session and wanted to reuse them).
The application still does not work.
Now, here I am without any clue what to do, so here is the question:
I would like to create an "mpegts" video stream using libav functions, insert already-encoded video frames into the stream, and write a video file with it. How do I do that?
Thanks,
f.
Answers (2)
I believe if you set the following, you will see video playback.
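Judging by the edit below, the settings in question were presumably just zeroed timestamps, along these lines (a sketch, not the original snippet):

packet.pts = 0; // zero or AV_NOPTS_VALUE; other values did not work here
packet.dts = 0;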
You should really set packet.flags according to the H264 packet headers. You might try this fellow Stack Overflowian's suggestion for extracting that directly from the stream.
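For instance, if the gadget emits Annex B byte streams (00 00 01 start codes), the NAL unit type is the low five bits of the first byte after the start code, and an IDR slice (type 5) is what deserves the key flag. A minimal sketch, assuming Annex B framing:

// Sketch: does a raw Annex B H264 frame contain an IDR slice?
// nal_unit_type = low 5 bits of the byte after a 00 00 01 start code;
// 5 = IDR (key frame), 1 = non-IDR slice, 7/8 = SPS/PPS.
static bool isKeyFrame(const uint8_t* data, int size)
{
    for (int i = 0; i + 3 < size; ++i)
    {
        if (data[i] == 0 && data[i + 1] == 0 && data[i + 2] == 1)
        {
            int nalType = data[i + 3] & 0x1F;
            if (nalType == 5) return true;  // IDR slice
            if (nalType == 1) return false; // non-IDR slice
        }
    }
    return false;
}

Then, before writing each packet: if (isKeyFrame(frameData, currentFrameSize)) packet.flags |= AV_PKT_FLAG_KEY;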
If you are also adding audio, then pts/dts is going to be more important. I suggest you study this tutorial.
EDIT
I found time to extract what is working for me from my test app. For some reason, dts/pts values of zero work for me, but values other than 0 or AV_NOPTS_VALUE do not. I wonder if we have different versions of ffmpeg. I have the latest from git://git.videolan.org/ffmpeg.git.
fftest.cpp
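The attachment itself is not reproduced here; its gist, as described, would be roughly the following. This is a reconstruction against a newer libavformat API (the exact calls have changed a lot across ffmpeg versions), and out.ts plus the 640x480 dimensions are placeholders:

// fftest.cpp (sketch): mux already-encoded Annex B H264 frames into MPEG-TS.
// Error checks are omitted for brevity.
extern "C" {
#include <libavformat/avformat.h>
}

static void writeFrame(AVFormatContext* oc, AVStream* st,
                       uint8_t* data, int size, bool keyFrame)
{
    AVPacket pkt;
    av_init_packet(&pkt); // deprecated in recent ffmpeg, but simple
    pkt.stream_index = st->index;
    pkt.data = data;
    pkt.size = size;
    pkt.pts  = 0; // zero timestamps worked for me, as noted above
    pkt.dts  = 0;
    if (keyFrame)
        pkt.flags |= AV_PKT_FLAG_KEY;
    av_interleaved_write_frame(oc, &pkt);
}

int main()
{
    const char* outName = "out.ts"; // placeholder path

    AVFormatContext* oc = NULL;
    avformat_alloc_output_context2(&oc, NULL, "mpegts", outName);

    AVStream* st = avformat_new_stream(oc, NULL);
    st->codecpar->codec_type = AVMEDIA_TYPE_VIDEO;
    st->codecpar->codec_id   = AV_CODEC_ID_H264;
    st->codecpar->width      = 640; // placeholder dimensions
    st->codecpar->height     = 480;

    avio_open(&oc->pb, outName, AVIO_FLAG_WRITE);
    avformat_write_header(oc, NULL);

    // ... call writeFrame() for every raw frame coming from the device ...

    av_write_trailer(oc);
    avio_closep(&oc->pb);
    avformat_free_context(oc);
    return 0;
}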
You can create a process that calls ffmpeg from the console.
An example command line for processing files like 000001.jpg, 000002.jpg, 000003.jpg, ...
ffmpeg -i c:\frames\%06d.jpg -r 16 -vcodec mpeg4 -an -y c:\video\some_video.avi
Other examples can be found in the ffmpeg docs.
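For completeness, spawning that command from code can be as simple as std::system (a sketch; it assumes the ffmpeg binary is on the PATH):

// Sketch: shell out to the ffmpeg binary instead of linking libav*.
#include <cstdlib>

int main()
{
    return std::system(
        "ffmpeg -i c:\\frames\\%06d.jpg -r 16 "
        "-vcodec mpeg4 -an -y c:\\video\\some_video.avi");
}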