Raw H264 frames in mpegts container using libavcodec

Published 2024-11-06 14:59:50


I would really appreciate some help with the following issue:

I have a gadget with a camera producing H264-compressed video frames; these frames are being sent to my application. They are not in a container, just raw data.

I want to use ffmpeg and libav functions to create a video file, which can be used later.

If I decode the frames, then encode them, everything works fine, I get a valid video file. (the decode/encode steps are the usual libav commands, nothing fancy here, I took them from the almighty internet, they are rock solid)... However, I waste a lot of time by decoding and encoding, so I would like to skip this step and directly put the frames in the output stream. Now, the problems come.

Here is the code I came up with for producing the encoding:

AVFrame* picture = avcodec_alloc_frame(); // allocate the frame before filling it

avpicture_fill((AVPicture*) picture, (uint8_t*)frameData, 
                 codecContext->pix_fmt, codecContext->width,
                 codecContext->height);
int outSize = avcodec_encode_video(codecContext, videoOutBuf, 
                 sizeof(videoOutBuf), picture);
if (outSize > 0) 
{
    AVPacket packet;
    av_init_packet(&packet);
    packet.pts = av_rescale_q(codecContext->coded_frame->pts,
                  codecContext->time_base, videoStream->time_base);
    if (codecContext->coded_frame->key_frame) 
    {
        packet.flags |= PKT_FLAG_KEY;
    }
    packet.stream_index = videoStream->index;
    packet.data =  videoOutBuf;
    packet.size =  outSize;

    av_interleaved_write_frame(context, &packet);
    put_flush_packet(context->pb);
}

Where the variables are as follows:

frameData is the decoded frame data that came from the camera (it was decoded in a previous step), and videoOutBuf is a plain uint8_t buffer for holding the encoded output.

I have modified the application so that it does not decode the frames, but simply passes the data through, like this:

    AVPacket packet;
    av_init_packet(&packet);

    packet.stream_index = videoStream->index;
    packet.data = (uint8_t*)frameData;
    packet.size = currentFrameSize;

    av_interleaved_write_frame(context, &packet);
    put_flush_packet(context->pb);

where

frameData is the raw H264 frame, and currentFrameSize is the size of the raw H264 frame, i.e. the number of bytes I get from the gadget for every frame.

And suddenly the application is not working correctly anymore; the produced video is unplayable. This is obvious, since I was not setting a correct PTS for the packet. What I did was the following (I'm desperate, you can see it from this approach :) )

    packet.pts = timestamps[timestamp_counter ++];

where timestamps is actually a list of PTS's produced by the working code above, and written to a file (yes, you read it properly, I logged all the PTS's for a 10 minute session, and wanted to use them).

The application still does not work.

Now, here I am without any clue what to do, so here is the question:

I would like to create an "mpegts" video stream using libav functions, insert in the stream already encoded video frames and create a video file with it. How do I do it?

Thanks,
f.


Answers (2)

嘦怹 2024-11-13 14:59:50


I believe if you set the following, you will see video playback.

packet.flags |= AV_PKT_FLAG_KEY;
packet.pts = packet.dts = 0;

You should really set packet.flags according to the h264 packet headers. You might try this fellow Stack Overflow user's suggestion for extracting that directly from the stream.
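
For H.264 in Annex-B form, the key-frame decision usually comes down to the NAL unit type, which is the low five bits of the byte that follows the start code; type 5 is an IDR (key) slice. A minimal sketch of that check (not from the original answer; the function name is made up):

    // Hedged sketch: returns 1 if the Annex-B NAL unit starting at p is an IDR (key) frame.
    static int is_idr_nal( const uint8_t *p, int len )
    {
        if ( !p || len < 5 )
            return 0;
        int off = ( 0x01 == p[ 2 ] ) ? 3 : 4;    // 3- or 4-byte start code
        if ( len <= off )
            return 0;
        return 0x05 == ( p[ off ] & 0x1f );      // nal_unit_type 5 == IDR slice
    }

A packet would then get AV_PKT_FLAG_KEY only when this returns non-zero.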

If you are also adding audio, then pts/dts is going to be more important. I suggest you study this tutorial.
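
If pts/dts do need real values (for example once audio is added), one option is to derive them from a simple frame counter rescaled into the stream time base. This is only a hedged sketch using the variable names from the question; it assumes the camera delivers frames at a fixed rate and that codecContext->time_base was set to {1, fps} when the stream was created:

    static int64_t frame_index = 0;   // assumed counter, not in the original code

    packet.pts = av_rescale_q( frame_index++,
                               codecContext->time_base,    // assumed {1, fps}
                               videoStream->time_base );   // e.g. {1, 90000} for mpegts
    packet.dts = packet.pts;          // only valid when the stream has no B-frames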

EDIT

I found time to extract what is working for me from my test app. For some reason, dts/pts values of zero work for me, but values other than 0 or AV_NOPTS_VALUE do not. I wonder if we have different versions of ffmpeg. I have the latest from git://git.videolan.org/ffmpeg.git.

fftest.cpp

#include <cstdio>    // printf, sprintf, fopen, fread, fclose
#include <cstring>   // memset, strcpy
#include <string>

#ifndef INT64_C
#define INT64_C(c) (c ## LL)
#define UINT64_C(c) (c ## ULL)
#endif

//#define _M
#define _M printf( "%s(%d) : MARKER\n", __FILE__, __LINE__ )

extern "C"
{
    #include "libavcodec/avcodec.h"
    #include "libavformat/avformat.h"
};


AVFormatContext *fc = 0;
int vi = -1, waitkey = 1;

// < 0 = error
// 0 = I-Frame
// 1 = P-Frame
// 2 = B-Frame
// 3 = S-Frame
int getVopType( const void *p, int len )
{   
    if ( !p || 6 >= len )
        return -1;

    unsigned char *b = (unsigned char*)p;

    // Verify NAL marker
    if ( b[ 0 ] || b[ 1 ] || 0x01 != b[ 2 ] )
    {   b++;
        if ( b[ 0 ] || b[ 1 ] || 0x01 != b[ 2 ] )
            return -1;
    } // end if

    b += 3;
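    // At this point *b is the first byte after the start code:
    // 0x65 = IDR (key) slice, 0x61 / 0x01 = non-IDR slices (H.264),
    // 0xb6 = MPEG-4 Part 2 VOP start code.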

    // Verify VOP id
    if ( 0xb6 == *b )
    {   b++;
        return ( *b & 0xc0 ) >> 6;
    } // end if

    switch( *b )
    {   case 0x65 : return 0;
        case 0x61 : return 1;
        case 0x01 : return 2;
    } // end switch

    return -1;
}

void write_frame( const void* p, int len )
{
    if ( 0 > vi )
        return;

    AVStream *pst = fc->streams[ vi ];

    // Init packet
    AVPacket pkt;
    av_init_packet( &pkt );
    pkt.flags |= ( 0 >= getVopType( p, len ) ) ? AV_PKT_FLAG_KEY : 0;   
    pkt.stream_index = pst->index;
    pkt.data = (uint8_t*)p;
    pkt.size = len;

    // Wait for key frame
    if ( waitkey )
        if ( 0 == ( pkt.flags & AV_PKT_FLAG_KEY ) )
            return;
        else
            waitkey = 0;

    pkt.dts = AV_NOPTS_VALUE;
    pkt.pts = AV_NOPTS_VALUE;

//  av_write_frame( fc, &pkt );
    av_interleaved_write_frame( fc, &pkt );
}

void destroy()
{
    waitkey = 1;
    vi = -1;

    if ( !fc )
        return;

_M; av_write_trailer( fc );

    if ( fc->oformat && !( fc->oformat->flags & AVFMT_NOFILE ) && fc->pb )
        avio_close( fc->pb ); 

    // Free the stream
_M; av_free( fc );

    fc = 0;
_M; 
}

int get_nal_type( void *p, int len )
{
    if ( !p || 5 >= len )
        return -1;

    unsigned char *b = (unsigned char*)p;

    // Verify NAL marker
    if ( b[ 0 ] || b[ 1 ] || 0x01 != b[ 2 ] )
    {   b++;
        if ( b[ 0 ] || b[ 1 ] || 0x01 != b[ 2 ] )
            return -1;
    } // end if

    b += 3;

    return *b;
}

int create( void *p, int len )
{
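    // 0x67 is an SPS NAL unit: wait for the H.264 sequence header before creating the container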
    if ( 0x67 != get_nal_type( p, len ) )
        return -1;

    destroy();

    const char *file = "test.avi";
    CodecID codec_id = CODEC_ID_H264;
//  CodecID codec_id = CODEC_ID_MPEG4;
    int br = 1000000;
    int w = 480;
    int h = 354;
    int fps = 15;

    // Create container
_M; AVOutputFormat *of = av_guess_format( 0, file, 0 );
    fc = avformat_alloc_context();
    fc->oformat = of;
    strcpy( fc->filename, file );

    // Add video stream
_M; AVStream *pst = av_new_stream( fc, 0 );
    vi = pst->index;

    AVCodecContext *pcc = pst->codec;
_M; avcodec_get_context_defaults2( pcc, AVMEDIA_TYPE_VIDEO );
    pcc->codec_type = AVMEDIA_TYPE_VIDEO;

    pcc->codec_id = codec_id;
    pcc->bit_rate = br;
    pcc->width = w;
    pcc->height = h;
    pcc->time_base.num = 1;
    pcc->time_base.den = fps;

    // Init container
_M; av_set_parameters( fc, 0 );

    if ( !( fc->oformat->flags & AVFMT_NOFILE ) )
        avio_open( &fc->pb, fc->filename, URL_WRONLY );

_M; av_write_header( fc );

_M; return 1;
}

int main( int argc, char** argv )
{
    int f = 0, sz = 0;
    char fname[ 256 ] = { 0 };
    char buf[ 128 * 1024 ];

    av_log_set_level( AV_LOG_ERROR );
    av_register_all();

    do
    {
        // Raw frames in v0.raw, v1.raw, v2.raw, ...
//      sprintf( fname, "rawvideo/v%d.raw", f++ );
        sprintf( fname, "frames/frame%d.bin", f++ );
        printf( "%s\n", fname );

        FILE *fd = fopen( fname, "rb" );
        if ( !fd )
            sz = 0;
        else
        {
            sz = fread( buf, 1, sizeof( buf ) - FF_INPUT_BUFFER_PADDING_SIZE, fd );
            if ( 0 < sz )
            {
                memset( &buf[ sz ], 0, FF_INPUT_BUFFER_PADDING_SIZE );          

                if ( !fc )
                    create( buf, sz );

                if ( fc )
                    write_frame( buf, sz );

            } // end if

            fclose( fd );

        } // end else

    } while ( 0 < sz );

    destroy();
}
屋顶上的小猫咪 2024-11-13 14:59:50


You can create a process to call ffmpeg from the console.

Example of a command line for processing files like 000001.jpg, 000002.jpg, 000003.jpg, ...

ffmpeg -i c:\frames\%06d.jpg -r 16 -vcodec mpeg4 -an -y c:\video\some_video.avi

Other examples can be found in the ffmpeg docs.
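
For the raw H264 frames in the original question, a similar command line could remux the elementary stream into MPEG-TS without re-encoding, assuming the frames have been concatenated into a single Annex-B file (frames.h264 is a hypothetical name):

ffmpeg -f h264 -i frames.h264 -vcodec copy -f mpegts out.ts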
