Encoding video on Android with ffmpeg from javacv crashes in native code

Posted 2024-12-21 10:48:19


NOTE: I have updated this since originally asking the question to reflect some of what I have learned about loading live camera images into the ffmpeg libraries.

I am using ffmpeg from javacv compiled for Android to encode/decode video for my application. (Note that originally, I was trying to use ffmpeg-java, but it has some incompatible libraries)

Original problem: The problem that I've run into is that I am currently getting each frame as a Bitmap (just a plain android.graphics.Bitmap) and I can't figure out how to stuff that into the encoder.

Solution in javacv's ffmpeg: Use avpicture_fill(), the format from Android is supposedly YUV420P, though I can't verify this until my encoder issues (below) are fixed.

avcodec.avpicture_fill((AVPicture)mFrame, picPointer, avutil.PIX_FMT_YUV420P, VIDEO_WIDTH, VIDEO_HEIGHT)

Problem Now: The line that is supposed to actually encode the data crashes the thread. I get a big native code stack trace that I'm unable to understand. Does anybody have a suggestion?
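One thing worth flagging about the "supposedly YUV420P" assumption above: the default Android camera preview format is actually NV21, a semi-planar 4:2:0 layout with the Y plane followed by interleaved V/U bytes, not the fully planar YUV420P that the encoder expects. The preview buffer therefore generally needs repacking before `avpicture_fill()` will interpret it correctly. A minimal sketch of that repacking (the class and method names here are illustrative, not from the question):

```java
// Hypothetical helper: repack an NV21 preview buffer (Y plane followed by
// interleaved V,U bytes) into planar I420/YUV420P (Y plane, U plane, V plane).
public class Nv21ToI420 {
    public static byte[] convert(byte[] nv21, int width, int height) {
        int ySize = width * height;
        int chromaSize = ySize / 4;
        byte[] i420 = new byte[ySize + 2 * chromaSize];
        // The Y plane is identical in both layouts.
        System.arraycopy(nv21, 0, i420, 0, ySize);
        // NV21 stores chroma as V,U,V,U,...; split into separate U and V planes.
        for (int i = 0; i < chromaSize; i++) {
            i420[ySize + i] = nv21[ySize + 2 * i + 1];          // U
            i420[ySize + chromaSize + i] = nv21[ySize + 2 * i]; // V
        }
        return i420;
    }
}
```

The resulting array can then be wrapped in a `BytePointer` and handed to `avpicture_fill()` as planar YUV420P.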

Here is the code that I am using to instantiate all the ffmpeg libraries:

    avcodec.avcodec_register_all();
    avcodec.avcodec_init();
    avformat.av_register_all();

    mCodec = avcodec.avcodec_find_encoder(avcodec.CODEC_ID_H263);
    if (mCodec == null)
    {
        Logging.Log("Unable to find encoder.");
        return;
    }
    Logging.Log("Found encoder.");

    mCodecCtx = avcodec.avcodec_alloc_context();
    mCodecCtx.bit_rate(300000);
    mCodecCtx.codec(mCodec);
    mCodecCtx.width(VIDEO_WIDTH);
    mCodecCtx.height(VIDEO_HEIGHT);
    mCodecCtx.pix_fmt(avutil.PIX_FMT_YUV420P);
    mCodecCtx.codec_id(avcodec.CODEC_ID_H263);
    mCodecCtx.codec_type(avutil.AVMEDIA_TYPE_VIDEO);
    AVRational ratio = new AVRational();
    ratio.num(1);
    ratio.den(30);
    mCodecCtx.time_base(ratio);
    mCodecCtx.coder_type(1);
    mCodecCtx.flags(mCodecCtx.flags() | avcodec.CODEC_FLAG_LOOP_FILTER);
    mCodecCtx.me_cmp(avcodec.FF_LOSS_CHROMA);
    mCodecCtx.me_method(avcodec.ME_HEX);
    mCodecCtx.me_subpel_quality(6);
    mCodecCtx.me_range(16);
    mCodecCtx.gop_size(30);
    mCodecCtx.keyint_min(10);
    mCodecCtx.scenechange_threshold(40);
    mCodecCtx.i_quant_factor((float) 0.71);
    mCodecCtx.b_frame_strategy(1);
    mCodecCtx.qcompress((float) 0.6);
    mCodecCtx.qmin(10);
    mCodecCtx.qmax(51);
    mCodecCtx.max_qdiff(4);
    mCodecCtx.max_b_frames(1);
    mCodecCtx.refs(2);
    mCodecCtx.directpred(3);
    mCodecCtx.trellis(1);
    mCodecCtx.flags2(mCodecCtx.flags2() | avcodec.CODEC_FLAG2_BPYRAMID | avcodec.CODEC_FLAG2_WPRED | avcodec.CODEC_FLAG2_8X8DCT | avcodec.CODEC_FLAG2_FASTPSKIP);

    if (avcodec.avcodec_open(mCodecCtx, mCodec) < 0)
    {
        Logging.Log("Unable to open encoder.");
        return;
    }
    Logging.Log("Encoder opened.");

    mFrameSize = avcodec.avpicture_get_size(avutil.PIX_FMT_YUV420P, VIDEO_WIDTH, VIDEO_HEIGHT);
    Logging.Log("Frame size - '" + mFrameSize + "'.");
    //mPic = new AVPicture(mPicSize);
    mFrame = avcodec.avcodec_alloc_frame();
    if (mFrame == null)
    {
        Logging.Log("Unable to alloc frame.");
    }
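
For reference, `avpicture_get_size()` for YUV420P is width * height * 3 / 2: a full-resolution Y plane plus two chroma planes subsampled 2x2. With 352x288 the frame-size log above should therefore print 152064. The arithmetic as a quick check (helper name is mine):

```java
// Size of one planar YUV420P frame: full-resolution luma plane plus
// two quarter-resolution chroma planes.
public class Yuv420Size {
    public static int frameSize(int width, int height) {
        int ySize = width * height;    // luma plane
        int chromaSize = ySize / 4;    // each chroma plane is subsampled 2x2
        return ySize + 2 * chromaSize; // Y + U + V
    }
}
```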

This is what I want to be able to execute next:

    BytePointer picPointer = new BytePointer(data);
    int bBuffSize = mFrameSize;

    BytePointer bBuffer = new BytePointer(bBuffSize);

    int picSize = 0;
    if ((picSize = avcodec.avpicture_fill((AVPicture)mFrame, picPointer, avutil.PIX_FMT_YUV420P, VIDEO_WIDTH, VIDEO_HEIGHT)) <= 0)
    {
        Logging.Log("Couldn't convert preview to AVPicture (" + picSize + ")");
        return;
    }
    Logging.Log("Converted preview to AVPicture (" + picSize + ")");

    VCAP_Package vPackage = new VCAP_Package();

    if (mCodecCtx.isNull())
    {
        Logging.Log("Codec Context is null!");
    }

    //encode the image
    int size = avcodec.avcodec_encode_video(mCodecCtx, bBuffer, bBuffSize, mFrame);

    int totalSize = 0;
    while (size > 0)
    {
        totalSize += size;
        Logging.Log("Encoded '" + size + "' bytes.");
        //Get any delayed frames
        size = avcodec.avcodec_encode_video(mCodecCtx, bBuffer, bBuffSize, null); 
    }
    Logging.Log("Finished encoding. (" + totalSize + ")");

But, as of now, I don't know how to put the Bitmap data into the right place, or whether I have this set up correctly.

A few notes about the code:
- VIDEO_WIDTH = 352
- VIDEO_HEIGHT = 288
- VIDEO_FPS = 30;
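
One constraint worth noting about those constants: plain H.263 (unlike H.263+) only encodes a fixed set of standard picture sizes, so 352x288 (CIF) happens to be a safe choice, while an arbitrary resolution would make opening the encoder fail. A small check of that rule (class and method names are mine):

```java
// Baseline H.263 only accepts the standard picture sizes below;
// 352x288 (CIF) is one of them.
public class H263Sizes {
    private static final int[][] LEGAL = {
        {128, 96},    // SQCIF
        {176, 144},   // QCIF
        {352, 288},   // CIF
        {704, 576},   // 4CIF
        {1408, 1152}, // 16CIF
    };

    public static boolean isLegal(int width, int height) {
        for (int[] wh : LEGAL) {
            if (wh[0] == width && wh[1] == height) return true;
        }
        return false;
    }
}
```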


猫烠⑼条掵仅有一顆心 2024-12-28 10:48:20


After a lot of searching, I figured out that you have to load the pointers in a fairly strict and awkward manner. This is how I got everything working:

Codec setup:

    avcodec.avcodec_register_all();
    avcodec.avcodec_init();
    avformat.av_register_all();

    /* find the H263 video encoder */
    mCodec = avcodec.avcodec_find_encoder(avcodec.CODEC_ID_H263);
    if (mCodec == null) {
        Log.d("TEST_VIDEO", "avcodec_find_encoder() run fail.");
    }

    mCodecCtx = avcodec.avcodec_alloc_context();
    picture = avcodec.avcodec_alloc_frame();

    /* put sample parameters */
    mCodecCtx.bit_rate(400000);
    /* resolution must be a multiple of two */
    mCodecCtx.width(VIDEO_WIDTH);
    mCodecCtx.height(VIDEO_HEIGHT);
    /* frames per second */
    AVRational avFPS = new AVRational();
    avFPS.num(1);
    avFPS.den(VIDEO_FPS);
    mCodecCtx.time_base(avFPS);
    mCodecCtx.pix_fmt(avutil.PIX_FMT_YUV420P);
    mCodecCtx.codec_id(avcodec.CODEC_ID_H263);
    mCodecCtx.codec_type(avutil.AVMEDIA_TYPE_VIDEO);

    /* open it */
    if (avcodec.avcodec_open(mCodecCtx, mCodec) < 0) {
        Log.d("TEST_VIDEO", "avcodec_open() run fail.");
    }

    /* alloc image and output buffer */
    output_buffer_size = 100000;
    output_buffer = avutil.av_malloc(output_buffer_size);

    size = mCodecCtx.width() * mCodecCtx.height();
    picture_buffer = avutil.av_malloc((size * 3) / 2); /* size for YUV 420 */

    picture.data(0, new BytePointer(picture_buffer));
    picture.data(1, picture.data(0).position(size));
    picture.data(2, picture.data(1).position(size / 4));
    picture.linesize(0, mCodecCtx.width());
    picture.linesize(1, mCodecCtx.width() / 2);
    picture.linesize(2, mCodecCtx.width() / 2);
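
The `position()` calls above are the "strict and awkward" part: all three planes live in the single buffer from `av_malloc()`, and `data(1)` and `data(2)` are just the base pointer advanced to fixed offsets. The layout those calls produce, written out as plain arithmetic (helper names are mine):

```java
// Plane layout inside the single (w*h*3/2)-byte I420 buffer:
// Y starts at 0, U at w*h, V at w*h + w*h/4, with line strides of
// w for luma and w/2 for each chroma plane.
public class I420Layout {
    public static int[] planeOffsets(int width, int height) {
        int ySize = width * height;
        return new int[]{0, ySize, ySize + ySize / 4};
    }

    public static int[] lineSizes(int width) {
        return new int[]{width, width / 2, width / 2};
    }
}
```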

Handling the preview data:

    //(1)Convert byte[] first
    byte[] data420 = new byte[data.length];
    convert_yuv422_to_yuv420(data, data420, VIDEO_WIDTH, VIDEO_HEIGHT);

    //(2) Fill picture buffer
    int data1_offset = VIDEO_HEIGHT * VIDEO_WIDTH;
    int data2_offset = data1_offset * 5 / 4;
    int pic_linesize_0 = picture.linesize(0);
    int pic_linesize_1 = picture.linesize(1);
    int pic_linesize_2 = picture.linesize(2);

    //Y plane
    for (int y = 0; y < VIDEO_HEIGHT; y++)
    {
        for (int x = 0; x < VIDEO_WIDTH; x++)
        {
            picture.data(0).put((y * pic_linesize_0 + x), data420[y * VIDEO_WIDTH + x]);
        }
    }

    //Cb and Cr planes
    for (int y = 0; y < VIDEO_HEIGHT / 2; y++) {
        for (int x = 0; x < VIDEO_WIDTH / 2; x++) {
            picture.data(1).put((y * pic_linesize_1 + x), data420[data1_offset + y * VIDEO_WIDTH / 2 + x]);
            picture.data(2).put((y * pic_linesize_2 + x), data420[data2_offset + y * VIDEO_WIDTH / 2 + x]);
        }
    }

    //(3) Encode the image into output_buffer
    out_size = avcodec.avcodec_encode_video(mCodecCtx, new BytePointer(output_buffer), output_buffer_size, picture);
    Log.d("TEST_VIDEO", "Encoded '" + out_size + "' bytes");

    //Drain any delayed frames by passing a null picture
    while (out_size > 0) {
        out_size = avcodec.avcodec_encode_video(mCodecCtx, new BytePointer(output_buffer), output_buffer_size, null);
        Log.d("TEST_VIDEO", "Encoded '" + out_size + "' bytes");
        //fwrite(output_buffer, 1, out_size, file);
    }
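
The `convert_yuv422_to_yuv420` helper called at the top isn't shown in the answer. A plausible sketch, assuming the source is packed YUYV 4:2:2 (Y0, U, Y1, V per pixel pair) and downsampling chroma vertically by keeping only even rows, would look like this (treat it as a sketch, not the answerer's actual code):

```java
// Hypothetical implementation of convert_yuv422_to_yuv420: packed YUYV
// 4:2:2 in, planar I420 out. Chroma is halved vertically by dropping
// the U/V samples of every odd row.
public class Yuv422To420 {
    public static void convert(byte[] yuyv, byte[] i420, int width, int height) {
        int ySize = width * height;
        int uOff = ySize;
        int vOff = ySize + ySize / 4;
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x += 2) {
                int src = (y * width + x) * 2;            // 2 bytes per pixel
                i420[y * width + x] = yuyv[src];          // Y0
                i420[y * width + x + 1] = yuyv[src + 2];  // Y1
                if ((y & 1) == 0) {                       // keep chroma of even rows
                    int c = (y / 2) * (width / 2) + x / 2;
                    i420[uOff + c] = yuyv[src + 1];       // U
                    i420[vOff + c] = yuyv[src + 3];       // V
                }
            }
        }
    }
}
```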

I am still working to packetize the data, but the ongoing test project can be found here @ http://code.google.com/p/test-video-encode/

难如初 2024-12-28 10:48:20

Does the Android graphics library support the YUV format?

codecCtx.pix_fmt = AVCodecLibrary.PIX_FMT_YUV420P;

See if you can set it to ARGB or RGB32. I know the android graphics library supports this pixel format.

PS: I don't know anything about ffmpeg
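
Following up on the ARGB suggestion: the stock H.263 encoder only accepts YUV420P input, so even if the frames start out as ARGB they end up being converted, either with ffmpeg's own `sws_scale` or by hand. For the by-hand route, the standard BT.601 fixed-point luma formula is the usual starting point (only the Y plane is shown; U and V follow the same pattern with their own coefficients, and the class name is illustrative):

```java
// BT.601 full-swing RGB -> limited-range luma, standard fixed-point form.
// Black maps to 16 and white to 235 in the limited range.
public class RgbToLuma {
    public static int luma(int r, int g, int b) {
        return ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16;
    }
}
```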
