libav* does not decode correctly
I am using libav to save frames from a video.
The problem is that if I call the decode function more than once, the second call is not handled correctly.
Output of the first call (everything works fine):
[swscaler @ 0x8b48510]No accelerated colorspace conversion found from yuv420p to bgra.
good
Output of the second call (it cannot find the stream parameters, even though the data is identical):
[mp3 @ 0x8ae5800]Header missing
Last message repeated 223 times
[mp3 @ 0x8af31c0]Could not find codec parameters (Audio: mp1, 0 channels, s16)
[mp3 @ 0x8af31c0]Estimating duration from bitrate, this may be inaccurate
av_find_stream_info
Can you please tell me where the error is?
main.cpp
avcodec_init();
avcodec_register_all();
av_register_all();
char *data;
int size;
//fill data and size
...
decode(data, size);
decode(data, size);
video.cpp
int f_offset = 0;
int f_length = 0;
char *f_data = 0;
int64_t seekp(void *opaque, int64_t offset, int whence)
{
switch (whence)
{
case SEEK_SET:
if (offset > f_length || offset < 0)
return -1;
f_offset = offset;
return f_offset;
case SEEK_CUR:
if (f_offset + offset > f_length || f_offset + offset < 0)
return -1;
f_offset += offset;
return f_offset;
case SEEK_END:
if (offset > 0 || f_length + offset < 0)
return -1;
f_offset = f_length + offset;
return f_offset;
case AVSEEK_SIZE:
return f_length;
}
return -1;
}
int readp(void *opaque, uint8_t *buf, int buf_size)
{
if (f_offset == f_length)
return 0;
int length = buf_size <= (f_length - f_offset) ? buf_size : (f_length - f_offset);
memcpy(buf, f_data + f_offset, length);
f_offset += length;
return length;
}
bool decode(char *data, int length)
{
f_offset = 0;
f_length = length;
f_data = data;
int buffer_read_size = FF_MIN_BUFFER_SIZE;
uchar *buffer_read = (uchar *) av_mallocz(buffer_read_size + FF_INPUT_BUFFER_PADDING_SIZE);
AVProbeData pd;
pd.filename = "";
pd.buf_size = 4096 < f_length ? 4096 : f_length;
pd.buf = (uchar *) av_mallocz(pd.buf_size + AVPROBE_PADDING_SIZE);
memcpy(pd.buf, f_data, pd.buf_size);
AVInputFormat *pAVInputFormat = av_probe_input_format(&pd, 1);
if (pAVInputFormat == NULL)
{
std::cerr << "AVIF";
return false;
}
pAVInputFormat->flags |= AVFMT_NOFILE;
ByteIOContext ByteIOCtx;
if (init_put_byte(&ByteIOCtx, buffer_read, buffer_read_size, 0, NULL, readp, NULL, seekp) < 0)
{
std::cerr << "init_put_byte";
return false;
}
AVFormatContext *pFormatCtx;
if (av_open_input_stream(&pFormatCtx, &ByteIOCtx, "", pAVInputFormat, NULL) < 0)
{
std::cerr << "av_open_stream";
return false;
}
if (av_find_stream_info(pFormatCtx) < 0)
{
std::cerr << "av_find_stream_info";
return false;
}
int video_stream;
video_stream = -1;
for (uint i = 0; i < pFormatCtx->nb_streams; ++i)
if (pFormatCtx->streams[i]->codec->codec_type == CODEC_TYPE_VIDEO)
{
video_stream = i;
break;
}
if (video_stream == -1)
{
std::cerr << "video_stream == -1";
return false;
}
AVCodecContext *pCodecCtx;
pCodecCtx = pFormatCtx->streams[video_stream]->codec;
AVCodec *pCodec;
pCodec = avcodec_find_decoder(pCodecCtx->codec_id);
if (pCodec == NULL)
{
std::cerr << "pCodec == NULL";
return false;
}
if (avcodec_open(pCodecCtx, pCodec) < 0)
{
std::cerr << "avcodec_open";
return false;
}
AVFrame *pFrame;
pFrame = avcodec_alloc_frame();
if (pFrame == NULL)
{
std::cerr << "pFrame == NULL";
return false;
}
AVFrame *pFrameRGB;
pFrameRGB = avcodec_alloc_frame();
if (pFrameRGB == NULL)
{
std::cerr << "pFrameRGB == NULL";
return false;
}
int numBytes;
numBytes = avpicture_get_size(PIX_FMT_RGB32, pCodecCtx->width, pCodecCtx->height);
uint8_t *buffer;
buffer = (uint8_t *) av_malloc(numBytes * sizeof(uint8_t));
if (buffer == NULL)
{
std::cerr << "buffer == NULL";
return false;
}
// Assign appropriate parts of buffer to image planes in pFrameRGB
// Note that pFrameRGB is an AVFrame, but AVFrame is a superset
// of AVPicture
avpicture_fill((AVPicture *) pFrameRGB, buffer, PIX_FMT_RGB32, pCodecCtx->width, pCodecCtx->height);
SwsContext *swsctx;
swsctx = sws_getContext(
pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
pCodecCtx->width, pCodecCtx->height, PIX_FMT_RGB32,
SWS_BILINEAR, NULL, NULL, NULL);
if (swsctx == NULL)
{
std::cerr << "swsctx == NULL";
return false;
}
AVPacket packet;
while (av_read_frame(pFormatCtx, &packet) >= 0)
{
if (packet.stream_index == video_stream)
{
int frame_finished;
avcodec_decode_video2(pCodecCtx, pFrame, &frame_finished, &packet);
if (frame_finished)
{
sws_scale(swsctx, pFrame->data, pFrame->linesize, 0, pCodecCtx->height, pFrameRGB->data, pFrameRGB->linesize);
std::cerr << "good";
av_close_input_stream(pFormatCtx);
return true;
}
else
std::cerr << "frame_finished == 0";
}
}
std::cerr << "av_read_frame < 0";
return false;
}
ffmpeg -version
FFmpeg 0.6.2-4:0.6.2-1ubuntu1
libavutil 50.15. 1 / 50.15. 1
libavcodec 52.72. 2 / 52.72. 2
libavformat 52.64. 2 / 52.64. 2
libavdevice 52. 2. 0 / 52. 2. 0
libavfilter 1.19. 0 / 1.19. 0
libswscale 0.11. 0 / 0.11. 0
libpostproc 51. 2. 0 / 51. 2. 0
You have probably read some libav tutorial and simply copy/pasted almost all of its code into your decode() function. That is really wrong; look at your source. Every time you want to decode a frame, audio or video, you open the input context, initialize the codecs, and so on, and you never close or free any of it. Keep in mind that even if you opened, initialized, closed, and freed everything correctly, you would still get the same frame on every call to decode(), because this approach seeks the file position back to the beginning of the file on each call.
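Concretely, using the variable names from the question's decode(), the missing teardown might look like the following sketch (libav 0.6-era API; a fragment for illustration, not compilable on its own, and the error paths need the same cleanup):

```cpp
// Fragment: release everything decode() acquires, in reverse order of
// acquisition (variable names taken from the question's code).
av_free_packet(&packet);            // after every av_read_frame()
sws_freeContext(swsctx);            // scaler context from sws_getContext()
av_free(buffer);                    // RGB buffer from av_malloc()
av_free(pFrameRGB);                 // frames from avcodec_alloc_frame()
av_free(pFrame);
avcodec_close(pCodecCtx);           // codec opened with avcodec_open()
av_close_input_file(pFormatCtx);    // format context and its ByteIOContext
av_free(pd.buf);                    // probe buffer from av_mallocz()
// buffer_read from av_mallocz() also needs freeing, unless the close
// call above takes ownership of it.
```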
Moreover, you call av_close_input_stream() instead of av_close_input_file(); you forgot to close the codec with avcodec_close(), to free the allocated picture with avpicture_free(), to free the allocated frames with av_free(), and to free the packets you read with av_free_packet(). In addition, your seekp() and readp() functions may be wrong too.
One more piece of advice: sws_getContext() is now deprecated, and you should use sws_getCachedContext() instead. As its name suggests, it caches and reuses the context, so in your case (multiple calls to sws_getContext(), which is still wrong) it will also work faster.
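A fragment showing the suggested replacement, assuming the same variables as the question's code:

```cpp
// Reuse a cached scaler instead of creating a new one on each call;
// sws_getCachedContext() returns the previous context unchanged when
// the parameters are the same, and recreates it otherwise.
static SwsContext *swsctx = NULL;
swsctx = sws_getCachedContext(swsctx,
    pCodecCtx->width, pCodecCtx->height, pCodecCtx->pix_fmt,
    pCodecCtx->width, pCodecCtx->height, PIX_FMT_RGB32,
    SWS_BILINEAR, NULL, NULL, NULL);
```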
Please read some tutorials on libav again. They all seem to be out of date, but you can simply replace any deprecated or removed functions with the new ones, which you can find in the official libav doxygen documentation. Here are some links:
http://www.inb.uni-luebeck.de/~boehme/using_libavcodec.html
http://dranger.com/ffmpeg/ffmpeg.html
You will find up-to-date examples in the official libav API documentation:
http://libav.org/doxygen/master/examples.html
They explain the most common use cases.