I'm trying to parse H.264 frames from a .mov file. I've come to the conclusion that mov.c from the AVFormat part of FFmpeg is the way to go, but mov.c is ~2600 lines of mostly uncommented code. I'm looking for examples of FFmpeg usage, especially for parsing the structure of any file type. It doesn't matter whether it's MPEG-4 or QuickTime Movie, since they are quite similar in structure.
If there are no existing examples (I can't find any), maybe someone has used it and can give me a couple of lines of code, or explain how to get started?
What I'm trying to do:
I use AVCaptureSession to capture samples from the video camera. These samples are then encoded to H.264 and written to file with the help of AVAssetWriter, AVAssetWriterInput and AVAssetWriterInputPixelBufferAdaptor. The reason is that I can't access the hardware H.264 encoder directly, since Apple won't allow that. What I now need to do (I think, not sure) is parse out:
The "mdat" atom (movie data; there might be more than one, I think) from the .mov file.
Then the "vide" atom, and then the data within the vide atom (video data samples; there may be more than one). I think there will be several atoms, which I believe are the frames. These will be of type "avc1" (that's the type for H.264). Please correct me on this, because I'm quite sure I haven't gotten all of it right yet.
My question, then, is how I would go about parsing out the single frames. I've been reading the documentation and looked at iFrameExtractor (which is not very helpful, since it decodes the frames). I think I've understood correctly that I'm supposed to use mov.c from FFmpeg's AVFormat, but I'm not sure.
Edit:
I'm now trying it like this:
- I run the slightly reduced init function from iFrameExtractor, which finds the video stream in the .mov file.
- I get the data for the frame like this:
AVPacket packet;
av_read_frame(pFormatCtx, &packet);
NSData *frame = nil;  // must be initialized: if the packet is not from the
                      // video stream, returning an uninitialized pointer
                      // causes EXC_BAD_ACCESS later
if (packet.stream_index == videoStream) {
    frame = [NSData dataWithBytes:packet.data length:packet.size];
}
// note: don't increment videoStream here; it is the index of the video
// stream in the file, not a frame counter
av_free_packet(&packet);
return frame;
I then pass it to a subclass of NSOperation, where it is retained while waiting for upload.
But I receive EXC_BAD_ACCESS. Am I doing something wrong when copying the data from the frame? Any ideas? I get the EXC_BAD_ACCESS when I try to set the class variable NSData *frame
using its (nonatomic, retain) property. (It points at the @synthesize line.)
I use the following to parse each frame from the .mov file.
Watch out, though: av_read_frame does not validate the frames; that is done in the decoding step. This means the "frames" returned might contain extra information that is not part of the actual frame.
To init the AVFormatContext *pFormatCtx and AVCodecContext *pCodecCtx I use this code (which I believe is derived from Martin Böhme's example code):
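The init code itself isn't reproduced above, but a sketch of the kind of setup Böhme's tutorial and iFrameExtractor use looks roughly like this (using the FFmpeg API names of that era; newer FFmpeg has renamed several of these functions, and the helper name here is just a placeholder):

```c
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

/* Open a movie file, find its video stream and open a decoder context.
 * Returns 0 on success, -1 on failure. Old (circa-2011) FFmpeg API. */
static int init_contexts(const char *path,
                         AVFormatContext **pFormatCtx,
                         AVCodecContext **pCodecCtx,
                         int *videoStream)
{
    av_register_all();                        /* register all formats/codecs */

    if (av_open_input_file(pFormatCtx, path, NULL, 0, NULL) != 0)
        return -1;                            /* couldn't open the file */
    if (av_find_stream_info(*pFormatCtx) < 0)
        return -1;                            /* couldn't read stream info */

    *videoStream = -1;
    for (unsigned i = 0; i < (*pFormatCtx)->nb_streams; i++) {
        if ((*pFormatCtx)->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            *videoStream = (int)i;
            break;
        }
    }
    if (*videoStream == -1)
        return -1;                            /* no video stream found */

    *pCodecCtx = (*pFormatCtx)->streams[*videoStream]->codec;

    AVCodec *codec = avcodec_find_decoder((*pCodecCtx)->codec_id);
    if (codec == NULL || avcodec_open(*pCodecCtx, codec) < 0)
        return -1;                            /* decoder missing or failed */

    return 0;
}
```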
Hope this helps someone in the future.
There's a pretty good tutorial on using libavcodec/libavformat here. The bit it sounds like you're interested in is the DoSomethingWithTheImage() function they've left unimplemented.
If you stream H.264 to iOS you need segmented streaming (aka Apple HTTP Live Streaming).
Here is an open source project: http://code.google.com/p/httpsegmenter/