Telling libavcodec/FFmpeg to drop frames
I'm building an app in which I create a video.
The problem is that sometimes (well... most of the time) the frame acquisition process isn't quick enough.
What I'm currently doing is skip the current frame acquisition when I'm running late; however, FFMPEG/libavcodec considers every frame I pass to it as the next frame in line, so if I drop 1 out of every 2 frames, a 20-second video will only last 10. More problems come in as soon as I add sound, since sound processing is way faster...
What I'd like is to tell FFMPEG: "the last frame should last twice as long as originally intended", or anything else that would allow me to process in real time.
I tried stacking the frames at one point, but this ends up eating all my memory (I also tried to 'stack' my frames on the hard drive, which was way too slow, as I expected).
I guess I'll have to work with the pts manually, but all my attempts have failed, and reading the code of other apps that use ffmpeg, such as VLC, wasn't of great help... so any advice would be much appreciated!
Thanks a lot in advance!
Your output will probably be considered variable frame rate (VFR), but you can simply generate a timestamp using wall-clock time when a frame arrives and apply it to your AVFrame before encoding it. The frame will then be displayed at the correct time on playback.
For an example of how to do this (at least the specifying-your-own-timestamp part), see doc/examples/muxing.c in the FFmpeg distribution (line 491 in my current git pull):
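A rough reconstruction of the referenced line, assuming c is the video AVCodecContext and video_st the output AVStream (the original snippet is described in the next paragraph):

    /* advance the frame timestamp by one tick of the codec's timebase,
     * rescaled into the stream's timebase */
    frame->pts += av_rescale_q(1, c->time_base, video_st->time_base);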
Here the author is incrementing the frame timestamp by 1 in the video codec's timebase, rescaled to the video stream's timebase. In your case, you can simply rescale the number of seconds since you started capturing frames from an arbitrary timebase to your output video stream's timebase (as in the above example). For example, if your arbitrary timebase is 1/1000 and you receive a frame 0.25 seconds after you started capturing, do this:
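A minimal sketch of that calculation (0.25 s = 250 ticks in a 1/1000 timebase; video_st is again an assumed output AVStream):

    /* rescale 250 ticks of the 1/1000 capture timebase into the
     * output stream's timebase */
    AVRational capture_tb = {1, 1000};
    frame->pts = av_rescale_q(250, capture_tb, video_st->time_base);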
Then encode the frame as usual.
Many (most?) video formats don't permit leaving out frames. Instead, try reusing old video frames when you can't get a fresh one in time.
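A minimal sketch of that idea, using assumed names (enc for the open encoder context, last_frame for the previously captured frame, next_pts for a running frame counter):

    /* when acquisition is late, re-encode the previous frame so the
     * timeline keeps advancing instead of a timestamp going missing */
    AVFrame *src = got_new_frame ? new_frame : last_frame;
    src->pts = next_pts++;           /* in the encoder's timebase */
    avcodec_send_frame(enc, src);    /* then drain packets as usual */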
Just an idea... when the processing is lagging, have you tried passing the same frame to it again (and dropping the current one)? Maybe it can process the duplicated frame quickly.
There's this ffmpeg command line switch
-threads ...
for multicore processing, so you should be able to do something similar with the API (though I have no idea how). This might solve your problem.
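A minimal sketch of the API-side equivalent, assuming enc is your AVCodecContext (a thread_count of 0 asks libavcodec to pick a thread count automatically):

    /* set before avcodec_open2(); 0 = auto-detect the thread count */
    enc->thread_count = 0;
    if (avcodec_open2(enc, codec, NULL) < 0) {
        /* handle the error */
    }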