FFMPEG: transcode 2 videos side by side into 1 frame?
I have 2 videos: HEADSHOT.MOV and SCREEN.MOV. They are both large files and I am looking to both shrink (size, bitrate, etc) and place these two side by side in the same, very wide, video frame. The end result would be that when you play the output_video.mp4, you would have a very wide frame with both videos in sync and playing at the same rate.
Here is the syntactically incorrect version of what I am trying to do:
ffmpeg -i HEADSHOT.MOV -t 00:02:00 -acodec libfaac -ab 64k -vcodec libx264 -r 30 -pass 1 -s 374x210 -vf "movie=SCREEN.MOV [small]; [in][small] -an -r 30 -pass 1 -s 374x210 overlay=10:10 -t 00:02:00 [out]" -threads 0 output_movie.mp4
In the above example, I also tried to set a test movie duration of 2 minutes, which raises another question: what is the best way to handle 2 movies of varying length (if they are close)?
The resources I have found helpful so far are:
Multiple video sources combined into one and
http://ffmpeg.org/ffmpeg.html#overlay-1
Any help/advice is greatly appreciated. I am having trouble with the FFMPEG syntax! Thank you!
2 Answers
The result can be achieved with a combination of the scale, pad and overlay filters.
Here the first video is shrunk by half and padded back to its original size. The second video is shrunk by two thirds and overlaid on the right half (the padding area) of the first one.
The shorter video can be faded out; otherwise its last frame will be displayed until the end of the combined video.
The resulting bit rate can be set with the -b:v option. Also, video sizes and positions can be specified in pixels for the pad, scale and overlay filters.
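The answer's actual command seems to have been lost from this copy of the page. As a rough sketch only (the scale factors, fade numbers and bit rate below are my assumptions, and the file names are taken from the question), the filter chain described above could look like:

```shell
# Sketch of the described scale/pad/overlay chain (assumed values).
# First input: halved, then padded back to its original width;
# second input (via the movie source filter): shrunk, faded out,
# and overlaid on the right half.
FILTER='[in] scale=iw/2:ih/2, pad=2*iw:ih [left]; movie=SCREEN.MOV, scale=iw/3:ih/3, fade=out:300:30:alpha=1 [right]; [left][right] overlay=main_w/2:0 [out]'
CMD="ffmpeg -i HEADSHOT.MOV -vf \"$FILTER\" -b:v 768k output_movie.mp4"
# Printed for inspection; run it once the input files actually exist.
echo "$CMD"
```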
中。In order to have one video take up the full left half of the output video and the other video take up the full right half of the output video AND to have the correct audio, I will expand upon @Dmitry Shkuropatsky's answer. This took me like 5 or more minutes to figure out, and I used ffmpeg version 3.4 (Copyright (c) 2000-2017):
>ffmpeg -i left.webm -vf "[in] scale=iw/2:ih/2, pad=2*iw:ih [left]; movie=right.mp4, scale=iw/2:ih/2, fade=alpha=1 [right]; [left][right] overlay=main_w/2:0 [out]" -c:a aac -b:v 3800k output.mp4
>ffmpeg -i output.mp4 -i right.mp4 -c copy -map 0:0 -map 1:1 -shortest output_with_audio.mp4
Changes:
I successfully removed the fade out option/argument because it was causing me problems. If you want to use fade out, then maybe change the numbers in fade=out:300:30:alpha=1 for your specific case.
Instead of the right half only being two-thirds filled by the right video, I changed it so the video completely fills the right half.
I ran the second FFmpeg command because the first one (with all the -vf stuff) seems to only use the audio from the contents of -i and not the contents of movie=. This is a problem if -i is a video with no audio and you want to use the audio from movie=. The second ffmpeg command copies the video stream from output.mp4 and the audio stream from right.mp4 into output_with_audio.mp4.
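For what it's worth, the two-step audio workaround can also be avoided by giving ffmpeg both files as regular inputs and using -filter_complex together with -map to pick the streams explicitly. This is a sketch using the same file names as the answer above, not something from the original posts, so check it against your ffmpeg version:

```shell
# Sketch: both files as ordinary inputs. The overlay builds the wide
# frame; -map "[v]" selects the composed video, -map 1:a selects the
# audio of the second input (right.mp4), and -shortest trims the
# output to the shorter stream.
FILTERGRAPH='[0:v] scale=iw/2:ih/2, pad=2*iw:ih [left]; [1:v] scale=iw/2:ih/2 [right]; [left][right] overlay=main_w/2:0 [v]'
CMD="ffmpeg -i left.webm -i right.mp4 -filter_complex \"$FILTERGRAPH\" -map \"[v]\" -map 1:a -c:a aac -b:v 3800k -shortest output_with_audio.mp4"
# Printed for inspection; run it once the input files actually exist.
echo "$CMD"
```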