Providing input to ffmpeg using named pipes

Posted 2024-10-31 06:28:14

I have a C program which generates a series of images, and I want to turn them into a video that can either be streamed in real time or stored in a file. While reading the ffmpeg documentation, I repeatedly came across the fact that ffmpeg can take input from named pipes.

My question is: what format should the data fed into the pipe be in, and how do I write the files into the pipe?

Comments (3)

檐上三寸雪 2024-11-07 06:28:14

From what I know, there aren't any requirements on the format of the video that is put into the named pipe. You could put anything ffmpeg can open. For instance, I developed a program using the ffmpeg libraries that read an h264 video from a named pipe and retrieved statistics from it - the named pipe was filled by another program. This is really a very nice and clean solution for continuous video.

Now, concerning your case, I believe you have a small problem: the named pipe is just one file, and ffmpeg won't be able to tell that there are multiple images in that one file! So if you declare the named pipe as an input, ffmpeg will believe you have only one image - not good enough...

One solution I can think of is to declare that your named pipe contains a video - so ffmpeg will continuously read from it and store or stream it. Of course, your C program would need to generate that video and write it to the named pipe... This isn't as hard as it seems! You could convert your images (you haven't told us what their format is) to YUV and simply write them one after the other into the named pipe (a YUV video is a headerless series of YUV images - and you can easily convert from BMP to YUV, just check the Wikipedia entry on YUV). Then ffmpeg will think that the named pipe contains a simple YUV file, so you can finally read from it and do whatever you want with it.
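
A minimal sketch of what that writer side could look like, assuming 640x480 YUV420p frames, a FIFO at /tmp/video_pipe, and a hypothetical fill_frame() standing in for your real image generator (the matching ffmpeg rawvideo read command is shown in the comment; all of these names and sizes are illustrative, not from the answer above):

/* Writer side: create a FIFO and push raw YUV420p frames into it, one after another.
 * Reader side (pixel format, size and rate must match what we write):
 *   ffmpeg -f rawvideo -pixel_format yuv420p -video_size 640x480 -framerate 25 -i /tmp/video_pipe out.mp4
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

#define WIDTH      640
#define HEIGHT     480
#define FRAME_SIZE (WIDTH * HEIGHT * 3 / 2)   /* YUV420p: full-size Y plane + quarter-size U and V */

/* Hypothetical stand-in for your real image source: fill one YUV420p frame. */
static void fill_frame(uint8_t *buf, int frame_no)
{
    memset(buf, 16 + (frame_no % 220), WIDTH * HEIGHT);            /* Y plane */
    memset(buf + WIDTH * HEIGHT, 128, WIDTH * HEIGHT / 4);          /* U plane */
    memset(buf + WIDTH * HEIGHT * 5 / 4, 128, WIDTH * HEIGHT / 4);  /* V plane */
}

/* A pipe may accept a large write in pieces, so loop until the whole frame is out. */
static int write_full(int fd, const uint8_t *buf, size_t len)
{
    while (len > 0) {
        ssize_t n = write(fd, buf, len);
        if (n < 0)
            return -1;
        buf += n;
        len -= (size_t)n;
    }
    return 0;
}

int main(void)
{
    const char *pipe_path = "/tmp/video_pipe";
    mkfifo(pipe_path, 0666);                 /* harmless if the FIFO already exists */

    int fd = open(pipe_path, O_WRONLY);      /* blocks until ffmpeg opens the read end */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    uint8_t *frame = malloc(FRAME_SIZE);
    if (!frame)
        return 1;

    for (int i = 0; i < 250; i++) {          /* 10 seconds of video at 25 fps */
        fill_frame(frame, i);
        if (write_full(fd, frame, FRAME_SIZE) < 0) {
            perror("write");
            break;
        }
    }

    free(frame);
    close(fd);
    return 0;
}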

命比纸薄 2024-11-07 06:28:14

You could use the -loop_input command-line option:

ffmpeg -loop_input -re -timestamp now -f image2 -r 25 -sameq -i input.jpg -an -vcodec mpeg2video out.mp4

In your case, replace input.jpg with the pipe. FFmpeg will then create a new frame from the input file (or pipe) every 1/25 of a second.

背叛残局 2024-11-07 06:28:14

Use image2 with wildcard filenames, assuming these images exist as files, from the ffmpeg docs:

for creating a video from the images in the file sequence ‘img-001.jpeg’, ‘img-002.jpeg’, ..., assuming an input frame rate of 10 frames per second:

ffmpeg -i 'img-%03d.jpeg' -r 10 out.mkv

i.e., using the libavformat API:

AVFormatContext *pFormatCtx = avformat_alloc_context();
avformat_open_input(&pFormatCtx, "img-%03d.jpeg", NULL, NULL);
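
Putting those two lines into a minimal, self-contained shape - a sketch only, assuming a reasonably recent FFmpeg, with error handling kept to a bare minimum and the printf standing in for whatever you actually do with each image:

/* Build with something like: cc demo.c $(pkg-config --cflags --libs libavformat) */
#include <stdio.h>
#include <libavformat/avformat.h>

int main(void)
{
    AVFormatContext *pFormatCtx = avformat_alloc_context();

    /* The image2 demuxer recognises the printf-style pattern and opens the whole sequence. */
    if (avformat_open_input(&pFormatCtx, "img-%03d.jpeg", NULL, NULL) < 0) {
        fprintf(stderr, "could not open image sequence\n");
        return 1;
    }
    if (avformat_find_stream_info(pFormatCtx, NULL) < 0) {
        fprintf(stderr, "could not read stream info\n");
        return 1;
    }

    /* Each packet carries one encoded image from the sequence. */
    AVPacket *pkt = av_packet_alloc();
    while (av_read_frame(pFormatCtx, pkt) >= 0) {
        printf("read an image of %d bytes\n", pkt->size);
        av_packet_unref(pkt);
    }

    av_packet_free(&pkt);
    avformat_close_input(&pFormatCtx);
    return 0;
}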