How to prevent deadlock when using named pipes?
I have a capture process which writes raw video data and audio data to files. Something like this will capture 100 frames of data.
./capture -n 100 -f video_file -a audio_file
This gives me a 768000-byte audio_file and a 414720000-byte video_file. These seem to add up as expected:
- 414720000 == 1920x1080 (pixels/frame) * 2 (bytes/pixel) * 100 (frames)
- 768000 == 48k (Hz) * 2 (bytes/sample) * 2 channels * 100 (frames) / 25 (frames/s)
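(Just to double-check that arithmetic, here is a quick throwaway Python snippet that reproduces both figures; it uses only the numbers listed above.)

    # Sanity check of the expected file sizes for 100 captured frames
    frames = 100
    video_bytes = 1920 * 1080 * 2 * frames        # pixels/frame * bytes/pixel * frames
    audio_bytes = 48000 * 2 * 2 * frames // 25    # Hz * bytes/sample * channels * frames / fps
    print(video_bytes, audio_bytes)               # 414720000 768000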
Then when I encode that data like
ffmpeg -i audio_file -i video_file out.flv
I do get a playable video with sound (actually I have a lot more stuff in the command line, but those are the important parts for the purposes of this question).
Now, I actually want a live stream, not a file, and I can do this OK for just video with something like this:
./capture -f /dev/stdout | ffmpeg -i - udp://127.0.0.1:10000
I get a video stream with no audio broadcast over UDP, and I was able to receive and play the stream OK. But when I want to add audio into the picture, I run into trouble. I don't think I can send them both on stdout, and I can't use stderr because the capture process already chats on that. So I have tried to do it with named pipes like this:
mkfifo audio_pipe
mkfifo video_pipe
ffmpeg -i audio_pipe -i video_pipe out.flv &
./capture -f video_pipe -a audio_pipe
But it is not working; it seems everything deadlocks. I have tested just running ./capture -f video_file -a audio_file, then opening two new shells and doing cat video_file > /dev/null and cat audio_file > /dev/null; once both cats are running, this unblocks the capture process, so it seems it has no trouble writing to the pipes. I had a peek at the source of the capture code, and the way it works is with a "frame arrived" callback from deeper in the API, which then writes the video frame and the audio data, in that order (it is blocking). I don't know what ffmpeg does: whether it reads the input video file or audio file sequentially, in either order, or reads them simultaneously in threads. I tried changing the order to ffmpeg -i video_pipe -i audio_pipe out.flv, but unfortunately everything still locks up. Using only one named pipe for the video data works normally.
How can I solve my problem? I will script it with the Python subprocess module once I understand the best way to avoid the blocking problem.
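For what it's worth, this is roughly the shape of the Python wrapper I have in mind; it is only a sketch using the subprocess module with the exact commands from above (none of the extra options), and as written it hits the same deadlock as the shell version:

    import os
    import subprocess

    # Create the FIFOs if they don't exist yet (same names as above)
    for name in ("video_pipe", "audio_pipe"):
        if not os.path.exists(name):
            os.mkfifo(name)

    # Start ffmpeg first so it can open the FIFOs for reading...
    ffmpeg = subprocess.Popen(
        ["ffmpeg", "-i", "audio_pipe", "-i", "video_pipe", "out.flv"])

    # ...then start the capture process, which opens them for writing
    capture = subprocess.Popen(
        ["./capture", "-f", "video_pipe", "-a", "audio_pipe"])

    capture.wait()
    ffmpeg.wait()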
Comments (2)
In Linux, the pipe buffer is limited to 64 KiB (65536 bytes) by default; it's possible that you are ending up with a deadlock where capture won't write more audio until it can write more video, and ffmpeg won't read more video until it gets more audio.
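If you want to check whether the buffer size is really the bottleneck, Linux lets you enlarge a pipe's buffer with fcntl(F_SETPIPE_SZ) (kernel 2.6.35+), up to the limit in /proc/sys/fs/pipe-max-size. A minimal sketch, assuming Python 3.10+ where fcntl.F_SETPIPE_SZ is exposed (on older Pythons the raw Linux constant is 1031); the same call can be applied to a descriptor opened on one of your FIFOs, as long as that descriptor stays open while capture and ffmpeg attach:

    import fcntl
    import os

    r, w = os.pipe()
    print(fcntl.fcntl(w, fcntl.F_GETPIPE_SZ))        # default size, typically 65536
    fcntl.fcntl(w, fcntl.F_SETPIPE_SZ, 1024 * 1024)  # ask for 1 MiB
    print(fcntl.fcntl(w, fcntl.F_GETPIPE_SZ))        # size actually granted

Note that even 1 MiB is smaller than one of your raw 1080p frames (about 4 MB each, going by the sizes in the question), so this may only move the problem around rather than fix it.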
In the end, I couldn't get ffmpeg to read from the FIFOs without blocking, so the capture app's source code was modified to write audio frames and video frames from separate worker threads rather than sequentially.
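The pattern, sketched here in Python since that's what the wrapper script uses (the real change was inside the capture app itself, and the names below are made up): the "frame arrived" callback only enqueues data, and each FIFO gets its own writer thread, so a blocked write on the video pipe can no longer hold up the audio pipe.

    import queue
    import threading

    video_q = queue.Queue()
    audio_q = queue.Queue()

    def writer(path, q):
        # Opening a FIFO for writing blocks until a reader (ffmpeg) opens it
        with open(path, "wb") as f:
            while True:
                chunk = q.get()
                if chunk is None:        # sentinel: end of stream
                    break
                f.write(chunk)
                f.flush()

    threading.Thread(target=writer, args=("video_pipe", video_q), daemon=True).start()
    threading.Thread(target=writer, args=("audio_pipe", audio_q), daemon=True).start()

    def on_frame(video_frame, audio_chunk):
        # Called from the capture callback; never blocks on the pipes itself
        video_q.put(video_frame)
        audio_q.put(audio_chunk)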