GStreamer: recording multiple segments of an RTP stream to files
I'm writing a C++ application with GStreamer and am trying to achieve the following: connect to an RTP audio stream (Opus), write one copy of the entire stream to an audio file, and then additionally, based on events triggered by the user, create a separate series of audio files consisting of segments of the RTP stream (think of a start/stop record toggle button).
Currently using udpsrc -> rtpbin -> rtpopusdepay -> queue -> tee (pipeline splits here)
tee_stream_1 -> queue -> webmmux -> filesink
tee_stream_2 -> queue -> webmmux -> filesink
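Roughly, the static version of that topology as a gst-launch-1.0 sketch (the port, payload type, and caps below are just placeholders, not my real values):

    gst-launch-1.0 -e \
        rtpbin name=rtp \
        udpsrc port=5000 caps="application/x-rtp,media=audio,encoding-name=OPUS,clock-rate=48000,payload=96" ! rtp.recv_rtp_sink_0 \
        rtp. ! rtpopusdepay ! queue ! tee name=split \
        split. ! queue ! webmmux ! filesink location=full_stream.webm \
        split. ! queue ! webmmux ! filesink location=stream_segment_1.webm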
tee_stream_1 should be active during the entire duration of the pipeline. tee_stream_2 is what should generate multiple files based on user toggle events.
An example scenario:
- pipeline receives the RTP audio stream, tee_stream_1 begins writing audio to full_stream.webm
- 2 seconds into the RTP audio stream, the user toggles "start recording". tee_stream_2 begins writing audio to stream_segment_1.webm
- 5 seconds into the RTP audio stream, the user toggles "stop recording". tee_stream_2 finishes writing audio to stream_segment_1.webm and closes the file.
- 8 seconds into the RTP audio stream, the user toggles "start recording". tee_stream_2 begins writing audio to stream_segment_2.webm
- 9 seconds into the RTP audio stream, the user toggles "stop recording". tee_stream_2 finishes writing audio to stream_segment_2.webm and closes the file.
- 10 seconds into the RTP audio stream, the stream ends, and full_stream.webm finishes writing audio and closes.
The end result is 3 audio files: full_stream.webm with 10 seconds of audio, stream_segment_1.webm with 3 seconds of audio, and stream_segment_2.webm with 1 second of audio.
Attempts to do this so far have been met with difficulty, since the muxers seem to require an EOS event to properly finish writing the stream_segment files; however, this EOS is propagated to the other elements of the pipeline, which has the undesired effect of ending all of the recordings. Any ideas on how to best accomplish this? I can provide code if it would be helpful.
Thank you for any and all assistance!
For such a case, I'd suggest giving RidgeRun's open-source gstd and interpipe plugins a try; they provide high-level control of dynamic pipelines.
You can install them with something like:
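(a rough sketch of a source build; the exact dependencies, and whether the projects use autotools or meson, depend on the version, so check each project's README)

    # GStreamer Daemon (gstd)
    git clone https://github.com/RidgeRun/gstd-1.x.git
    cd gstd-1.x
    ./autogen.sh && ./configure && make && sudo make install
    cd ..

    # interpipe elements (interpipesrc / interpipesink)
    git clone https://github.com/RidgeRun/gst-interpipe.git
    cd gst-interpipe
    ./autogen.sh && ./configure && make && sudo make install
    cd ..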
interpipe pipelines are managed by the gstd daemon, so in a first terminal you would just start it. It will display the operations it performs and any errors:
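    # start the GStreamer Daemon in the foreground and leave it running;
    # it will log the requests it serves and any errors
    gstd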
Now, in a second terminal, you would try this script (here recording into the directory /home/user/tmp/tmp2; adjust for your case):
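Something along these lines (only a sketch: the UDP port, caps, and payload type are assumptions, the rtpbin stage is simplified to a plain rtpjitterbuffer, and the sleep calls stand in for the user's start/stop toggle; in your real application you would issue the same gst-client calls from the toggle handler):

    #!/bin/bash
    REC_DIR=/home/user/tmp/tmp2

    # Source pipeline: receive and depayload the RTP Opus stream once and publish
    # it on an interpipesink so any number of consumer pipelines can attach to it.
    gst-client pipeline_create rtp_src "udpsrc port=5000 caps=application/x-rtp,media=audio,encoding-name=OPUS,clock-rate=48000,payload=96 ! rtpjitterbuffer ! rtpopusdepay ! interpipesink name=audio sync=false async=false"

    # Consumer 1: records the whole session to full_stream.webm.
    gst-client pipeline_create full_rec "interpipesrc listen-to=audio is-live=true allow-renegotiation=true format=time ! queue ! webmmux ! filesink location=$REC_DIR/full_stream.webm"

    gst-client pipeline_play rtp_src
    gst-client pipeline_play full_rec

    # "start recording" toggle: create and start a segment pipeline.
    gst-client pipeline_create seg1 "interpipesrc listen-to=audio is-live=true allow-renegotiation=true format=time ! queue ! webmmux ! filesink location=$REC_DIR/stream_segment_1.webm"
    gst-client pipeline_play seg1
    sleep 3

    # "stop recording" toggle: send EOS to this segment pipeline only so its muxer
    # finalizes the WebM file, wait for the EOS message on its bus, then tear the
    # pipeline down. The source and the full recording keep running untouched.
    gst-client bus_filter seg1 eos
    gst-client event_eos seg1
    gst-client bus_read seg1
    gst-client pipeline_stop seg1
    gst-client pipeline_delete seg1

    # ...repeat create/play/event_eos/stop/delete for stream_segment_2.webm, etc.

    # End of session: finalize the full recording, then shut everything down.
    gst-client bus_filter full_rec eos
    gst-client event_eos full_rec
    gst-client bus_read full_rec
    gst-client pipeline_stop full_rec
    gst-client pipeline_delete full_rec
    gst-client pipeline_stop rtp_src
    gst-client pipeline_delete rtp_src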
and check the resulting files.