Moving audio across a local network with GStreamer
I need to move realtime audio between two Linux machines, which are both running custom software (of mine) that builds on top of GStreamer. (The software already has other communication between the machines, over a separate TCP-based protocol - I mention this in case having reliable out-of-band data makes a difference to the solution.)
The audio input will be a microphone / line-in on the sending machine, with normal audio output as the sink on the destination; alsasrc and alsasink are the most likely elements, though for testing I have been using audiotestsrc instead of a real microphone.
GStreamer offers a multitude of ways to move data around over networks - RTP, RTSP, GDP payloading, UDP and TCP servers, clients and sockets, and so on. There are also many examples on the web of streaming both audio and video - but none of them seem to work for me in practice; either the destination pipeline fails to negotiate caps, or I hear a single packet and then the pipeline stalls, or the destination pipeline bails out immediately with no data available.
In all cases, I'm testing on the command line with just gst-launch. No compression of the audio data is required - raw audio, or trivial WAV, uLaw or aLaw encoding is fine; what's more important is low-ish latency.
To debug that kind of problem I would try:
- gst-launch audiotestsrc ! alsasink, to check that sound works
- a fakesink or filesink, to see if we get any buffers
- GST_DEBUG, for example GST_DEBUG=GST_CAPS:4 to check caps, or GST_DEBUG=*:2 to get all errors/warnings
These pipelines work for me:
with RTP:
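The concrete pipelines were lost from this copy of the answer; what follows is a minimal sketch of what such an RTP pair typically looks like with 0.10-era gst-launch, using uLaw/PCMU (one of the encodings the question allows). The port and <receiver-ip> are placeholders, not values from the original answer:

```shell
# Sender: resample to 8 kHz mono, uLaw-encode, payload as RTP PCMU over UDP
gst-launch audiotestsrc ! audioconvert ! audioresample ! \
    audio/x-raw-int,rate=8000,channels=1 ! mulawenc ! rtppcmupay ! \
    udpsink host=<receiver-ip> port=5555

# Receiver: UDP carries no caps negotiation, so udpsrc must be told
# the RTP caps explicitly or the pipeline cannot link
gst-launch udpsrc port=5555 \
    caps="application/x-rtp, media=(string)audio, clock-rate=(int)8000, encoding-name=(string)PCMU" ! \
    rtppcmudepay ! mulawdec ! audioconvert ! alsasink
```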
with TCP:
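The TCP pipelines are likewise missing here; a sketch using GDP payloading, which serialises the caps along with the buffers so the receiver can negotiate (the port and <sender-ip> are placeholders):

```shell
# Sender: wrap buffers and their caps in GDP framing, serve on TCP port 3000
gst-launch audiotestsrc ! audioconvert ! gdppay ! \
    tcpserversink host=0.0.0.0 port=3000

# Receiver: connect, strip the GDP framing, play
gst-launch tcpclientsrc host=<sender-ip> port=3000 ! gdpdepay ! \
    audioconvert ! alsasink
```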
My solution is very similar to tilljoel's, but I am using a microphone (which is what you need) as the source - hence some tweaking in the GStreamer pipeline.
Decode Audio from Microphone using TCP:
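The pipeline body did not survive extraction; a plausible receiver sketch, assuming the sender serves a GDP-framed microphone stream on TCP port 3000 (<sender-ip> and the port are assumptions):

```shell
# Receiver: pull the GDP-framed microphone stream over TCP and play via ALSA
gst-launch tcpclientsrc host=<sender-ip> port=3000 ! gdpdepay ! \
    audioconvert ! alsasink
```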
Encode Audio from Microphone using TCP:
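Again only the heading survived; a sender sketch capturing from ALSA instead of audiotestsrc (the port is an assumption):

```shell
# Sender: capture the microphone, frame with GDP so the caps survive the TCP link
gst-launch alsasrc ! audioconvert ! gdppay ! \
    tcpserversink host=0.0.0.0 port=3000
```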
Decode Audio from Microphone using RTP:
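The original RTP receiver pipeline is missing; a hedged sketch assuming the sender streams uLaw (PCMU) RTP to UDP port 5555 (both assumptions):

```shell
# Receiver: declare the RTP caps on udpsrc (UDP carries none), depayload, decode, play
gst-launch udpsrc port=5555 \
    caps="application/x-rtp, media=(string)audio, clock-rate=(int)8000, encoding-name=(string)PCMU" ! \
    rtppcmudepay ! mulawdec ! audioconvert ! alsasink
```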
Encode Audio from Microphone using RTP:
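The matching RTP sender is also missing; a sketch capturing the microphone and sending uLaw over RTP (<receiver-ip> and the port are placeholders):

```shell
# Sender: microphone -> 8 kHz mono -> uLaw -> RTP PCMU over UDP
gst-launch alsasrc ! audioconvert ! audioresample ! \
    audio/x-raw-int,rate=8000,channels=1 ! mulawenc ! rtppcmupay ! \
    udpsink host=<receiver-ip> port=5555
```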
Can you post some of the gst-launch pipelines you have tried? That might help in understanding why you are having issues. In general RTP/RTSP should work pretty easily.
Edit:
A couple of items I can think of:
1. Change host=localhost to host=<ip-address>, where <ip-address> is the actual IP address of the other Linux machine.
2. Add caps="application/x-rtp, media=(string)audio" to the udpsrc element in the receiver.
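Both items can be illustrated in one hedged pair of sketches; the address 192.168.1.20, the port, and the PCMU encoding are assumptions, not values from the original answer (note that the depayloader may also need clock-rate and encoding-name in the caps, not just media):

```shell
# Item 1: send to the other machine's real address, not localhost
gst-launch audiotestsrc ! audioconvert ! audioresample ! \
    audio/x-raw-int,rate=8000,channels=1 ! mulawenc ! rtppcmupay ! \
    udpsink host=192.168.1.20 port=5555

# Item 2: spell out the RTP caps on udpsrc in the receiver
gst-launch udpsrc port=5555 \
    caps="application/x-rtp, media=(string)audio, clock-rate=(int)8000, encoding-name=(string)PCMU" ! \
    rtppcmudepay ! mulawdec ! audioconvert ! alsasink
```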
A small update from 2023.
sender:
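The sender pipeline is missing from this copy; a sketch in GStreamer 1.0 syntax carrying uncompressed 16-bit PCM over RTP (the address, port, sample rate, and channel count are assumptions):

```shell
# Sender (GStreamer 1.0): raw big-endian PCM payloaded as RTP L16
gst-launch-1.0 alsasrc ! audioconvert ! audioresample ! \
    audio/x-raw,format=S16BE,rate=44100,channels=2 ! rtpL16pay ! \
    udpsink host=<receiver-ip> port=5004
```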
receiver:
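The receiver pipeline is also missing; a sketch assuming the sender streams RTP L16 at 44.1 kHz stereo to UDP port 5004 (all assumptions - the caps here must mirror whatever the sender actually emits):

```shell
# Receiver (GStreamer 1.0): declare the RTP caps on udpsrc, depayload, play
gst-launch-1.0 udpsrc port=5004 \
    caps="application/x-rtp, media=(string)audio, clock-rate=(int)44100, encoding-name=(string)L16, channels=(int)2" ! \
    rtpL16depay ! audioconvert ! audioresample ! alsasink
```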