Moving audio over a local network with GStreamer

Published 2024-08-29 23:24:58

I need to move realtime audio between two Linux machines, both running custom software (of mine) that builds on top of GStreamer. (The software already has other communication between the machines over a separate TCP-based protocol; I mention this in case having reliable out-of-band data makes a difference to the solution.)

The audio input will be a microphone / line-in on the sending machine, with normal audio output as the sink on the destination; alsasrc and alsasink are the most likely candidates, though for testing I have been using audiotestsrc instead of a real microphone.

GStreamer offers a multitude of ways to move data around over networks - RTP, RTSP, GDP payloading, UDP and TCP servers, clients and sockets, and so on. There are also many examples on the web of streaming both audio and video - but none of them seem to work for me in practice; either the destination pipeline fails to negotiate caps, or I hear a single packet and then the pipeline stalls, or the destination pipeline bails out immediately with no data available.

In all cases, I'm testing on the command line with just gst-launch. No compression of the audio data is required - raw audio, or trivial WAV, uLaw or aLaw encoding is fine; what's more important is low-ish latency.
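As a sanity check on the "no compression required" point, the bandwidth of raw 16-bit mono PCM at 44.1 kHz is easy to work out (plain shell arithmetic; integer division, so the result is truncated to whole kbit/s):

```shell
# Raw PCM bandwidth: samples/sec * bytes/sample * channels * 8 bits, in kbit/s.
rate=44100
bytes_per_sample=2
channels=1
kbps=$(( rate * bytes_per_sample * channels * 8 / 1000 ))
echo "${kbps} kbit/s"   # prints "705 kbit/s"
```

At roughly 0.7 Mbit/s per mono stream, uncompressed audio is negligible on any wired LAN, so skipping compression entirely is a reasonable choice here.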

Comments (4)

小苏打饼 2024-09-05 23:24:58

To debug that kind of problem I would try:

  1. Run gst-launch audiotestsrc ! alsasink to check that sound works
  2. Use a fakesink or filesink to see if we get any buffers
  3. Try to find the pipeline problem with GST_DEBUG, for example check caps with GST_DEBUG=GST_CAPS:4, or use *:2 to get all errors/warnings
  4. Use Wireshark to see if packets are sent

These pipelines work for me:

With RTP:

gst-launch-0.10 -v udpsrc port=5000 ! "application/x-rtp,media=(string)audio, clock-rate=(int)44100, width=16, height=16, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, channel-positions=(int)1, payload=(int)96" ! rtpL16depay ! audioconvert ! alsasink sync=false

gst-launch-0.10 audiotestsrc ! audioconvert ! audio/x-raw-int,channels=1,depth=16,width=16,rate=44100 ! rtpL16pay  ! udpsink host=localhost port=5000

With TCP:

gst-launch-0.10 tcpserversrc host=localhost port=3000 ! audio/x-raw-int, endianness="(int)1234", signed="(boolean)true", width="(int)16", depth="(int)16", rate="(int)44100", channels="(int)1" ! alsasink

gst-launch-0.10 audiotestsrc ! tcpclientsink host=localhost port=3000
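For the latency question, it helps to know how much audio each RTP packet carries, since the payloader must buffer that much before sending. A rough calculation, assuming rtpL16pay's default mtu of 1400 bytes and a minimal 12-byte RTP header (no CSRC entries):

```shell
# Payload bytes per packet / bytes per sample = samples per packet,
# then convert samples to microseconds at the 44.1 kHz clock rate.
mtu=1400
rtp_header=12
bytes_per_sample=2
rate=44100
samples=$(( (mtu - rtp_header) / bytes_per_sample ))
us_per_packet=$(( samples * 1000000 / rate ))
echo "${samples} samples, ~${us_per_packet} us per packet"
```

That comes to about 15.7 ms of audio per packet for 16-bit mono at 44.1 kHz, which puts a floor on the packetization latency before any network or sink buffering is added; a smaller mtu on the payloader would trade more packets for lower per-packet delay.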
岁月静好 2024-09-05 23:24:58

My solution is very similar to tilljoel's, but I am using the microphone (which is what you need) as the source, hence some tweaking of the GStreamer pipeline.

Decode Audio from Microphone using TCP:

gst-launch-0.10 tcpserversrc host=localhost port=3000 ! audio/x-raw-int, endianness="(int)1234", signed="(boolean)true", width="(int)16", depth="(int)16", rate="(int)22000", channels="(int)1" ! alsasink

Encode Audio from Microphone using TCP:

gst-launch-0.10 pulsesrc ! audio/x-raw-int,rate=22000,channels=1,width=16 ! tcpclientsink host=localhost port=3000

Decode Audio from Microphone using RTP:

gst-launch-0.10 -v udpsrc port=5000 ! "application/x-rtp,media=(string)audio, clock-rate=(int)22000, width=16, height=16, encoding-name=(string)L16, encoding-params=(string)1, channels=(int)1, channel-positions=(int)1, payload=(int)96" ! rtpL16depay ! audioconvert ! alsasink sync=false

Encode Audio from Microphone using RTP:

gst-launch-0.10 pulsesrc ! audioconvert ! audio/x-raw-int,channels=1,depth=16,width=16,rate=22000 ! rtpL16pay  ! udpsink host=localhost port=5000
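If the end-to-end latency with the defaults is still too high, the buffer-time and latency-time properties (in microseconds) on the Pulse/ALSA source and sink elements can be lowered. A hypothetical variant of the TCP sender above; the values are guesses and need tuning per machine, since too-small buffers cause dropouts:

```shell
# Sketch only: buffer-time/latency-time values below are placeholders to tune.
gst-launch-0.10 pulsesrc buffer-time=40000 latency-time=10000 ! \
    audio/x-raw-int,rate=22000,channels=1,width=16 ! \
    tcpclientsink host=localhost port=3000
```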
帝王念 2024-09-05 23:24:58

Can you post some of the gst-launch pipelines you have tried? That might help in understanding why you are having issues. In general, RTP/RTSP should work pretty easily.

Edit:
A couple of items I can think of:
1. Change host=localhost to host=<ip-address>, where <ip-address> is the actual IP address of the other Linux machine.
2. Add caps="application/x-rtp, media=(string)audio" to the udpsrc element in the receiver.
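Putting point 2 together, a receiver with explicit caps set on udpsrc might look like this sketch (the rate, channels and payload values must match whatever the sender actually produces; explicit caps are needed because udpsrc cannot infer the stream format from raw UDP packets):

```shell
# Sketch: caps values here assume an L16 mono 44.1 kHz sender.
gst-launch-0.10 -v udpsrc port=5000 \
    caps="application/x-rtp,media=(string)audio,clock-rate=(int)44100,encoding-name=(string)L16,encoding-params=(string)1,channels=(int)1,payload=(int)96" \
    ! rtpL16depay ! audioconvert ! alsasink sync=false
```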

浅黛梨妆こ 2024-09-05 23:24:58

A small update from 2023.

Sender:

gst-launch-1.0 pulsesrc ! audioconvert ! audio/x-raw,channels=1,depth=16,width=16,rate=44100 ! rtpL16pay  ! \
udpsink host=192.168.1.108 port=5200

Receiver:

gst-launch-1.0 -v udpsrc port=5200 ! "application/x-rtp,media=(string)audio, \
clock-rate=(int)44100, width=16, height=16, encoding-name=(string)L16,\
encoding-params=(string)1, channels=(int)1, channel-positions=(int)1, \
payload=(int)96" ! rtpL16depay ! audioconvert ! autoaudiosink sync=false
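One optional refinement not in the answer above: an rtpjitterbuffer between udpsrc and the depayloader absorbs network jitter and reorders late packets, at the cost of a fixed added delay. A sketch; the latency value (in milliseconds) is a guess to tune against how much jitter the network actually shows:

```shell
# Sketch: rtpjitterbuffer adds a fixed latency (ms) in exchange for smoother playback;
# sync=false is dropped since the jitter buffer now provides the timing.
gst-launch-1.0 -v udpsrc port=5200 ! \
    "application/x-rtp,media=(string)audio,clock-rate=(int)44100,encoding-name=(string)L16,encoding-params=(string)1,channels=(int)1,payload=(int)96" ! \
    rtpjitterbuffer latency=50 ! rtpL16depay ! audioconvert ! autoaudiosink
```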