GStreamer + Python: adding and removing audio sources while the pipeline is running
I'm working on a sample Python script, originally found here: Adding and removing audio sources to/from GStreamer pipeline on-the-go.
The aim is to make a script like that one, able to insert and remove audio sources while the pipeline is running, but with an audioconvert element between each source and the adder. This is because, in the more general case, Adder wants all incoming streams to have the same format.
Here is the code: we create two generators (buzzers). The first emits a 1000Hz tone and waits for the return key. The second is a 500Hz tone, which is mixed with the first one after the key press. On pressing return again, only the second generator should be heard.
#!/usr/bin/python
import gobject
gobject.threads_init()
import gst

# THE FOLLOWING FUNCTION IS A REWORK OF THE ORIGINAL, STILL DOING THE JOB
def create_raw_audiotest_signal(pipe, freq, adder):
    # create a buzzer of the given freq
    buzzer = gst.element_factory_make("audiotestsrc", "buzzer%d" % freq)
    buzzer.set_property("freq", freq)
    pipe.add(buzzer)
    buzzersrc = buzzer.get_pad("src")
    # grab a request sink pad on the mixer ...
    sinkpad = adder.get_request_pad("sink%d")
    # ... and connect it to the buzzer
    buzzersrc.link(sinkpad)
    return buzzer, buzzersrc, sinkpad

# THIS IS A MODIFIED VERSION, NOT WORKING, THAT JUST PUTS AN AUDIOCONVERT
# ELEMENT BETWEEN THE GENERATOR AND THE ADDER.
def create_audiotest_signal_with_converter(pipe, freq, adder):
    # create a buzzer of the given freq
    buzzer = gst.element_factory_make("audiotestsrc", "buzzer%d" % freq)
    buzzer.set_property("freq", freq)
    # add a converter because adder wants inputs with the same format
    ac = gst.element_factory_make("audioconvert", "ac%d" % freq)
    pipe.add(buzzer, ac)
    # link the buzzer to the converter ...
    buzzer.link(ac)
    buzzersrc = buzzer.get_pad("src")
    # grab a request sink pad on the mixer ...
    sinkpad = adder.get_request_pad("sink%d")
    # ... and link the converter to the adder
    ac.get_pad("src").link(sinkpad)
    return buzzer, buzzersrc, sinkpad

if __name__ == "__main__":
    # First create our pipeline
    pipe = gst.Pipeline("mypipe")
    # Create a software mixer with "adder"
    adder = gst.element_factory_make("adder", "audiomixer")
    pipe.add(adder)
    # Create the first buzzer
    #buzzer1, buzzersrc1, sinkpad1 = create_raw_audiotest_signal(pipe, 1000, adder)
    buzzer1, buzzersrc1, sinkpad1 = create_audiotest_signal_with_converter(pipe, 1000, adder)
    # Add some output
    output = gst.element_factory_make("autoaudiosink", "audio_out")
    pipe.add(output)
    adder.link(output)
    # Start the playback
    pipe.set_state(gst.STATE_PLAYING)
    raw_input("1kHz test sound. Press <ENTER> to continue.")
    # Get another generator
    #buzzer2, buzzersrc2, sinkpad2 = create_raw_audiotest_signal(pipe, 500, adder)
    buzzer2, buzzersrc2, sinkpad2 = create_audiotest_signal_with_converter(pipe, 500, adder)
    # Start the second buzzer (otherwise streaming stops because of starvation)
    buzzer2.set_state(gst.STATE_PLAYING)
    raw_input("1kHz + 500Hz test sounds playing simultaneously. Press <ENTER> to continue.")
    # Before removing a source, we must use pad blocking to prevent state changes
    buzzersrc1.set_blocked(True)
    # Stop the first buzzer
    buzzer1.set_state(gst.STATE_NULL)
    # Unlink it from the mixer
    buzzersrc1.unlink(sinkpad1)
    # Release the mixer's first sink pad
    adder.release_request_pad(sinkpad1)
    # Because none of the adder's sink pads block here, streaming continues
    raw_input("Only 500Hz test sound. Press <ENTER> to stop.")
If you use create_raw_audiotest_signal in place of create_audiotest_signal_with_converter in both calls, it works, of course. If you use a mixture of the two, it also works, but with an unwanted extra delay in between. The most interesting case is when you use the audioconvert version in both calls: then the script hangs at the first return key.
Does anybody have a suggestion? What am I doing wrong?
Thank you in advance.
I found the answer myself; it was simple indeed...
The elements I added do live in the pipeline, but they keep their own independent state. So the solution is to set the whole pipeline to PLAYING, which in turn propagates that state to all of its children:
pipe.set_state(gst.STATE_PLAYING)
instead of:
buzzer2.set_state(gst.STATE_PLAYING)
and it works again.
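For the record, a sketch of an alternative (this is an assumption on my part, not tested against the script above; it relies on GStreamer's standard sync_state_with_parent call, which the 0.10 gst-python bindings expose on elements): instead of re-setting the state on the whole pipeline, you can ask each newly added element to adopt its parent's current state right after adding and linking it. In the converter version that means syncing both the buzzer and its audioconvert, which the question's code names "ac%d" % freq, so "ac500" here:

```
# Sketch, assuming the gst-python 0.10 bindings used above.
# After adding and linking the new elements, sync each one to the
# running pipeline's state instead of re-setting the whole pipeline:
buzzer2, buzzersrc2, sinkpad2 = create_audiotest_signal_with_converter(pipe, 500, adder)
buzzer2.sync_state_with_parent()
pipe.get_by_name("ac500").sync_state_with_parent()
```

The effect should be the same as pipe.set_state(gst.STATE_PLAYING), but without touching elements that are already running.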