How to clean up the playback sound received from a capture card
I am trying to set up my Linux desktop so I can view and listen to the device connected to my capture card. I wrote this two-line script to do that, but the sound is off-pitch and a bit distorted. How could I clean it up?
arecord --buffer-time=1 -f cd - | aplay --buffer-time=1 -c 5 -r 48000 -f S16_LE - 2> /dev/null &
ffplay -f video4linux2 -framerate 30 -video_size 1920x1080 -input_format mjpeg /dev/video1 2> /dev/null &
I also tried doing this with ffmpeg piped to ffplay, and the sound is crystal clear, but there is a 2-3 second delay on both the video and the sound. Is there a way to fix this?
ffmpeg -framerate 30 -video_size 1920x1080 -thread_queue_size 1024 -input_format mjpeg -i /dev/video1 -f pulse -i 'Analog Input - USB Video' -r 30 -threads 4 -vcodec libx264 -crf 0 -preset ultrafast -vsync 1 -async 1 -f matroska - | ffplay -
1 Answer
Could you try just using ffplay for your second approach? I could be off-base, as I'm only familiar with ffmpeg and don't personally use ffplay, but they share a lot of things (e.g., backend libraries and command-line parsing), so I'm hedging that this would work.

Also, what do you mean by "there is 2-3 seconds delay on the video and sound"? Are they 2-3 seconds behind what you are physically seeing and hearing? Or are they out of sync with each other by that many seconds?
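For reference, a minimal sketch of what "just ffplay" could look like for the video side, reusing the v4l2 settings from your first script. The low-latency flags (-fflags nobuffer, -framedrop) are my assumption rather than something tested against your setup, and ffplay takes only one input, so the audio would still need its own path:

ffplay -fflags nobuffer -framedrop -f video4linux2 -input_format mjpeg -video_size 1920x1080 -framerate 30 /dev/video1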
[addendum]
Not sure if the OP is still checking this post, but there is a solution for combining two inputs in ffplay: use an input filtergraph with the movie and amovie source filters. The following worked in Windows, despite unacceptably large latency. Note that it is shown only for illustration, since a dshow device can output multiple streams by itself (and the latency is still too bad for real-time use):
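Something along these lines; the dshow device names 'USB Video' and 'Analog Input' are hypothetical placeholders for whatever your devices are actually called:

ffplay -f lavfi -i "movie=filename='video=USB Video':format_name=dshow[out0];amovie=filename='audio=Analog Input':format_name=dshow[out1]"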
The same should be possible in Linux (disclaimer: untested, and it may be missing escaping):
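A sketch using the device names from your question: movie opens the v4l2 device and amovie opens the pulse source, and the quoting around the pulse source name is exactly the part that may need extra escaping:

ffplay -f lavfi -i "movie=filename=/dev/video1:format_name=video4linux2[out0];amovie=filename='Analog Input - USB Video':format_name=pulse[out1]"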
The latency may be better in Linux (and with a higher-spec PC than mine), so it might be worth a try.