The simplest way to capture raw audio from the audio input for real-time processing on a Mac
What is the simplest way to capture audio from the built-in audio input and be able to read the raw sample values (as in a .wav) in real time as they come in when requested, like reading from a socket?
Hopefully the code would use one of Apple's frameworks (Audio Queues). The documentation is not very clear, and what I need is very basic.
2 Answers
Try the AudioQueue framework for this. You mainly have to perform 3 steps:

1. Set up an audio format that describes how the incoming analog audio should be sampled.
2. Start a new recording AudioQueue with AudioQueueNewInput().
3. Register a callback routine that handles the incoming audio data packets.

In step 3 you have a chance to analyze the incoming audio data with AudioQueueGetProperty().

It's roughly like this:
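A minimal sketch of that setup and callback, assuming 16-bit mono linear PCM at 44.1 kHz (the buffer size and the callback name are just placeholders):

```c
#include <AudioToolbox/AudioToolbox.h>
#include <CoreFoundation/CoreFoundation.h>

// Step 3: input callback -- fires whenever the queue has filled a buffer
// with captured audio.
static void MyInputCallback(void *inUserData,
                            AudioQueueRef inAQ,
                            AudioQueueBufferRef inBuffer,
                            const AudioTimeStamp *inStartTime,
                            UInt32 inNumPackets,
                            const AudioStreamPacketDescription *inPacketDesc)
{
    // The raw 16-bit samples live in inBuffer->mAudioData
    // (inBuffer->mAudioDataByteSize bytes of them).
    SInt16 *samples = (SInt16 *)inBuffer->mAudioData;
    (void)samples; // ... analyze / copy the samples here ...

    // Hand the buffer back to the queue so it can be filled again.
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

int main(void)
{
    // Step 1: describe the format: 44.1 kHz, mono, 16-bit signed linear PCM.
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger
                          | kLinearPCMFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = 2;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = 2;

    // Step 2: create a recording queue that delivers data to MyInputCallback.
    AudioQueueRef queue;
    AudioQueueNewInput(&fmt, MyInputCallback, NULL, NULL, NULL, 0, &queue);

    // Give the queue a few buffers to fill.
    for (int i = 0; i < 3; i++) {
        AudioQueueBufferRef buf;
        AudioQueueAllocateBuffer(queue, 8192, &buf);
        AudioQueueEnqueueBuffer(queue, buf, 0, NULL);
    }

    AudioQueueStart(queue, NULL);
    CFRunLoopRun();              // keep running while buffers arrive

    AudioQueueStop(queue, true);
    AudioQueueDispose(queue, true);
    return 0;
}
```

Build it against the AudioToolbox framework (e.g. `clang capture.c -framework AudioToolbox -framework CoreFoundation`).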
I suggest the Apple AudioQueue Services Programming Guide for detailed information about how to start and stop the AudioQueue and how to set up all the required objects correctly.
You may also take a closer look at Apple's demo program SpeakHere, but IMHO it is a bit confusing to start with.
It depends how "real-time" you need it.
If you need it very crisp, go right down to the bottom level and use Audio Units. That means setting up an INPUT callback. Remember, when this fires you need to allocate your own buffers and then request the audio from the microphone.
I.e. don't get fooled by the presence of a buffer pointer in the parameters... it is only there because Apple uses the same function declaration for the input and render callbacks.
Here is a paste out of one of my projects:
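The original paste is not reproduced here; a minimal sketch of such an input callback, assuming an AUHAL/RemoteIO unit (gInputUnit) that has already been configured for input with a mono 32-bit float stream format (the identifiers are illustrative), might look like this:

```c
#include <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>

// Assumed to be the AUHAL unit configured for input elsewhere in the program.
static AudioUnit gInputUnit;

// Input callback: fires when the hardware has captured inNumberFrames frames.
// ioData is NULL here -- we must supply our own AudioBufferList and pull the
// samples out of the unit ourselves with AudioUnitRender().
static OSStatus InputCallback(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    // One mono buffer of 32-bit float samples (must match your stream format).
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize   = inNumberFrames * sizeof(Float32);
    bufferList.mBuffers[0].mData           = malloc(bufferList.mBuffers[0].mDataByteSize);

    // Ask the audio unit for the freshly captured samples (input is bus 1).
    OSStatus err = AudioUnitRender(gInputUnit, ioActionFlags, inTimeStamp,
                                   inBusNumber, inNumberFrames, &bufferList);
    if (err == noErr) {
        Float32 *samples = (Float32 *)bufferList.mBuffers[0].mData;
        (void)samples; // ... process inNumberFrames samples here ...
    }

    free(bufferList.mBuffers[0].mData);
    return err;
}
```

In a real setup this callback would be registered on the unit with kAudioOutputUnitProperty_SetInputCallback, with input enabled on element 1 via kAudioOutputUnitProperty_EnableIO; a production callback would also preallocate its buffer rather than calling malloc() on the audio thread.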