iOS: threading solution needed for audio processing
I am monitoring the microphone in a remoteIO render callback.
I run a simple algorithm to detect whether an audible signal is present.
If it is, I start recording into a buffer until silence is detected again.
Once silence is detected again, I need to tell my app that there is a buffer of sound ready for processing. The app will then perform the processing.
It is essential that this is done on a different thread! (The remoteIO render callback thread cannot be blocked; it is a real-time thread, and blocking it would make the system stutter.)
I naïvely assumed I could send an NSNotification from my render callback and it would get picked up on a different thread. But this doesn't happen! It executes on the SAME thread.
What is a tidy way to accomplish this?
My feeling is I should probably spawn a separate thread. Even doing the processing on the main thread seems a bit glitchy... it might also take a quarter of a second, which would be enough to cause some UX artefacts.
Comments (1)
I have done a similar thing. The 'simple' answer is that I create a serial dispatch queue, then in the audio unit render callback I grab the data and use dispatch_async to pass the new audio data to the serial queue. I use the secondary queue because audio units require you to spend as little time as possible in the callback - also, you shouldn't malloc memory or generate interrupts and such.
In the code below, lockData and unlockData grab pre-allocated NSMutableData objects and store them in a locked/unlocked array.
In your init method:
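A minimal sketch of that setup (the queue label and variable name are illustrative, not from the original answer):

```objc
// A property or ivar on the class that owns the audio unit:
dispatch_queue_t processingQueue;

// In init: create the serial queue once, up front, so the render
// callback never has to create or allocate anything itself.
processingQueue = dispatch_queue_create("com.example.audio.processing",
                                        DISPATCH_QUEUE_SERIAL);
```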
In the render callback:
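Roughly like this (lockData/unlockData correspond to the helpers described above, but their signatures, the AudioMonitor owner class, and processAudioData: are all assumed here):

```objc
static OSStatus renderCallback(void                       *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp       *inTimeStamp,
                               UInt32                      inBusNumber,
                               UInt32                      inNumberFrames,
                               AudioBufferList            *ioData)
{
    AudioMonitor *monitor = (__bridge AudioMonitor *)inRefCon; // hypothetical owner object

    // Grab a free, pre-allocated buffer and copy the new samples into it -
    // no malloc on the real-time thread. (Assumes ioData already holds the
    // mic samples; with an input callback you would call AudioUnitRender first.)
    NSMutableData *buffer = [monitor lockData];
    memcpy(buffer.mutableBytes,
           ioData->mBuffers[0].mData,
           ioData->mBuffers[0].mDataByteSize);

    // Hand the buffer to the serial queue and return immediately, keeping
    // time spent in the callback to a minimum.
    dispatch_async(monitor.processingQueue, ^{
        [monitor processAudioData:buffer]; // runs off the real-time thread
        [monitor unlockData:buffer];       // return the buffer to the pool
    });

    return noErr;
}
```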
In the serial queue is where I do all the data processing on the audio data; then, when the serial queue detects something it's looking for, it can use something like:
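For example (the notification name and payload key are illustrative):

```objc
// e.g., inside the assumed processAudioData: method, running on the
// serial queue: hop back to the main thread to tell the rest of the
// app that a complete sound buffer is ready.
dispatch_async(dispatch_get_main_queue(), ^{
    [[NSNotificationCenter defaultCenter]
        postNotificationName:@"SoundBufferReadyNotification"
                      object:self
                    userInfo:@{ @"buffer" : buffer }];
});
```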
dispatch_get_main_queue means the block is executed on the main thread, so you can do UI updates and such!