Splitting an Audio Unit stream on the iPhone
I'm developing a means of bringing several sensor signals, modulated together into one signal, into the iPhone through the audio input. I need to do several things:
- Demodulate these signals from the input signal through a trivial filter chain, and then output each down its own signal path for further processing--must be realtime.
- Play back a sonified version of each signal--preferably realtime.
- Stream each signal out over a network connection--preferably realtime.
- Store each signal in a PCM file--need not be realtime.
I need help conceptualising the signal chain in this process. I've begun to sketch the design using Audio Units. First of all, have I gone too low-level by choosing Audio Units? Would this be implementable with Audio Queue Services? In any case, I've got to the point where I have the modulated signal coming in (I have not demodulated it yet), am sonifying it in real time, and am passing the sonified signal back out the output. Now, in order to split this signal into two separate sections of the signal chain, I would imagine doing something like routing the output of my Remote I/O unit into two separate input buses on a Multichannel Mixer Unit, and sonifying/writing-to-disk/writing-to-network in the Multichannel Mixer Unit's callbacks.
However, is this too much processing for a realtime thread? Am I really going to be able to accomplish this, or will I need to pull some of the functionality offline? Second, is it possible to route the I/O Unit's input element's output to separate input elements of a Multichannel Mixer Unit? If not, would I be able to specify a multichannel stream description and split the original signal into separate channels myself?
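Roughly, the capture side I'm imagining looks like the sketch below (a sketch only: `gIOUnit`, `InputCallback`, and the buffer sizes are placeholder names/values, the stream format is assumed to have already been set to 16-bit mono, and error handling is omitted):

```c
#include <AudioToolbox/AudioToolbox.h>

#define kInputBus 1   /* the Remote I/O unit's input element */

static AudioUnit gIOUnit;   /* the Remote I/O unit, created elsewhere */

/* Input callback: pull each freshly captured buffer of modulated samples,
   then fan them out (mixer buses, ring buffers, etc.). */
static OSStatus InputCallback(void                       *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp       *inTimeStamp,
                              UInt32                      inBusNumber,
                              UInt32                      inNumberFrames,
                              AudioBufferList            *ioData)
{
    static SInt16 sampleBuffer[4096];   /* large enough for typical I/O buffer sizes */

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize   = inNumberFrames * sizeof(SInt16);
    bufferList.mBuffers[0].mData           = sampleBuffer;

    OSStatus err = AudioUnitRender(gIOUnit, ioActionFlags, inTimeStamp,
                                   kInputBus, inNumberFrames, &bufferList);
    if (err == noErr) {
        /* hand sampleBuffer off to the demodulation / sonification /
           disk / network paths here */
    }
    return err;
}

static void InstallInputCallback(void)
{
    /* Enable input on the I/O unit's input element. */
    UInt32 enable = 1;
    AudioUnitSetProperty(gIOUnit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, kInputBus,
                         &enable, sizeof(enable));

    /* Register the callback that fires whenever captured samples are available. */
    AURenderCallbackStruct cb = { InputCallback, NULL };
    AudioUnitSetProperty(gIOUnit, kAudioOutputUnitProperty_SetInputCallback,
                         kAudioUnitScope_Global, kInputBus,
                         &cb, sizeof(cb));
}
```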
Multi-channel audio demodulation is certainly possible on an iOS device. It's been done using DSP IIR filter banks, FFT filtering, and so on. The ARM NEON vector unit has more DSP crunching power than many specialized DSP chips of just a few years back.
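For example, each band of such an IIR filter bank could be a chain of biquad sections; a minimal sketch of one direct-form-I biquad is below (the coefficients are placeholders that you would design for the carrier band of interest, and the Accelerate framework offers vectorized filtering routines if you need more throughput):

```c
/* One direct-form-I biquad section; a band-pass filter bank for demodulation
   could chain several of these per channel. Coefficients b0..b2, a1, a2 are
   placeholders, with a0 assumed normalized to 1. */
typedef struct {
    float b0, b1, b2, a1, a2;   /* filter coefficients */
    float x1, x2, y1, y2;       /* previous inputs and outputs (filter state) */
} Biquad;

static float BiquadProcess(Biquad *f, float x)
{
    float y = f->b0 * x + f->b1 * f->x1 + f->b2 * f->x2
            - f->a1 * f->y1 - f->a2 * f->y2;
    f->x2 = f->x1;  f->x1 = x;
    f->y2 = f->y1;  f->y1 = y;
    return y;
}
```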
I suggest using the Audio Unit or Audio Queue services just for data acquisition. Then just queue up the PCM samples and feed them to your DSP processing blocks.
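One common way to do that hand-off (a sketch under my own assumptions, not a complete implementation) is a single-producer/single-consumer ring buffer: the audio callback writes into it without blocking, and a worker thread drains it and runs the demodulation, file writes, and network sends off the realtime thread.

```c
#include <stdatomic.h>
#include <stddef.h>
#include <stdint.h>

#define RING_CAPACITY 16384   /* frames; must be a power of two */

typedef struct {
    int16_t       data[RING_CAPACITY];
    atomic_size_t head;   /* advanced by the producer (audio callback) */
    atomic_size_t tail;   /* advanced by the consumer (worker thread)  */
} PCMRing;                /* zero-initialize before use */

/* Producer side: called from the audio callback; never blocks or allocates. */
static size_t RingWrite(PCMRing *r, const int16_t *src, size_t n)
{
    size_t head  = atomic_load_explicit(&r->head, memory_order_relaxed);
    size_t tail  = atomic_load_explicit(&r->tail, memory_order_acquire);
    size_t space = RING_CAPACITY - (head - tail);
    if (n > space) n = space;              /* drop excess rather than block */
    for (size_t i = 0; i < n; i++)
        r->data[(head + i) & (RING_CAPACITY - 1)] = src[i];
    atomic_store_explicit(&r->head, head + n, memory_order_release);
    return n;
}

/* Consumer side: called from the worker thread. */
static size_t RingRead(PCMRing *r, int16_t *dst, size_t n)
{
    size_t tail  = atomic_load_explicit(&r->tail, memory_order_relaxed);
    size_t head  = atomic_load_explicit(&r->head, memory_order_acquire);
    size_t avail = head - tail;
    if (n > avail) n = avail;
    for (size_t i = 0; i < n; i++)
        dst[i] = r->data[(tail + i) & (RING_CAPACITY - 1)];
    atomic_store_explicit(&r->tail, tail + n, memory_order_release);
    return n;
}
```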
Whether you can stream the data over a network depends on the number of channels, the data rate per channel, data compression ratios, variance in network bandwidth, and so on.
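As a back-of-envelope example (the figures below are assumptions, not measurements): four demodulated channels decimated to 8 kHz at 16-bit PCM come to roughly 62 kB/s uncompressed, which is modest for Wi-Fi but may need compression on a cellular link.

```c
#include <stdio.h>

int main(void)
{
    const int channels       = 4;      /* assumed number of sensor signals     */
    const int sampleRate     = 8000;   /* Hz per channel after decimation      */
    const int bytesPerSample = 2;      /* 16-bit PCM                           */

    double bytesPerSec = (double)channels * sampleRate * bytesPerSample;
    printf("raw stream: %.1f kB/s\n", bytesPerSec / 1024.0);   /* ~62.5 kB/s */
    return 0;
}
```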