iOS Audio Units: when is an AUGraph needed?

Posted on 11-25 03:29


I'm totally new to iOS programming (I'm more of an Android guy..) and have to build an application dealing with audio DSP. (I know it's not the easiest way to approach iOS dev ;) )

The app needs to be able to accept input from both:

1- built-in microphone
2- iPod library

Filters may then be applied to the input sound, and the result is to be output to:

1- Speaker
2- Record to a file

My question is the following: is an AUGraph necessary in order, for example, to apply multiple filters to the input, or can these different effects be applied by processing the samples with different render callbacks?

If I go with AUGraph, do I need: 1 Audio Unit for each input, 1 Audio Unit for the output, and 1 Audio Unit for each effect/filter?

And finally, if I don't, may I have only 1 Audio Unit and reconfigure it to select the source/destination?

Many thanks for your answers! I'm getting lost with this stuff...


Comments (1)

┈┾☆殇 2024-12-02 03:29:31


You may indeed use render callbacks if you wish, but the built-in Audio Units are great (and there are things coming that I can't talk about here yet under NDA and so on; I've said too much. If you have access to the iOS 5 SDK, I recommend you have a look).

You can implement the behavior you want without using AUGraph, but it is recommended that you use one, as it takes care of a lot of things under the hood and saves you time and effort.
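For orientation, every render callback in Core Audio has the same fixed `AURenderCallback` signature. The sketch below is illustrative only (the callback name and the simple gain "filter" are placeholders, not something from the question); it shows the shape such a callback takes, processing the float samples in `ioData` in place:

```c
#include <AudioToolbox/AudioToolbox.h>

// Minimal sketch of a render callback that applies a trivial gain
// "filter" in place. Assumes a non-interleaved float stream format.
static OSStatus MyFilterCallback(void                       *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp       *inTimeStamp,
                                 UInt32                      inBusNumber,
                                 UInt32                      inNumberFrames,
                                 AudioBufferList            *ioData)
{
    const float gain = 0.5f;  // illustrative DSP: attenuate by ~6 dB
    for (UInt32 buf = 0; buf < ioData->mNumberBuffers; buf++) {
        float *samples = (float *)ioData->mBuffers[buf].mData;
        for (UInt32 i = 0; i < inNumberFrames; i++) {
            samples[i] *= gain;
        }
    }
    return noErr;
}
```

A real filter would typically first pull input from upstream (e.g. via `AudioUnitRender` on the I/O unit's input element) before processing, and must never block or allocate on this real-time thread.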

Using AUGraph

From the Audio Unit Hosting Guide (iOS Developer Library):

The AUGraph type adds thread safety to the audio unit story: It enables you to reconfigure a processing chain on the fly. For example, you could safely insert an equalizer, or even swap in a different render callback function for a mixer input, while audio is playing. In fact, the AUGraph type provides the only API in iOS for performing this sort of dynamic reconfiguration in an audio app.
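As a minimal sketch of that kind of on-the-fly change, assuming you already have a running `AUGraph` called `graph`, a mixer node `mixerNode`, and a render callback `MyFilterCallback` of your own (all of those names are placeholders):

```c
#include <AudioToolbox/AudioToolbox.h>

// Swap a render callback onto mixer input bus 0 while audio plays.
// graph, mixerNode, and MyFilterCallback are assumed to exist already.
AURenderCallbackStruct callback;
callback.inputProc       = MyFilterCallback;
callback.inputProcRefCon = NULL;  // or a pointer to your DSP state

AUGraphSetNodeInputCallback(graph, mixerNode, 0, &callback);

// AUGraphUpdate commits pending graph changes safely, even mid-playback.
AUGraphUpdate(graph, NULL);
```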

Choosing A Design Pattern (iOS Developer Library) goes into some detail on how to choose an Audio Unit setup for your app, covering everything from setting up the audio session and the graph to configuring/adding units and writing callbacks.

As for which Audio Units you would want in the graph, in addition to what you already stated, you will want to have a MultiChannel Mixer Unit (see Using Specific Audio Units (iOS Developer Library)) to mix your two audio inputs and then hook up the mixer to the Output unit.
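A minimal sketch of such a graph, with a multichannel mixer feeding a Remote I/O unit (error checking, stream-format setup, and the input-side connections are omitted for brevity):

```c
#include <AudioToolbox/AudioToolbox.h>

// Build the graph: multichannel mixer -> Remote I/O (speaker).
AUGraph graph;
NewAUGraph(&graph);

AudioComponentDescription mixerDesc = {0};
mixerDesc.componentType         = kAudioUnitType_Mixer;
mixerDesc.componentSubType      = kAudioUnitSubType_MultiChannelMixer;
mixerDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

AudioComponentDescription ioDesc = {0};
ioDesc.componentType         = kAudioUnitType_Output;
ioDesc.componentSubType      = kAudioUnitSubType_RemoteIO;
ioDesc.componentManufacturer = kAudioUnitManufacturer_Apple;

AUNode mixerNode, ioNode;
AUGraphAddNode(graph, &mixerDesc, &mixerNode);
AUGraphAddNode(graph, &ioDesc, &ioNode);

AUGraphOpen(graph);  // instantiates the underlying audio units

// Mixer output bus 0 -> I/O unit input element 0.
AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);

AUGraphInitialize(graph);
AUGraphStart(graph);
```

Your two sources (mic input via the Remote I/O input element, iPod library via a file/converter path) would then feed the mixer's input buses, either as connections or as render callbacks.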

Direct Connection

Alternatively, if you were to do it directly without using an AUGraph, the following code is a sample of hooking audio units together yourself. (From Constructing Audio Unit Apps (iOS Developer Library))

You can, alternatively, establish and break connections between audio units directly by using the audio unit property mechanism. To do so, use the AudioUnitSetProperty function along with the kAudioUnitProperty_MakeConnection property, as shown in Listing 2-6. This approach requires that you define an AudioUnitConnection structure for each connection to serve as its property value.

/*Listing 2-6*/
AudioUnitElement mixerUnitOutputBus  = 0;
AudioUnitElement ioUnitOutputElement = 0;

AudioUnitConnection mixerOutToIoUnitIn;
mixerOutToIoUnitIn.sourceAudioUnit    = mixerUnitInstance;
mixerOutToIoUnitIn.sourceOutputNumber = mixerUnitOutputBus;
mixerOutToIoUnitIn.destInputNumber    = ioUnitOutputElement;

AudioUnitSetProperty (
    ioUnitInstance,                     // connection destination
    kAudioUnitProperty_MakeConnection,  // property key
    kAudioUnitScope_Input,              // destination scope
    ioUnitOutputElement,                // destination element
    &mixerOutToIoUnitIn,                // connection definition
    sizeof (mixerOutToIoUnitIn)
);