Core Audio guidance / getting started

Posted on 2024-10-02 11:00:43

I've been doing some reading up on Core Audio for iOS 4, with the aim of building a little test app.

I'm pretty confused at this point in my research by all the APIs. Ideally, what I want to know is how to extract a number of samples from two MP3s into arrays.

Then in a callback loop I want to mix these samples together and send them to the speaker.

There are examples on the Apple dev site, but I'm finding them difficult to dissect and digest. Is anybody aware of a nice stripped-down example somewhere?

I also can't determine which APIs I should be using.

There are ExtendedAudioFile and AudioFile. These seem to be the ones for extracting audio. Which one should I use?

Is it absolutely necessary to use the mixer unit, or would I be as well off writing my own mixing code (I want as much control over the samples as possible)?

Do I need to use Audio Queue Services? I've heard that they have poor latency; is this true?

Finally, do I have to use Audio Session Services? Would an audio app work without them? How does the audio session fit into the whole extraction-and-callback picture? Is it there purely to handle interruptions?

Answers (2)

┾廆蒐ゝ 2024-10-09 11:00:43

The documentation on Core Audio has improved a lot over the past few years, but it's still incomplete, sometimes confusing, and sometimes just wrong. And I find the structure of the framework itself quite confusing (AudioToolbox, AudioUnit, CoreAudio... what is what?).

But my suggestions for tackling your task are these (warning: I haven't done the following on iOS, only on Mac OS X, but I think it's roughly the same):

  1. Use ExtendedAudioFile (declared in the AudioToolbox framework) to read the MP3s. It does just what the name suggests: it extends the capabilities of AudioFile. That is, you can assign an audio stream format (AudioStreamBasicDescription) to an ExtAudioFile, and when you read from it, it will convert into that format for you (for further processing with audio units you use the format ID kAudioFormatLinearPCM and the format flags kAudioFormatFlagsAudioUnitCanonical).

  2. Then, you use ExtAudioFile's ExtAudioFileRead to read the converted audio into an AudioBufferList struct, which is a collection of AudioBuffer structs (both declared in the CoreAudio framework), one for each channel (so usually two). Check out the 'Core Audio Data Types Reference' in the Audio section of the docs for things like AudioStreamBasicDescription, AudioBufferList and AudioBuffer (there is a rough sketch of steps 1 and 2 right after this list).

  3. Now, use audio units to play back and mix the files; it's not that hard. Audio units seem like this 'big thing', but they really aren't. Look into 'AudioUnitProperties.h' and 'AUComponent.h' (in the AudioUnit framework) for descriptions of the available audio units. Check out the 'Audio Unit Hosting Guide for iOS' in the docs. The only problem here is that there is no audio file player unit for iOS... If I remember correctly, you have to feed your audio units with samples manually.

  4. Audio units live in an AUGraph (declared in the AudioToolbox framework) and are interconnected like audio hardware through a patchbay. The graph also handles the audio output for you. You can check out the 'PlaySoftMIDI' and 'MixerHost' example code regarding this (actually, I just had a look into MixerHost again, and I think it's just what you want to do!). The second sketch below shows the graph and callback wiring.
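
To make steps 1 and 2 concrete, here is a minimal, untested sketch (the helper name loadSamples and its signature are my own, not a framework function). For simplicity it asks the ExtAudioFile for interleaved 16-bit PCM instead of the non-interleaved 8.24 canonical format mentioned above, and a real loader would call ExtAudioFileRead in a loop until it returns zero frames:

    #import <Foundation/Foundation.h>
    #import <AudioToolbox/AudioToolbox.h>

    // Hypothetical helper: decodes up to maxFrames frames of an MP3 into a
    // caller-owned array of 16-bit samples. Error checking mostly omitted.
    static UInt32 loadSamples(NSURL *url, SInt16 *outSamples, UInt32 maxFrames)
    {
        ExtAudioFileRef eaf = NULL;
        if (ExtAudioFileOpenURL((CFURLRef)url, &eaf) != noErr)  // add __bridge under ARC
            return 0;

        // The "client" format: what we want to read, not what the file contains.
        // Interleaved signed 16-bit stereo at 44.1 kHz keeps the mixing code simple.
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 44100.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 2;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = fmt.mChannelsPerFrame * sizeof(SInt16);
        fmt.mBytesPerPacket   = fmt.mBytesPerFrame;
        fmt.mFramesPerPacket  = 1;
        ExtAudioFileSetProperty(eaf, kExtAudioFileProperty_ClientDataFormat,
                                sizeof(fmt), &fmt);

        // One buffer because the format is interleaved; a non-interleaved
        // format would use one AudioBuffer per channel instead.
        AudioBufferList bufList;
        bufList.mNumberBuffers = 1;
        bufList.mBuffers[0].mNumberChannels = 2;
        bufList.mBuffers[0].mDataByteSize   = maxFrames * fmt.mBytesPerFrame;
        bufList.mBuffers[0].mData           = outSamples;

        UInt32 frames = maxFrames;        // in: capacity; out: frames actually read
        ExtAudioFileRead(eaf, &frames, &bufList);
        ExtAudioFileDispose(eaf);
        return frames;
    }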
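
And a sketch of steps 3 and 4 in the spirit of MixerHost: a hand-written mixing callback (which also covers your "own mixing code" question, since you touch every sample here) feeding a multichannel mixer that the graph connects to the RemoteIO output. All names except the framework calls are made up, and error checking is omitted:

    // Made-up globals, filled by loadSamples() above for each of the two MP3s.
    static SInt16 *gTrackA, *gTrackB;
    static UInt32  gTotalSamples;   // total interleaved samples per track
    static UInt32  gPlayhead;       // current interleaved sample position

    // Render callback: mixes the two tracks by hand into the output buffer.
    static OSStatus renderCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
    {
        SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;
        for (UInt32 i = 0; i < inNumberFrames * 2; i++) {  // * 2: interleaved stereo
            SInt32 mixed = 0;
            if (gPlayhead + i < gTotalSamples)
                mixed = (SInt32)gTrackA[gPlayhead + i] + (SInt32)gTrackB[gPlayhead + i];
            if (mixed >  32767) mixed =  32767;            // clip instead of wrapping
            if (mixed < -32768) mixed = -32768;
            out[i] = (SInt16)mixed;
        }
        gPlayhead += inNumberFrames * 2;
        return noErr;
    }

    void buildGraph(AudioStreamBasicDescription fmt)  // `fmt` as in the first sketch
    {
        AUGraph graph;
        NewAUGraph(&graph);

        AudioComponentDescription mixerDesc = { kAudioUnitType_Mixer,
            kAudioUnitSubType_MultiChannelMixer, kAudioUnitManufacturer_Apple, 0, 0 };
        AudioComponentDescription ioDesc = { kAudioUnitType_Output,
            kAudioUnitSubType_RemoteIO, kAudioUnitManufacturer_Apple, 0, 0 };

        AUNode mixerNode, ioNode;
        AUGraphAddNode(graph, &mixerDesc, &mixerNode);
        AUGraphAddNode(graph, &ioDesc, &ioNode);
        AUGraphOpen(graph);

        // Mixer output 0 feeds RemoteIO element 0: the graph handles output for you.
        AUGraphConnectNodeInput(graph, mixerNode, 0, ioNode, 0);

        // Our callback supplies mixer input bus 0, which must be told to
        // expect the interleaved 16-bit format the callback produces.
        AURenderCallbackStruct cb = { renderCallback, NULL };
        AUGraphSetNodeInputCallback(graph, mixerNode, 0, &cb);

        AudioUnit mixerUnit;
        AUGraphNodeInfo(graph, mixerNode, NULL, &mixerUnit);
        AudioUnitSetProperty(mixerUnit, kAudioUnitProperty_StreamFormat,
                             kAudioUnitScope_Input, 0, &fmt, sizeof(fmt));

        AUGraphInitialize(graph);
        AUGraphStart(graph);
    }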

A rule of thumb: Look into the header files! They yield more complete and precise information than the docs, at least that was my impression. It can help a lot to look at the headers of the above mentioned frameworks and try to get familiar with them.

Also, there will be a book about Core Audio ('Core Audio' by Kevin Avila and Chris Adamson), but it's not yet released.

Hope all this helps a little! Good luck,
Sebastian

孤君无依 2024-10-09 11:00:43

"There are ExtendedAudioFile and AudioFile. These seem to be the ones for extracting audio. Which one should I use?"

Neither of those will work if you are accessing audio files stored in the iPod library; you will have to use AVAssetReader. (Note: the AVAssetReader documentation states that AVAssetReader is not intended for use with real-time sources, and that its performance is not guaranteed for real-time operations. All I can say is that it worked fine for me, and I've created several real-time applications using just AVAssetReader; here is a sample.)
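
To illustrate, here is a rough, untested sketch of that AVAssetReader route; it assumes assetURL was obtained from an MPMediaItem via MPMediaItemPropertyAssetURL, and the output settings ask AVFoundation for plain interleaved 16-bit PCM:

    #import <AVFoundation/AVFoundation.h>

    // assetURL is assumed to come from MPMediaItemPropertyAssetURL.
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
    NSError *error = nil;
    AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];

    // Ask for decoded, interleaved 16-bit linear PCM instead of the native encoding.
    NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
        [NSNumber numberWithFloat:44100.0f],            AVSampleRateKey,
        [NSNumber numberWithInt:2],                     AVNumberOfChannelsKey,
        [NSNumber numberWithInt:16],                    AVLinearPCMBitDepthKey,
        [NSNumber numberWithBool:NO],                   AVLinearPCMIsFloatKey,
        [NSNumber numberWithBool:NO],                   AVLinearPCMIsBigEndianKey,
        [NSNumber numberWithBool:NO],                   AVLinearPCMIsNonInterleaved,
        nil];

    AVAssetTrack *track =
        [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
    AVAssetReaderTrackOutput *output =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                                   outputSettings:settings];
    [reader addOutput:output];
    [reader startReading];

    while (reader.status == AVAssetReaderStatusReading) {
        CMSampleBufferRef sample = [output copyNextSampleBuffer];
        if (!sample) break;
        CMBlockBufferRef block = CMSampleBufferGetDataBuffer(sample);
        size_t length = CMBlockBufferGetDataLength(block);
        // Append `length` bytes of raw PCM to your sample array here, e.g. with
        // CMBlockBufferCopyDataBytes(block, 0, length, destination);
        CFRelease(sample);
    }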

Please see my answer here for more general tips on iOS audio programming as well.

Finally, the book Learning Core Audio (https://rads.stackoverflow.com/amzn/click/com/0321636848) is obviously released by now. I strongly recommend you patiently go through the chapters and play with the sample code. It's best to take your time with the examples and let the concepts sink in before you jump into more complex scenarios. Copying and pasting sample code from the web and/or following high-level advice from people on the web may work at the beginning, but later on you'll run into really hairy problems that no one else will help you fix. Trust me, I learned the hard way!
