iOS: Audio Units, OpenAL, and Core Audio
Could someone explain to me how OpenAL fits in with the schema of sound on the iPhone?
There seem to be APIs at different levels for handling sound. The higher-level ones are easy enough to understand.
But my understanding gets murky towards the bottom. There are Core Audio, Audio Units, and OpenAL.
What is the connection between these? Is OpenAL the substratum, upon which rests Core Audio (which contains Audio Units as one of its lower-level objects)?
OpenAL doesn't seem to be documented in Xcode, yet I can run code that uses its functions.
2 Answers
Core Audio covers a lot of things, such as reading and writing various file formats, converting between encodings, pulling frames out of streams, etc. Much of this functionality is collected as the "Audio Toolbox". Core Audio also offers multiple APIs for processing streams of audio, for playback, capture, or both. The lowest-level one is Audio Units, which works with uncompressed (PCM) audio and has some nice stuff for applying effects, mixing, etc. Audio Queues, implemented atop Audio Units, are a lot easier because they work with compressed formats (not just PCM) and save you from some threading challenges. OpenAL is also implemented atop Audio Units; you still have to use PCM, but at least the threading isn't scary. The difference is that since it's not from Apple, its programming conventions are totally different from Core Audio and the rest of iOS (most obviously, it's a push API: if you want to stream with OpenAL, you poll your sources to see if they've exhausted their buffers and push in new ones; by contrast, Audio Queues and Audio Units are pull-based, in that you get a callback when new samples are needed for playback).
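To make the pull-versus-push contrast concrete, here is a minimal sketch in C, with error handling omitted. The Audio Queue half shows the pull model (the system invokes your callback when it needs samples); the OpenAL half shows the push model (you poll the source and re-queue buffers yourself). The `fillPCM` helper is hypothetical, standing in for whatever produces your uncompressed samples.

```c
#include <AudioToolbox/AudioToolbox.h>
#include <OpenAL/al.h>

/* Hypothetical helper: writes up to `capacity` bytes of PCM into `dst`
   and returns the number of bytes actually written. */
extern UInt32 fillPCM(void *dst, UInt32 capacity);

/* Pull model (Audio Queues): the system calls you back whenever it
   needs more audio; you refill the buffer and hand it back. */
static void aqOutputCallback(void *userData, AudioQueueRef queue,
                             AudioQueueBufferRef buffer) {
    buffer->mAudioDataByteSize =
        fillPCM(buffer->mAudioData, buffer->mAudioDataBytesCapacity);
    AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
}

/* Push model (OpenAL): nobody calls you. You poll the source, reclaim
   any buffers it has finished playing, refill them, and queue them
   again. */
static void pumpSource(ALuint source, ALenum format, ALsizei rate) {
    ALint processed = 0;
    alGetSourcei(source, AL_BUFFERS_PROCESSED, &processed);
    while (processed-- > 0) {
        ALuint buf;
        char pcm[4096];
        alSourceUnqueueBuffers(source, 1, &buf);
        UInt32 n = fillPCM(pcm, sizeof pcm);
        alBufferData(buf, format, pcm, (ALsizei)n, rate);
        alSourceQueueBuffers(source, 1, &buf);
    }
}
```

You would call `pumpSource` periodically, say from a timer or a game loop, which is exactly the polling chore described above.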
At a higher level, as you've seen, there's nice stuff like Media Player and AV Foundation. These are a lot easier if you're just playing a file, but probably aren't going to give you deep enough access if you want to do some kind of effects, signal processing, etc.
This is what I have figured out:
The substratum is Core Audio. Specifically, Audio Units.
So Audio Units form the base layer, and some low-level frameworks have been built on top of them. The whole caboodle is termed Core Audio.
OpenAL is a multiplatform API -- the creators are trying to mirror the portability of OpenGL. A few companies are sponsoring OpenAL, including Creative Labs and Apple!
So Apple has provided this API, basically as a thin wrapper over Core Audio. I am guessing this is to allow developers to pull over code easily. Be warned that it is an incomplete implementation: if you want OpenAL to do something that Core Audio can do, it will do it, but otherwise it won't.
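For a sense of what that thin wrapper looks like in use, here is a minimal sketch of one-shot playback using stock OpenAL 1.1 calls. Error checking and teardown are omitted, and the `samples`/`numBytes` arguments stand in for PCM data you have already loaded.

```c
#include <OpenAL/al.h>
#include <OpenAL/alc.h>

/* Minimal one-shot playback through the OpenAL wrapper. `samples` is
   assumed to be mono 16-bit PCM at 44.1 kHz that you have already
   loaded; error checking and cleanup are omitted. */
void playOnce(const void *samples, ALsizei numBytes) {
    ALCdevice  *device  = alcOpenDevice(NULL);      /* default device */
    ALCcontext *context = alcCreateContext(device, NULL);
    alcMakeContextCurrent(context);

    ALuint buffer, source;
    alGenBuffers(1, &buffer);
    alBufferData(buffer, AL_FORMAT_MONO16, samples, numBytes, 44100);

    alGenSources(1, &source);
    alSourcei(source, AL_BUFFER, (ALint)buffer);
    alSourcePlay(source);  /* on iOS this ultimately drives a Core
                              Audio Audio Unit, per the layering above */
}
```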
Kind of counterintuitive -- just looking at the source, it looks as if OpenAL is lower level. Not so!