Where should I start with an audio synthesizer for iOS?


I know this is a very broad topic, but I've been floundering around with demos and my own tests and am not sure if I'm attacking the problem correctly. So any leads on where I should start would be appreciated.

The goal is to have the app generate some synthesized sounds, per the user's settings. (This isn't the only app function; I'm not recreating Korg here, but synth is part of it.) The user would set the typical synth settings like waveform, reverb, etc., then would pick when the note plays, probably with a pitch and velocity modifier.

I've played around a bit with Audio Units and RemoteIO, but only barely understand what I'm doing. Before I go too far down that rabbit hole, I'd like to know if I'm even in the right ballpark. I know audio synthesis is going to be low level, but I'm hoping there are some higher-level libraries out there that I can use.

If you have any pointers on where to start, and which iOS technology I should be reading about more, please let me know.

Thanks!

EDIT: let me better summarize the questions.

Are there any synth libraries already built for iOS? (commercial or Open Source - I haven't found any with numerous searches, but maybe I'm missing it.)

Are there any higher-level APIs that make it easier to generate buffers?

Assuming that I can already generate buffers, is there a better / easier way to submit those buffers to the iOS audio device than the RemoteIO Audio Unit?


Answers (10)

美人如玉 2024-10-16 18:00:36


This is a really good question. I sometimes ask myself the same things, and I always end up using the MoMu Toolkit from the guys at Stanford. This library provides a nice callback function that connects to AudioUnits/AudioToolbox (not sure which), so that all you have to worry about is setting the sample rate, the buffer size, and the bit depth of the audio samples; you can then easily synthesize/process anything you like inside the callback function.
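
For a sense of what the body of such a callback ends up doing, here is a minimal sketch in plain C that fills an interleaved stereo float buffer with a naive sawtooth. The function name and signature are made up for illustration; MoMu's actual callback looks different, but the per-buffer synthesis work is the same idea.

    /* Hypothetical fill routine: interleaved floats, one naive sawtooth.
       phase is carried between calls so the waveform stays continuous. */
    void renderSaw(float *buffer, int numFrames, int numChannels,
                   float freq, float sampleRate, float *phase)
    {
        float step = freq / sampleRate;                      /* phase increment per sample (0..1) */
        for (int i = 0; i < numFrames; i++) {
            float sample = 0.2f * (2.0f * (*phase) - 1.0f);  /* saw scaled to about -0.2..0.2 */
            for (int ch = 0; ch < numChannels; ch++)
                buffer[i * numChannels + ch] = sample;       /* same sample on every channel */
            *phase += step;
            if (*phase >= 1.0f)
                *phase -= 1.0f;                              /* wrap the phase */
        }
    }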

I also recommend the Synthesis ToolKit (STK) for iOS, likewise released by Ge Wang at Stanford. Really cool stuff for synthesizing/processing audio.

Every time Apple releases a new iOS version I check the new documentation in order to find a better (or simpler) way to synthesize audio, but always with no luck.

EDIT: I want to add a link to the AudioGraph source code: https://github.com/tkzic/audiograph This is a really interesting app, made by Tom Zicarelli, that shows the potential of AudioUnits. The code is really easy to follow, and a great way to learn about this (some would say convoluted) process of dealing with low-level audio in iOS.

一页 2024-10-16 18:00:36


Swift & Objective C

There's a great open source project that is well documented with videos and tutorials for both Objective-C & Swift.

AudioKit.io

奶茶白久 2024-10-16 18:00:36


The lowest-level way to get buffers to the sound card is through the Audio Unit API, and particularly the RemoteIO audio unit. It is a bunch of gibberish, but there are a few examples scattered around the web; http://atastypixel.com/blog/using-remoteio-audio-unit/ is one.
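
For reference, here is a rough sketch of what setting up RemoteIO with a render callback looks like. Error checking is omitted, and the mono 44.1 kHz float format and 440 Hz sine are just example choices.

    #include <AudioToolbox/AudioToolbox.h>
    #include <math.h>

    typedef struct { double phase; double phaseStep; } SineState;

    /* Render callback: runs on the real-time audio thread whenever the
       hardware needs more samples. Fill ioData with synthesized audio. */
    static OSStatus RenderSine(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
    {
        SineState *s = (SineState *)inRefCon;
        float *out = (float *)ioData->mBuffers[0].mData;
        for (UInt32 i = 0; i < inNumberFrames; i++) {
            out[i] = 0.25f * (float)sin(s->phase);
            s->phase += s->phaseStep;
            if (s->phase > 2.0 * M_PI) s->phase -= 2.0 * M_PI;
        }
        return noErr;
    }

    void startRemoteIO(void)
    {
        static SineState sine = { 0.0, 2.0 * M_PI * 440.0 / 44100.0 };

        /* Find and instantiate the RemoteIO audio unit. */
        AudioComponentDescription desc = {
            .componentType = kAudioUnitType_Output,
            .componentSubType = kAudioUnitSubType_RemoteIO,
            .componentManufacturer = kAudioUnitManufacturer_Apple
        };
        AudioUnit ioUnit;
        AudioComponentInstanceNew(AudioComponentFindNext(NULL, &desc), &ioUnit);

        /* Tell the output bus what we will feed it: mono 32-bit float PCM. */
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 44100.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 1;
        fmt.mBitsPerChannel   = 32;
        fmt.mBytesPerFrame    = sizeof(float);
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = sizeof(float);
        AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
                             kAudioUnitScope_Input, 0, &fmt, sizeof(fmt));

        /* Install the render callback on bus 0 and start the unit. */
        AURenderCallbackStruct cb = { .inputProc = RenderSine, .inputProcRefCon = &sine };
        AudioUnitSetProperty(ioUnit, kAudioUnitProperty_SetRenderCallback,
                             kAudioUnitScope_Input, 0, &cb, sizeof(cb));
        AudioUnitInitialize(ioUnit);
        AudioOutputUnitStart(ioUnit);
    }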

I imagine there are other ways to fill buffers, for example using the AVFoundation framework, but I have never done it that way.

The other way to do it is to use openFrameworks for all of your audio stuff, but that also assumes that you want to do your drawing in OpenGL. Tearing out the audio unit implementation shouldn't be too much of an issue, though, if you want to do your drawing some other way. This particular implementation is nice because it casts everything to -1..1 floats for you to fill up.

Finally, if you want a jump start on a bunch of oscillators / filters / delay lines that you can hook into the openframeworks audio system (or any system that uses arrays of -1..1 floats) you might want to check out http://www.maximilian.strangeloop.co.uk.

趁微风不噪 2024-10-16 18:00:36


There are two parts to this: firstly you need to generate buffers of synthesised audio - this is pretty much platform-agnostic and you'll need a good understanding of audio synthesis to write this part. The second part is passing these buffers to an appropriate OS-specific API so that the sound actually gets played. Most APIs for audio playback support double buffering or even multiple buffers so that you can synthesise future buffers while playing the current buffer. As to which iOS API to use, that will probably depend on what kind of overall architecture you have for your app, but this is really the easy part. The synthesis part is where you'll need to do most of the work.
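
To make the second part concrete, here is a rough sketch of handing buffers to an Audio Queue, which keeps several buffers in flight so the next one can be synthesized while the current one plays. Error checking is omitted, and the buffer count, format, and 440 Hz sine are arbitrary example values.

    #include <AudioToolbox/AudioToolbox.h>
    #include <math.h>

    typedef struct { double phase; double phaseStep; } SynthState;

    /* Output callback: refill the buffer that just finished playing and
       hand it straight back to the queue. */
    static void FillBuffer(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
    {
        SynthState *s = (SynthState *)inUserData;
        float *out = (float *)inBuffer->mAudioData;
        UInt32 frames = inBuffer->mAudioDataBytesCapacity / sizeof(float);
        for (UInt32 i = 0; i < frames; i++) {
            out[i] = 0.25f * (float)sin(s->phase);
            s->phase += s->phaseStep;
            if (s->phase > 2.0 * M_PI) s->phase -= 2.0 * M_PI;
        }
        inBuffer->mAudioDataByteSize = frames * sizeof(float);
        AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    }

    void startSineQueue(void)
    {
        static SynthState synth = { 0.0, 2.0 * M_PI * 440.0 / 44100.0 };

        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 44100.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 1;
        fmt.mBitsPerChannel   = 32;
        fmt.mBytesPerFrame    = sizeof(float);
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = sizeof(float);

        AudioQueueRef queue;
        AudioQueueNewOutput(&fmt, FillBuffer, &synth, NULL, NULL, 0, &queue);

        /* Prime a few buffers so synthesis stays ahead of playback. */
        for (int i = 0; i < 3; i++) {
            AudioQueueBufferRef buf;
            AudioQueueAllocateBuffer(queue, 1024 * sizeof(float), &buf);
            FillBuffer(&synth, queue, buf);
        }
        AudioQueueStart(queue, NULL);
    }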

梦忆晨望 2024-10-16 18:00:36


I know this is a little old, but this seems like the wrong approach to me: what you should probably be doing is finding an audio unit synthesizer that models the kind of changes you want to make. There are many of them, some open source, others possibly licensable, and you can host the audio units from your own code. The mechanisms described above would work just fine, but they're not really going to be optimized for the iOS platform.

醉南桥 2024-10-16 18:00:36


I know this topic is old, and I'm amazed that the situation on iOS still hasn't improved when it comes to audio.

However, there's a silver lining on the horizon: iOS 6 supports the Web Audio API. I managed to get a nice polyphonic synth going with barely a couple of lines of JavaScript. At least basic building blocks like oscillators are available out of the box:

https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html

and (just to pick one example out of many)

御弟哥哥 2024-10-16 18:00:36


I know this is an old post, but check out The Amazing Audio Engine.

The Amazing Audio Engine is a sophisticated framework for iOS audio applications, built so you don't have to. It is designed to be very easy to work with, and handles all of the intricacies of iOS audio on your behalf.

This came from the developer of AudioBus for iOS.

拥有 2024-10-16 18:00:36


Basically it is going to be a toss-up between Audio Queues and Audio Units. If you need to get close to real time, for example if you need to process microphone input, Audio Units are the way to achieve minimum latency.

However, there is a limit to how much processing you can do inside the render callback: a chunk of data arrives on an ultra-high-priority system thread, and if you try to do too much in that thread, it will chug the whole OS.

So you need to code smart inside this callback. There are a few pitfalls, like using NSLog, or accessing properties of another object that were declared without nonatomic (i.e. they will implicitly create locks).
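
One common way to "code smart" here is to hand parameters to the render callback through plain C atomics rather than Objective-C properties or mutexes. A minimal sketch, with made-up struct and function names, might look like this:

    #include <stdatomic.h>

    /* Parameters shared between the UI thread and the render callback.
       Lock-free atomics avoid the implicit locks that atomic Objective-C
       properties (or explicit mutexes) would put on the audio thread. */
    typedef struct {
        _Atomic float frequency;
        _Atomic float amplitude;
    } SynthParams;

    /* UI thread: update a parameter without blocking the audio thread. */
    static void setFrequency(SynthParams *p, float hz)
    {
        atomic_store_explicit(&p->frequency, hz, memory_order_relaxed);
    }

    /* Render callback: read the latest values. No locks, no allocation, no NSLog. */
    static void readParams(SynthParams *p, float *freq, float *amp)
    {
        *freq = atomic_load_explicit(&p->frequency, memory_order_relaxed);
        *amp  = atomic_load_explicit(&p->amplitude, memory_order_relaxed);
    }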

This is the main reason Apple built a higher-level framework (AQ, Audio Queues) to take this tricky low-level business out of your hands. AQ lets you receive, process, and spit out audio buffers on a thread where it doesn't matter if you cause latency.

However, you can get away with a lot of processing, especially if you're using the Accelerate framework to speed up your mathematical manipulations.
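
For example, a buffer of sine samples can be produced with a couple of vectorized Accelerate calls instead of a per-sample loop. This is only a sketch; the function name and the caller-provided scratch buffer are illustrative conventions, not anything Apple prescribes.

    #include <Accelerate/Accelerate.h>
    #include <math.h>

    /* Render n sine samples into out using vDSP/vForce. scratch must hold
       at least n floats; phase carries the oscillator state between calls. */
    void renderSineVectorized(float *out, float *scratch, int n,
                              float freq, float sampleRate, float *phase)
    {
        float step = 2.0f * (float)M_PI * freq / sampleRate;
        const float gain = 0.25f;

        vDSP_vramp(phase, &step, out, 1, (vDSP_Length)n);      /* out[i]     = *phase + i*step   */
        vvsinf(scratch, out, &n);                               /* scratch[i] = sin(out[i])       */
        vDSP_vsmul(scratch, 1, &gain, out, 1, (vDSP_Length)n);  /* out[i]     = gain * scratch[i] */

        *phase = fmodf(*phase + step * (float)n, 2.0f * (float)M_PI);
    }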

In fact, just go with Audio Units: start with the link jonbro gave you. Even though AQ is a higher-level framework, it is more of a headache to use, and the RemoteIO audio unit is the right tool for this job.

旧伤慢歌 2024-10-16 18:00:36


I have been using the audio output example from openFrameworks and the Stanford STK synthesis library to work on my iOS synth application.

小忆控 2024-10-16 18:00:36


I have been experimenting with the Tonic audio synthesis library: clean, easy-to-understand code with ready-to-compile macOS and iOS examples.

At some point I started generating my own buffers from scratch with simple C code, to do basic stuff like sine generators, ADSRs, and delays, which was very satisfying to experiment with.
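
To give an idea of how little code that takes, here is a minimal linear ADSR in plain C. This is not the code from that experiment, just the general shape of such an envelope.

    /* Minimal linear ADSR envelope, stepped once per sample.
       Times are in seconds and assumed to be greater than zero. */
    typedef enum { ENV_IDLE, ENV_ATTACK, ENV_DECAY, ENV_SUSTAIN, ENV_RELEASE } EnvStage;

    typedef struct {
        EnvStage stage;
        float level;         /* current envelope value, 0..1          */
        float attackStep;    /* added per sample while attacking      */
        float decayStep;     /* subtracted per sample while decaying  */
        float sustainLevel;  /* level held while the note is down     */
        float releaseStep;   /* subtracted per sample while releasing */
    } ADSR;

    void adsrInit(ADSR *e, float sampleRate,
                  float attackSec, float decaySec, float sustain, float releaseSec)
    {
        e->stage        = ENV_IDLE;
        e->level        = 0.0f;
        e->attackStep   = 1.0f / (attackSec * sampleRate);
        e->decayStep    = (1.0f - sustain) / (decaySec * sampleRate);
        e->sustainLevel = sustain;
        e->releaseStep  = sustain / (releaseSec * sampleRate);
    }

    void adsrNoteOn(ADSR *e)  { e->stage = ENV_ATTACK; }
    void adsrNoteOff(ADSR *e) { e->stage = ENV_RELEASE; }

    /* Advance the envelope one sample and return its current value;
       multiply an oscillator's output by this to shape the note. */
    float adsrNext(ADSR *e)
    {
        switch (e->stage) {
        case ENV_ATTACK:
            e->level += e->attackStep;
            if (e->level >= 1.0f) { e->level = 1.0f; e->stage = ENV_DECAY; }
            break;
        case ENV_DECAY:
            e->level -= e->decayStep;
            if (e->level <= e->sustainLevel) { e->level = e->sustainLevel; e->stage = ENV_SUSTAIN; }
            break;
        case ENV_RELEASE:
            e->level -= e->releaseStep;
            if (e->level <= 0.0f) { e->level = 0.0f; e->stage = ENV_IDLE; }
            break;
        default:
            break;
        }
        return e->level;
    }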

I pushed my float arrays to speakers using Tonic's counterpart, Novocaine.

For example, 256k uses these for all the music it generates.

Just recently I found AVAudioUnitSampler, a super-easy way to play back sample-based audio at different pitches with low latency.
