Generating multiple sine waves into an audio unit's sample buffer at once (iOS)
Given an array (of changing length) of frequencies and amplitudes, can I generate a single audio buffer on a sample-by-sample basis that includes all the tones in the array? If not, what is the best way to generate multiple tones in a single audio unit? Have each note generate its own buffer and then sum those into an output buffer? Wouldn't that be the same thing as doing it all at once?
I'm working on an iOS app that generates notes from touches. I'm considering using STK, but I don't want to have to send note-off messages; I'd rather just generate sinusoidal tones for the notes I'm holding in an array. Each note actually needs to produce two sinusoids, with varying frequency and amplitude. One note may be playing the same frequency as a different note, so a note-off message at that frequency could cause problems. Eventually I want to manage amplitude (ADSR) envelopes for each note outside of the audio unit. I also want response time to be as fast as possible, so I'm willing to do some extra work/learning to keep the audio stuff as low level as I can.
I've been working with single-tone sine wave generator examples. I tried essentially doubling one of these, something like:
Buffer[frame] = (sin(theta1) + sin(theta2))/2
Incrementing theta1/theta2 by frequency1/frequency2 over the sample rate (I realize calling sin() like this is not the most efficient approach), but I get aliasing effects. I've yet to find an example with multiple frequencies or data sources other than reading audio from a file.
Any suggestions/examples? I originally had each note generate its own audio unit, but that gave me too much latency from touch to note sounding (and it seems inefficient too). I'm newer to this level of programming than I am to digital audio in general, so please be gentle if I'm missing something obvious.
Yes, of course you can; you can do whatever you like inside your render callback. When you set this callback up, you can pass in a pointer to an object.
That object could contain the on/off states for each tone. In fact, the object could contain a method responsible for filling up the buffer. (Just make sure the object is nonatomic if it is a property, otherwise you will get artefacts due to locking issues.)
What exactly are you trying to achieve? Do you really need to generate on the fly?
If so, you run the risk of overloading the remoteIO audio unit's render callback, which will give you glitches and artefacts.
You might get away with it on the simulator, then move it over to a device and find that mysteriously it isn't working any more, because you are running on something like 50 times less processor, and one callback cannot complete before the next one arrives.
Having said that, you can get away with a lot.
I have made a 12-tone player that can simultaneously play any number of individual tones.
All I do is keep a ring buffer for each tone (I am using quite a complex waveform, so computing it takes a lot of time; in fact I calculate it the first time the application runs and subsequently load it from file), and maintain a read head and an enabled flag for each ring.
Then I add everything up in the render callback, and this performs fine on the device, even with all 12 playing together. I know the documentation tells you not to do this; it recommends only using this callback to fill one buffer from another. But you can get away with a lot, and it is a PITA to code up some sort of buffering system that calculates on a different thread.