AUGraph mixing & rendering to a file as a background task? (Objective-C)
I'm currently working on a pretty straightforward app that records the user's singing voice using AVAudioRecorder and processes it with an AUGraph (based on the iPhoneMixerEQGraphTest sample), which applies an effect to the voice and eventually merges the song and the voice.
The only problem I have now is that I record first and process afterwards. However, I don't want the user to have to listen through the whole song plus his singing just to render it to a file.
My questions are:
- Is there a way to let the AUGraph render in the background (using CAAudioUnitOutputCapturer.h)? That would be faster than realtime, with no output through the speakers.
- Or is there a way to mix in the microphone audio immediately as an AudioUnit, so that only the music, not the microphone, is output through the speakers?
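(For reference, what I have in mind for the first option is roughly the following sketch. It assumes the graph's output node uses kAudioUnitSubType_GenericOutput instead of RemoteIO, so nothing reaches the speakers, and that an ExtAudioFile has already been opened for writing. The function name RenderGraphOffline and the surrounding setup are my own; error handling is trimmed for brevity.)

```c
// Offline ("faster than realtime") rendering of an AUGraph:
// with a generic output unit at the head of the graph, we pull audio
// ourselves with AudioUnitRender in a loop and write each slice to disk.
#include <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>

static void RenderGraphOffline(AudioUnit genericOutput,            // graph's output AU
                               ExtAudioFileRef outFile,            // already created
                               AudioStreamBasicDescription fmt,    // client format
                               UInt64 totalFrames)
{
    const UInt32 framesPerSlice = 512;

    // Drive time manually instead of letting the hardware clock do it.
    AudioTimeStamp ts = {0};
    ts.mFlags = kAudioTimeStampSampleTimeValid;
    ts.mSampleTime = 0;

    for (UInt64 rendered = 0; rendered < totalFrames; ) {
        UInt64 remaining = totalFrames - rendered;
        UInt32 frames = (remaining < framesPerSlice)
                      ? (UInt32)remaining : framesPerSlice;

        // One interleaved buffer; a non-interleaved format would need
        // one AudioBuffer per channel instead.
        AudioBufferList bufList;
        bufList.mNumberBuffers = 1;
        bufList.mBuffers[0].mNumberChannels = fmt.mChannelsPerFrame;
        bufList.mBuffers[0].mDataByteSize  = frames * fmt.mBytesPerFrame;
        bufList.mBuffers[0].mData = calloc(1, bufList.mBuffers[0].mDataByteSize);

        AudioUnitRenderActionFlags flags = 0;
        OSStatus err = AudioUnitRender(genericOutput, &flags, &ts,
                                       0 /* output bus */, frames, &bufList);
        if (err == noErr)
            ExtAudioFileWrite(outFile, frames, &bufList);

        free(bufList.mBuffers[0].mData);
        ts.mSampleTime += frames;
        rendered += frames;
    }
}
```

The loop runs as fast as the processing chain allows, which is exactly the non-realtime behaviour I'm after; I just don't know whether an AUGraph built like the sample tolerates being pulled this way.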
Cheers,
M0rph3v5
Answers (1)
Eventually I just got an external company to do the audio processing. They supplied a library that outputs the microphone through the speakers, throws some EQ over it, and eventually merges everything.
Core Audio's a bitch :(