iOS: Sample code for simultaneous record and play
I'm designing a simple proof of concept for a multitrack recorder.
The obvious starting point is to play from file A.caf to headphones while simultaneously recording microphone input into file B.caf.
This question -- Record and play audio Simultaneously -- points out that there are three levels at which I can work:
- AVFoundation API (AVAudioPlayer + AVAudioRecorder)
- Audio Queue API
- Audio Unit API (RemoteIO)
What is the best level to work at? Obviously the generic answer is to work at the highest level that gets the job done, which would be AVFoundation.
But I'm taking this job on from someone who gave up due to latency issues (he was getting a 0.3-second delay between the files), so maybe I need to work at a lower level to avoid these issues?
Furthermore, what source code is available to springboard from? I have been looking at the SpeakHere sample (http://developer.apple.com/library/ios/#samplecode/SpeakHere/Introduction/Intro.html). If I can't find something simpler, I will use this.
But can anyone suggest something simpler/else? I would rather not work with C++ code if I can avoid it.
Is anyone aware of some public code that uses AVFoundation to do this?
EDIT: AVFoundation example here: http://www.iphoneam.com/blog/index.php?title=using-the-iphone-to-record-audio-a-guide&more=1&c=1&tb=1&pb=1
EDIT(2): Much nicer looking one here: http://www.switchonthecode.com/tutorials/create-a-basic-iphone-audio-player-with-av-foundation-framework
EDIT(3): How do I record audio on iPhone with AVAudioRecorder?
To avoid latency issues, you will have to work at a lower level than AVFoundation. Check out Apple's aurioTouch sample code; it uses Remote I/O.
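For context, a minimal sketch of bringing up a Remote I/O unit (the approach aurioTouch takes) might look like the following. The render callback body, error handling, and stream-format setup are omitted, and the function names here are illustrative, not taken from the sample:

```objc
#import <AudioUnit/AudioUnit.h>

// Render callback: on each hardware cycle, pull mic samples (bus 1, via
// AudioUnitRender) and fill ioData with the playback samples for bus 0.
static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    // ... mixing/recording work goes here ...
    return noErr;
}

static void SetUpRemoteIO(void)
{
    // Find and instantiate the Remote I/O audio unit.
    AudioComponentDescription desc = {0};
    desc.componentType         = kAudioUnitType_Output;
    desc.componentSubType      = kAudioUnitSubType_RemoteIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit ioUnit = NULL;
    AudioComponentInstanceNew(comp, &ioUnit);

    // Enable input on element 1 (off by default); output on element 0 is already on.
    UInt32 enable = 1;
    AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &enable, sizeof(enable));

    // Install the render callback that drives output element 0.
    AURenderCallbackStruct cb = { RenderCallback, NULL };
    AudioUnitSetProperty(ioUnit, kAudioUnitProperty_SetRenderCallback,
                         kAudioUnitScope_Input, 0, &cb, sizeof(cb));

    AudioUnitInitialize(ioUnit);
    AudioOutputUnitStart(ioUnit);
}
```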
As suggested by Viraj, here is the answer.
Yes, you can achieve very good results using AVFoundation. First, note that for both the player and the recorder, activation is a two-step process.
First you prime it.
Then you play it.
So, prime everything. Then play everything.
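A minimal sketch of that sequence, assuming ARC, a play-and-record session category, and linear-PCM recorder settings (the URLs and the primeAndStart name are placeholders):

```objc
#import <AVFoundation/AVFoundation.h>

// Sketch: play backingURL (A.caf) to headphones while recording the mic to takeURL (B.caf).
// In real code, keep strong references to the player and recorder so they outlive this call.
void primeAndStart(NSURL *backingURL, NSURL *takeURL)
{
    NSError *error = nil;

    // Route audio so playback and capture can run at the same time.
    AVAudioSession *session = [AVAudioSession sharedInstance];
    [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    [session setActive:YES error:&error];

    // Assumed recorder settings: 44.1 kHz mono linear PCM.
    NSDictionary *settings = @{ AVFormatIDKey         : @(kAudioFormatLinearPCM),
                                AVSampleRateKey       : @44100.0,
                                AVNumberOfChannelsKey : @1 };

    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:backingURL
                                                                   error:&error];
    AVAudioRecorder *recorder = [[AVAudioRecorder alloc] initWithURL:takeURL
                                                            settings:settings
                                                               error:&error];

    // Step 1: prime everything (allocates buffers, spins up the hardware).
    [player prepareToPlay];
    [recorder prepareToRecord];

    // Step 2: start everything.
    [player play];
    [recorder record];
}
```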
This will get your latency down to about 70ms. I tested by recording a metronome tick, then playing it back through the speakers while holding the iPhone up to the speakers and simultaneously recording.
The second recording had a clear echo, which I found to be ~70ms. I could have analysed the signal in Audacity to get an exact offset.
So, to line everything up, I just call performSelector:withObject:afterDelay: with a delay of 70.0/1000.0 seconds.
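For illustration only, the compensation might look like the snippet below; -startRecording is an assumed helper, and whether you delay the recorder or the player depends on which way your measured offset runs:

```objc
// Start playback immediately, then start the recorder after the
// measured ~70 ms offset so the two files line up.
[self.player play];
[self performSelector:@selector(startRecording)
           withObject:nil
           afterDelay:70.0 / 1000.0];
```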
There may be hidden snags: the delay may differ from device to device, and it may even differ depending on device activity. It is even possible the thread could get interrupted/rescheduled between starting the player and starting the recorder.
But it works, and is a lot tidier than messing around with audio queues / units.
I had this problem and I solved it in my project simply by changing the PreferredHardwareIOBufferDuration parameter of the AudioSession. I think I have just 6 ms of latency now, which is good enough for my app. Check this answer, which has a good explanation.
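As a sketch of that tweak using the old C AudioSession API the answer names (the 5 ms target is an assumed value; the hardware may grant a different duration):

```objc
#include <AudioToolbox/AudioToolbox.h>

AudioSessionInitialize(NULL, NULL, NULL, NULL);

// Category that allows simultaneous playback and capture.
UInt32 category = kAudioSessionCategory_PlayAndRecord;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                        sizeof(category), &category);

// Ask the hardware for ~5 ms I/O buffers; the system may round this.
Float32 preferredDuration = 0.005;
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                        sizeof(preferredDuration), &preferredDuration);

AudioSessionSetActive(true);
```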