iOS: bug in the simulator when using AudioUnitRender
I have hit yet another iOS Simulator bug. My question is: is there some workaround?
The bug is this:
Load Apple's AurioTouch sample project and simply print out the number of frames received by the render callback (in aurioTouchAppDelegate.mm):
static OSStatus PerformThru(
    void *inRefCon,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp *inTimeStamp,
    UInt32 inBusNumber,
    UInt32 inNumberFrames,
    AudioBufferList *ioData)
{
    printf("%u, ", (unsigned int)inNumberFrames);
I get the following output:
471, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1, ...
However, if you comment out the call to AudioUnitRender on the next line:
{
    printf("%u, ", (unsigned int)inNumberFrames);
    aurioTouchAppDelegate *THIS = (aurioTouchAppDelegate *)inRefCon;
    OSStatus err = 0; // AudioUnitRender(THIS->rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);
It now sends an appropriate number of floats each time:
471, 470, 471, 470, 470, 471, 470, 471, 470, 470, 471, 470, 471, 470, 470, 471, 470,
Another question I have is: why such odd numbers as 470 and 471? I read somewhere that you specify the buffer length implicitly by specifying its time duration, and the system sets the buffer length to the power of two that best approximates that duration. But the empirical evidence here suggests this is not so.
Anyway, I'm pretty sure this is a bug. I'm going to go file it. If anyone can shed some light, please do!
2 Answers
Generally the workaround for Simulator bugs is to test the app on the device. The iOS Simulator is just a simulator, not an emulator.
The iOS Simulator has some odd bugs. According to this post by Christopher Penrose, it may have to do with buffer sizes:
Link with possibly more helpful info: http://osdir.com/ml/coreaudio-api/2010-04/msg00150.html
If you want to get audio working with your simulator, you need to make sure your sample rate is set to 44.1 kHz in OS X's Audio MIDI Setup utility. AVAudioSession/Audio Services will report your sample rate as 44.1 kHz no matter what it actually is when using the simulator.
By setting your Mac's sample rate to 44.1 kHz, you'll get a consistent inNumberFrames (default is 1024) per callback, although this can allegedly still be changed by the system (e.g. when the app goes to the background).