iPhone AudioUnitRender error -50 on the device

Posted on 2024-11-05 05:19:25

I am working on a project in which I use AudioUnitRender. It runs fine in the simulator but returns error -50 on the device.

If anyone has run into a similar problem, please suggest a solution.

// Inside the render callback: recover the RIOInterface instance from inRefCon
RIOInterface *THIS = (RIOInterface *)inRefCon;
COMPLEX_SPLIT A = THIS->A;
void *dataBuffer = THIS->dataBuffer;
float *outputBuffer = THIS->outputBuffer;
FFTSetup fftSetup = THIS->fftSetup;

// FFT parameters (used further down in the callback, not shown here)
uint32_t log2n = THIS->log2n;
uint32_t n = THIS->n;
uint32_t nOver2 = THIS->nOver2;
uint32_t stride = 1;
int bufferCapacity = THIS->bufferCapacity;
SInt16 index = THIS->index;

AudioUnit rioUnit = THIS->ioUnit;
OSStatus renderErr;
UInt32 bus1 = 1;  // input element of the Remote IO unit

// Pull the captured audio from bus 1 into our own buffer list
renderErr = AudioUnitRender(rioUnit, ioActionFlags,
    inTimeStamp, bus1, inNumberFrames, THIS->bufferList);
NSLog(@"%d", (int)renderErr);
if (renderErr < 0) {
    return renderErr;
}

Data regarding sample size and frames:

bytesPerSample = sizeof(SInt16);                 // 16-bit samples
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
asbd.mBitsPerChannel = 8 * bytesPerSample;
asbd.mFramesPerPacket = 1;
asbd.mChannelsPerFrame = 1;                      // mono

//asbd.mBytesPerPacket = asbd.mBytesPerFrame * asbd.mFramesPerPacket;
asbd.mBytesPerPacket = bytesPerSample * asbd.mFramesPerPacket;

//asbd.mBytesPerFrame = bytesPerSample * asbd.mChannelsPerFrame;
asbd.mBytesPerFrame = bytesPerSample * asbd.mChannelsPerFrame;

asbd.mSampleRate = sampleRate;

Thanks in advance.



Comments (3)

客…行舟 2024-11-12 05:19:25

The length of the buffer (inNumberFrames) can differ between the device and the simulator. In my experience it is often larger on the device. When you use your own AudioBufferList, you have to take this into account. I would suggest allocating more memory for the buffers in the AudioBufferList.

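As a rough illustration of that suggestion, here is a minimal sketch, assuming the mono 16-bit format from the question and a hypothetical maxFrames chosen with headroom for the larger callback sizes seen on hardware:

#import <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>

// Hypothetical one-time allocation, done before rendering starts.
UInt32 maxFrames = 4096;                      // assumed upper bound, larger than any expected inNumberFrames
UInt32 bytesPerFrame = sizeof(SInt16);        // mono, packed 16-bit PCM

AudioBufferList *bufferList = malloc(sizeof(AudioBufferList));
bufferList->mNumberBuffers = 1;
bufferList->mBuffers[0].mNumberChannels = 1;
bufferList->mBuffers[0].mDataByteSize = maxFrames * bytesPerFrame;
bufferList->mBuffers[0].mData = malloc(maxFrames * bytesPerFrame);

// In the render callback it is common to reset the byte size to match the
// current callback before calling AudioUnitRender, e.g.:
//     THIS->bufferList->mBuffers[0].mDataByteSize = inNumberFrames * bytesPerFrame;

If the buffer handed to AudioUnitRender is smaller than inNumberFrames * bytesPerFrame, a -50 (paramErr) result is a plausible outcome, which would explain code that works with the simulator's smaller buffers but fails on the device.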

站稳脚跟 2024-11-12 05:19:25

I know this thread is old, but I just found the solution to this problem.

The buffer duration on the device is different from the one on the simulator, so you have to change the buffer duration:

Float32 bufferDuration = ((Float32) <INSERT YOUR BUFFER DURATION HERE>) / sampleRate; // frame count / sample rate = buffer duration in seconds

AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, sizeof(bufferDuration), &bufferDuration);

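Note that the preferred duration is only a request; the hardware can grant something different. A small sketch for reading back what was actually granted, using the same (now-deprecated) AudioSession C API and the sampleRate variable from the question:

Float32 grantedDuration = 0;
UInt32 size = sizeof(grantedDuration);
OSStatus err = AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration,
                                       &size, &grantedDuration);
if (err == noErr) {
    NSLog(@"Granted IO buffer duration: %f s (about %d frames)",
          grantedDuration, (int)(grantedDuration * sampleRate));
}

On current iOS versions the AVAudioSession equivalent is setPreferredIOBufferDuration:error:, but the idea is the same.
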
无声静候 2024-11-12 05:19:25

Try adding kAudioFormatFlagsNativeEndian to your list of stream description format flags. Not sure whether it will make a difference, but it can't hurt.

Also, I'm suspicious about the use of THIS for the userData member; by default nothing fills that member with meaningful data. Try running the code in a debugger and check that the instance is correctly extracted and cast. Assuming it is, just for fun, try putting the AudioUnit object into a global variable (yeah, I know..) just to see whether it works.

Finally, why use THIS->bufferList instead of the buffer list passed into your render callback? That's probably not a good idea.

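For context on those last two points, a sketch (not the poster's actual code) of how the refCon and the format flags are typically wired up for a Remote IO input callback; renderCallback is a hypothetical name for the callback whose body is shown in the question:

// Register the input callback and hand the RIOInterface instance to it,
// so the (RIOInterface *)inRefCon cast at the top of the callback is valid.
AURenderCallbackStruct callback;
callback.inputProc = renderCallback;              // hypothetical AURenderCallback
callback.inputProcRefCon = (void *)self;          // ((__bridge void *)self under ARC)

AudioUnitSetProperty(ioUnit,
                     kAudioOutputUnitProperty_SetInputCallback,
                     kAudioUnitScope_Global,
                     1,                            // input element (bus 1)
                     &callback,
                     sizeof(callback));

// Stream format flags including native endianness, as suggested above.
asbd.mFormatFlags = kAudioFormatFlagIsSignedInteger
                  | kAudioFormatFlagIsPacked
                  | kAudioFormatFlagsNativeEndian;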
