iOS - AudioUnitRender returns error -10876 on the device, but runs fine in the simulator

Posted 2024-10-28 04:14:08

I encountered a problem that prevents me from capturing the input signal from the microphone on the device (an iPhone 4). However, the code runs fine in the simulator.
The code was originally adapted from Apple's MixerHostAudio class in the MixerHost sample code. It ran fine both on the device and in the simulator before I started adding code for capturing mic input.
I'm wondering if somebody could help me out. Thanks in advance!

Here is my inputRenderCallback function which feeds signal into mixer input:

static OSStatus inputRenderCallback (
    void                        *inRefCon,
    AudioUnitRenderActionFlags  *ioActionFlags,
    const AudioTimeStamp        *inTimeStamp,
    UInt32                      inBusNumber,
    UInt32                      inNumberFrames,
    AudioBufferList             *ioData) {
    recorderStructPtr recorderStructPointer = (recorderStructPtr) inRefCon;
    // ....
    AudioUnitRenderActionFlags renderActionFlags = 0;
    OSStatus err = AudioUnitRender(recorderStructPointer->iOUnit,
                                   &renderActionFlags,
                                   inTimeStamp,
                                   1, // bus number for input
                                   inNumberFrames,
                                   recorderStructPointer->fInputAudioBuffer
                                   );
    // error returned is -10876
    // ....
}

Here is my related initialization code. (I now keep only one input in the mixer, so the mixer looks redundant, but everything worked fine before I added the input-capture code.)

// Convenience function to allocate our audio buffers
- (AudioBufferList *) allocateAudioBufferListByNumChannels:(UInt32)numChannels withSize:(UInt32)size {
    AudioBufferList*            list;
    UInt32                      i;

    list = (AudioBufferList*)calloc(1, sizeof(AudioBufferList) + numChannels * sizeof(AudioBuffer));
    if(list == NULL)
        return nil;

    list->mNumberBuffers = numChannels;
    for(i = 0; i < numChannels; ++i) {
        list->mBuffers[i].mNumberChannels = 1;
        list->mBuffers[i].mDataByteSize = size;
        list->mBuffers[i].mData = malloc(size);
        if(list->mBuffers[i].mData == NULL) {
            [self destroyAudioBufferList:list];
            return nil;
        }
    }
    return list;
}

// initialize audio buffer list for input capture
recorderStructInstance.fInputAudioBuffer = [self allocateAudioBufferListByNumChannels:1 withSize:4096];

// I/O unit description
AudioComponentDescription iOUnitDescription;
iOUnitDescription.componentType          = kAudioUnitType_Output;
iOUnitDescription.componentSubType       = kAudioUnitSubType_RemoteIO;
iOUnitDescription.componentManufacturer  = kAudioUnitManufacturer_Apple;
iOUnitDescription.componentFlags         = 0;
iOUnitDescription.componentFlagsMask     = 0;

// Multichannel mixer unit description
AudioComponentDescription MixerUnitDescription;
MixerUnitDescription.componentType          = kAudioUnitType_Mixer;
MixerUnitDescription.componentSubType       = kAudioUnitSubType_MultiChannelMixer;
MixerUnitDescription.componentManufacturer  = kAudioUnitManufacturer_Apple;
MixerUnitDescription.componentFlags         = 0;
MixerUnitDescription.componentFlagsMask     = 0;

AUNode   iONode;         // node for I/O unit
AUNode   mixerNode;      // node for Multichannel Mixer unit

// Add the nodes to the audio processing graph
result =    AUGraphAddNode (
                processingGraph,
                &iOUnitDescription,
                &iONode);

result =    AUGraphAddNode (
                processingGraph,
                &MixerUnitDescription,
                &mixerNode
            );

result = AUGraphOpen (processingGraph);

// fetch mixer AudioUnit instance
result =    AUGraphNodeInfo (
                processingGraph,
                mixerNode,
                NULL,
                &mixerUnit
            );

// fetch RemoteIO AudioUnit instance
result =    AUGraphNodeInfo (
                             processingGraph,
                             iONode,
                             NULL,
                             &(recorderStructInstance.iOUnit)
                             );


    // enable input of RemoteIO unit
UInt32 enableInput = 1;
AudioUnitElement inputBus = 1;
result = AudioUnitSetProperty(recorderStructInstance.iOUnit, 
                              kAudioOutputUnitProperty_EnableIO, 
                              kAudioUnitScope_Input, 
                              inputBus, 
                              &enableInput, 
                              sizeof(enableInput)
                              );
// setup mixer inputs
UInt32 busCount   = 1;

result = AudioUnitSetProperty (
             mixerUnit,
             kAudioUnitProperty_ElementCount,
             kAudioUnitScope_Input,
             0,
             &busCount,
             sizeof (busCount)
         );


UInt32 maximumFramesPerSlice = 4096;

result = AudioUnitSetProperty (
             mixerUnit,
             kAudioUnitProperty_MaximumFramesPerSlice,
             kAudioUnitScope_Global,
             0,
             &maximumFramesPerSlice,
             sizeof (maximumFramesPerSlice)
         );


for (UInt16 busNumber = 0; busNumber < busCount; ++busNumber) {

    // set up input callback
    AURenderCallbackStruct inputCallbackStruct;
    inputCallbackStruct.inputProc        = &inputRenderCallback;
    inputCallbackStruct.inputProcRefCon  = &recorderStructInstance;

    result = AUGraphSetNodeInputCallback (
                 processingGraph,
                 mixerNode,
                 busNumber,
                 &inputCallbackStruct
             );

    // set up stream format
    AudioStreamBasicDescription mixerBusStreamFormat;
    size_t bytesPerSample = sizeof (AudioUnitSampleType);

    mixerBusStreamFormat.mFormatID          = kAudioFormatLinearPCM;
    mixerBusStreamFormat.mFormatFlags       = kAudioFormatFlagsAudioUnitCanonical;
    mixerBusStreamFormat.mBytesPerPacket    = bytesPerSample;
    mixerBusStreamFormat.mFramesPerPacket   = 1;
    mixerBusStreamFormat.mBytesPerFrame     = bytesPerSample;
    mixerBusStreamFormat.mChannelsPerFrame  = 2;
    mixerBusStreamFormat.mBitsPerChannel    = 8 * bytesPerSample;
    mixerBusStreamFormat.mSampleRate        = graphSampleRate;

    result = AudioUnitSetProperty (
                                   mixerUnit,
                                   kAudioUnitProperty_StreamFormat,
                                   kAudioUnitScope_Input,
                                   busNumber,
                                   &mixerBusStreamFormat,
                                   sizeof (mixerBusStreamFormat)
                                   );


}

// set sample rate of mixer output
result = AudioUnitSetProperty (
             mixerUnit,
             kAudioUnitProperty_SampleRate,
             kAudioUnitScope_Output,
             0,
             &graphSampleRate,
             sizeof (graphSampleRate)
         );


// connect mixer output to RemoteIO
result = AUGraphConnectNodeInput (
             processingGraph,
             mixerNode,         // source node
             0,                 // source node output bus number
             iONode,            // destination node
             0                  // destination node input bus number
         );


// initialize AudioGraph
result = AUGraphInitialize (processingGraph);

// start AudioGraph
result = AUGraphStart (processingGraph);

// enable mixer input
result = AudioUnitSetParameter (
                     mixerUnit,
                     kMultiChannelMixerParam_Enable,
                     kAudioUnitScope_Input,
                     0, // bus number
                     1, // on
                     0
                  );


爱的十字路口 2024-11-04 04:14:08

First, it should be noted that the error code -10876 corresponds to the symbol kAudioUnitErr_NoConnection. You can usually identify these codes by googling the error number along with the term CoreAudio. That should be a hint that you are asking the system to render to an AudioUnit which isn't properly connected.

Within your render callback, you are casting the void* user data to a recorderStructPtr. I'm going to assume that when you debugged this code, the cast returned a non-null structure containing your actual audio unit's address. However, you should be rendering with the AudioBufferList that is passed in to your render callback (i.e., the inputRenderCallback function). That list contains the samples from the system which you need to process.
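A minimal sketch of that in-place idea, with the CoreAudio types stubbed out so it stands alone (on iOS these typedefs come from <AudioToolbox/AudioToolbox.h>, and a real callback would render microphone samples rather than silence):

```c
#include <string.h>

/* Stand-ins for the CoreAudio types; on iOS, include <AudioToolbox/AudioToolbox.h>. */
typedef int OSStatus;
typedef unsigned int UInt32;
typedef struct {
    UInt32 mNumberChannels;
    UInt32 mDataByteSize;
    void  *mData;
} AudioBuffer;
typedef struct {
    UInt32      mNumberBuffers;
    AudioBuffer mBuffers[1];   /* variable-length in real use */
} AudioBufferList;

/* Process samples directly in the ioData list the system hands the callback,
 * instead of a separately allocated AudioBufferList. */
static OSStatus processInPlace(UInt32 inNumberFrames, AudioBufferList *ioData) {
    (void)inNumberFrames;
    for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i) {
        /* Placeholder: write silence; a real callback fills these bytes
         * with the rendered microphone input. */
        memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
    }
    return 0; /* noErr */
}
```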

蓝礼 2024-11-04 04:14:08

I solved the issue on my own. It was due to a bug in my code causing the -10876 error from AudioUnitRender().

I had set the category of my AudioSession to AVAudioSessionCategoryPlayback instead of AVAudioSessionCategoryPlayAndRecord. Once I fixed the category to AVAudioSessionCategoryPlayAndRecord, I could finally capture microphone input successfully by calling AudioUnitRender() on the device.

Using AVAudioSessionCategoryPlayback does not produce any error when calling AudioUnitRender() to capture microphone input in the simulator, where everything works fine. I think this is a (non-critical) bug in the iOS simulator.
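For reference, the fix described above boils down to session configuration along these lines (a minimal sketch; error handling is omitted, and the exact placement in the app's startup code is an assumption):

```objective-c
#import <AVFoundation/AVFoundation.h>

// Recording requires PlayAndRecord; the plain Playback category
// leaves the input route disabled on the device.
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setActive:YES error:&error];
```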

|煩躁 2024-11-04 04:14:08

I have also seen this issue occur when the values in the I/O unit's stream format property are inconsistent. Make sure that your AudioStreamBasicDescription's bits per channel, channels per frame, bytes per frame, frames per packet, and bytes per packet all make sense together.

Specifically, I got the NoConnection error when I changed a stream format from stereo to mono by changing the channels per frame, but forgot to change the bytes per frame and bytes per packet to match the fact that a mono frame carries half as much data as a stereo frame.
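Those consistency rules can be checked mechanically. A standalone sketch (the struct mirrors the relevant AudioStreamBasicDescription fields; the identities shown hold for interleaved linear PCM, which is an assumption — non-interleaved formats instead keep one channel per buffer with per-frame bytes equal to one sample):

```c
/* Mirrors the byte-layout fields of AudioStreamBasicDescription. */
typedef struct {
    unsigned mBitsPerChannel;
    unsigned mChannelsPerFrame;
    unsigned mBytesPerFrame;
    unsigned mFramesPerPacket;
    unsigned mBytesPerPacket;
} PCMFormat;

/* For interleaved linear PCM these identities must all hold. */
static int formatIsConsistent(const PCMFormat *f) {
    unsigned bytesPerSample = f->mBitsPerChannel / 8;
    return f->mBytesPerFrame  == bytesPerSample * f->mChannelsPerFrame
        && f->mBytesPerPacket == f->mBytesPerFrame * f->mFramesPerPacket;
}
```

Stereo 16-bit PCM is {16, 2, 4, 1, 4}; dropping to mono by changing only mChannelsPerFrame to 1 leaves mBytesPerFrame and mBytesPerPacket at 4, which this check flags — exactly the mismatch described above.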

千纸鹤 2024-11-04 04:14:08

If you initialise an AudioUnit and don't set its kAudioUnitProperty_SetRenderCallback property, you'll get this error if you call AudioUnitRender on it.

Call AudioUnitProcess on it instead.
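A sketch of wiring the callback directly via that property (here `unit`, `inputRenderCallback`, and `recorderStructInstance` stand in for the reader's own audio unit, callback, and context struct — names borrowed from the question above, not a complete program):

```c
AURenderCallbackStruct cb;
cb.inputProc       = &inputRenderCallback;
cb.inputProcRefCon = &recorderStructInstance;
AudioUnitSetProperty(unit,
                     kAudioUnitProperty_SetRenderCallback,
                     kAudioUnitScope_Input,
                     0,            // input element being fed by the callback
                     &cb,
                     sizeof(cb));
```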
