RemoteIO audio problem - simulator = good - device = bad

Posted on 2024-10-05 01:21:22

OK, so I'm using Core Audio to extract audio from 10 different sample sources and then mix them together in my callback function.

It works perfectly in the simulator and all was well. However, I ran into trouble when I tried to run it on an iOS 4.2 iPhone device.

If I mix 2 audio files in the callback, everything is OK.
If I mix 5 or 6 audio files, the audio plays, but after a short amount of time it degrades and eventually no audio reaches the speakers. (The callback does not stop.)

If I try to mix 10 audio files, the callback runs but no audio at all comes out.

It's almost as if the callback is running out of time, which might explain the 5 or 6 source case, but it would not explain the last case, where mixing 10 audio sources produces no audio at all.
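
(For reference, one way to test the "running out of time" hypothesis is to time the callback itself with the Mach timing APIs. The sketch below is illustrative only: nanosSince and gWorstCallbackNanos are made-up names, and any logging of the result should happen outside the render thread.)

#include <mach/mach_time.h>

// Illustrative only: track the worst-case callback duration so it can be
// inspected from a normal thread (never NSLog from the render thread itself).
static uint64_t gWorstCallbackNanos = 0;

static uint64_t nanosSince(uint64_t startTicks)
{
    static mach_timebase_info_data_t timebase;
    if (timebase.denom == 0) {
        mach_timebase_info(&timebase);
    }
    uint64_t elapsedTicks = mach_absolute_time() - startTicks;
    return elapsedTicks * timebase.numer / timebase.denom;
}

// In the callback:
//   uint64_t start = mach_absolute_time();
//   ... render inNumberFrames frames ...
//   uint64_t nanos = nanosSince(start);
//   if (nanos > gWorstCallbackNanos) gWorstCallbackNanos = nanos;
//
// The time budget is roughly inNumberFrames / sampleRate seconds
// (e.g. 1024 / 44100 ≈ 23 ms); staying well under it leaves headroom.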

I'm not sure if the following has any bearing, but this message always prints to the console when I'm debugging. Could this be some indication of what the problem is?

mem 0x1000 0x3fffffff cache
mem 0x40000000 0xffffffff none
mem 0x00000000 0x0fff none
run
Running…
[Switching to thread 11523]
[Switching to thread 11523]
Re-enabling shared library breakpoint 1
continue
warning: Unable to read symbols for /Developer/Platforms/iPhoneOS.platform/DeviceSupport/4.2.1 (8C148)/Symbols/usr/lib/info/dns.so (file not found).

**Setting up my callback**

#pragma mark -
#pragma mark Callback setup & control

- (void) setupCallback

{
    OSStatus status;


    // Describe audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get audio units
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);

    UInt32 flag = 1;
    // Enable IO for playback
    status = AudioUnitSetProperty(audioUnit, 
                                  kAudioOutputUnitProperty_EnableIO, 
                                  kAudioUnitScope_Output, 
                                  kOutputBus,
                                  &flag, 
                                  sizeof(flag));

    //Apply format
    status = AudioUnitSetProperty(audioUnit, 
                                  kAudioUnitProperty_StreamFormat, 
                                  kAudioUnitScope_Input, 
                                  kOutputBus, 
                                  &stereoStreamFormat, 
                                  sizeof(stereoStreamFormat));

    // Set up the playback  callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = playbackCallback; // NOTE: the "assignment from incompatible pointer" warning here goes away once the callback signature matches AURenderCallback exactly (see the const AudioTimeStamp * parameter below)
    //set the reference to "self" this becomes *inRefCon in the playback callback
    callbackStruct.inputProcRefCon = self;

    status = AudioUnitSetProperty(audioUnit, 
                                  kAudioUnitProperty_SetRenderCallback, 
                                  kAudioUnitScope_Global, 
                                  kOutputBus,
                                  &callbackStruct, 
                                  sizeof(callbackStruct));

    // Initialise
    status = AudioUnitInitialize(audioUnit); // error check this status


}
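
Every call above returns an OSStatus that is currently ignored. A minimal error-check sketch might look like the following; CheckStatus is a hypothetical helper, not part of the original code.

// Hypothetical helper: log and bail out when a Core Audio call fails.
static BOOL CheckStatus(OSStatus status, const char *operation)
{
    if (status == noErr) {
        return YES;
    }
    // Many Core Audio errors are four-char codes (e.g. 'fmt?'); the raw
    // integer is usually enough to look them up.
    NSLog(@"%s failed with OSStatus %d", operation, (int)status);
    return NO;
}

// Usage, e.g.:
//   status = AudioUnitInitialize(audioUnit);
//   if (!CheckStatus(status, "AudioUnitInitialize")) return;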

The callback

static OSStatus playbackCallback (

                                     void                        *inRefCon,      // A pointer to a struct containing the complete audio data 
                                     //    to play, as well as state information such as the  
                                     //    first sample to play on this invocation of the callback.
                                     AudioUnitRenderActionFlags  *ioActionFlags, // Unused here. When generating audio, use ioActionFlags to indicate silence 
                                     //    between sounds; for silence, also memset the ioData buffers to 0.
                                     const AudioTimeStamp        *inTimeStamp,   // Unused here.
                                     UInt32                      inBusNumber,    // The mixer unit input bus that is requesting some new
                                     //        frames of audio data to play.
                                     UInt32                      inNumberFrames, // The number of frames of audio to provide to the buffer(s)
                                     //        pointed to by the ioData parameter.
                                     AudioBufferList             *ioData         // On output, the audio data to play. The callback's primary 
                                     //        responsibility is to fill the buffer(s) in the 
                                     //        AudioBufferList.
                                     ) {


    Engine *remoteIOplayer = (Engine *)inRefCon;
    AudioUnitSampleType *outSamplesChannelLeft;
    AudioUnitSampleType *outSamplesChannelRight;

    outSamplesChannelLeft                 = (AudioUnitSampleType *) ioData->mBuffers[0].mData;
     outSamplesChannelRight  = (AudioUnitSampleType *) ioData->mBuffers[1].mData;

    int thetime =0;
    thetime=remoteIOplayer.sampletime;


        for (int frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber)
        {
            // getNonInterleavedSample returns one 32-bit sample per channel.
            AudioUnitSampleType suml = 0;   // accumulators (mistakenly declared as pointers in the original)
            AudioUnitSampleType sumr = 0;

            //NSLog (@"frame number -  %i", frameNumber);
            for(int j=0;j<10;j++)

            {


                AudioUnitSampleType valuetoaddl=0;
                AudioUnitSampleType valuetoaddr=0;


                //valuetoadd = [remoteIOplayer getSample:j ];
                valuetoaddl = [remoteIOplayer getNonInterleavedSample:j currenttime:thetime channel:0 ];
                //valuetoaddl = [remoteIOplayer getSample:j];
                valuetoaddr = [remoteIOplayer getNonInterleavedSample:j currenttime:thetime channel:1 ];

                suml = suml+(valuetoaddl/10);
                sumr = sumr+(valuetoaddr/10);

            }


            outSamplesChannelLeft[frameNumber]=(AudioUnitSampleType) suml;
            outSamplesChannelRight[frameNumber]=(AudioUnitSampleType) sumr;


            remoteIOplayer.sampletime +=1;


        }

    return noErr;
}

My audio fetching function

- (AudioUnitSampleType)getNonInterleavedSample:(int)index currenttime:(int)time channel:(int)ch

{

    AudioUnitSampleType returnvalue= 0;

    soundStruct snd = soundStructArray[index];
    UInt64 sn = snd.frameCount;
    UInt64 st = sampletime;            // note: reads the instance's sampletime; the 'time' parameter is effectively unused
    UInt64 read = (UInt64)(st % sn);


    if(ch==0)
    {
        if (snd.sendvalue==1) {
            returnvalue = snd.audioDataLeft[read];

        }else {
            returnvalue=0;
        }

    }else if(ch==1)

    {
        if (snd.sendvalue==1) {
        returnvalue = snd.audioDataRight[read];
        }else {
            returnvalue=0;
        }

        soundStructArray[index].sampleNumber=read;
    }


    if(soundStructArray[index].sampleNumber >= soundStructArray[index].frameCount)
    {
        soundStructArray[index].sampleNumber=0;

    }

    return returnvalue;


}

EDIT 1

In response to @andre, I changed my callback to the following, but it still did not help.

static OSStatus playbackCallback (

                                     void                        *inRefCon,      // A pointer to a struct containing the complete audio data 
                                     //    to play, as well as state information such as the  
                                     //    first sample to play on this invocation of the callback.
                                     AudioUnitRenderActionFlags  *ioActionFlags, // Unused here. When generating audio, use ioActionFlags to indicate silence 
                                     //    between sounds; for silence, also memset the ioData buffers to 0.
                                     const AudioTimeStamp        *inTimeStamp,   // Unused here.
                                     UInt32                      inBusNumber,    // The mixer unit input bus that is requesting some new
                                     //        frames of audio data to play.
                                     UInt32                      inNumberFrames, // The number of frames of audio to provide to the buffer(s)
                                     //        pointed to by the ioData parameter.
                                     AudioBufferList             *ioData         // On output, the audio data to play. The callback's primary 
                                     //        responsibility is to fill the buffer(s) in the 
                                     //        AudioBufferList.
                                     ) {


    Engine *remoteIOplayer = (Engine *)inRefCon;
    AudioUnitSampleType *outSamplesChannelLeft;
    AudioUnitSampleType *outSamplesChannelRight;

    outSamplesChannelLeft                 = (AudioUnitSampleType *) ioData->mBuffers[0].mData;
     outSamplesChannelRight  = (AudioUnitSampleType *) ioData->mBuffers[1].mData;

    int thetime =0;
    thetime=remoteIOplayer.sampletime;


        for (int frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber)
        {
            // get NextPacket returns a 32 bit value, one frame.
            AudioUnitSampleType suml=0;
            AudioUnitSampleType sumr=0;

            //NSLog (@"frame number -  %i", frameNumber);
            for(int j=0;j<10;j++)   // 10 sources, matching soundStructArray; the original looped to 16, which could read past the array

            {



                soundStruct snd=remoteIOplayer->soundStructArray[j];
                UInt64 sn= snd.frameCount;  
                UInt64 st=remoteIOplayer.sampletime;
                UInt64 read= (UInt64)(st%sn);

                suml += snd.audioDataLeft[read];
                sumr += snd.audioDataRight[read];   // was "suml +=", which left the right channel silent


            }


            outSamplesChannelLeft[frameNumber]=(AudioUnitSampleType) suml;
            outSamplesChannelRight[frameNumber]=(AudioUnitSampleType) sumr;


            remoteIOplayer.sampletime +=1;


        }

    return noErr;
}
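
One more thing worth noting about the summing in this version: assuming stereoStreamFormat uses the canonical iOS AudioUnitSampleType (SInt32 samples in 8.24 fixed point, full scale around 1 << 24), adding many full-scale sources without attenuation can overflow that range and distort. A hedged sketch of mixing with headroom follows; MixAndClamp and kNumSources are hypothetical names.

// Sketch, assuming canonical 8.24 fixed-point AudioUnitSampleType samples.
// Accumulate in 64 bits, average, then clamp back into range.
#define kNumSources 10   // hypothetical; match however many sources are mixed

static inline AudioUnitSampleType MixAndClamp(SInt64 sum)
{
    SInt64 mixed = sum / kNumSources;        // simple average; a fixed gain works too
    const SInt64 kFullScale = (SInt64)1 << 24;
    if (mixed >  kFullScale - 1) { mixed =  kFullScale - 1; }
    if (mixed < -kFullScale)     { mixed = -kFullScale; }
    return (AudioUnitSampleType)mixed;
}

// In the frame loop:
//   SInt64 suml = 0, sumr = 0;
//   for (int j = 0; j < kNumSources; j++) { /* suml += ..., sumr += ... */ }
//   outSamplesChannelLeft[frameNumber]  = MixAndClamp(suml);
//   outSamplesChannelRight[frameNumber] = MixAndClamp(sumr);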

Comments (3)

青春如此纠结 2024-10-12 01:21:22

  1. Like Andre said, it's best not to have any Objective-C method calls in the callback. You should also change your inputProcRefCon to a C struct instead of an Objective-C object (see the sketch after this list).

  2. Also, it looks like you might be going frame-by-frame, 'manually' copying the data into the buffer. Instead, use memcpy to copy a chunk of data in.

  3. Also, I'm pretty sure you're not doing disk I/O in the callback, but if you are, you shouldn't do that either.
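
A rough sketch of points 1 and 2, assuming a plain C context is handed to the render callback instead of self; MixerContext and premixedLeft are hypothetical names, not part of the original code.

typedef struct {
    soundStruct sources[10];     // copies of the entries in soundStructArray
    UInt64      sampleTime;      // playback position owned by the callback
} MixerContext;

// At setup time (instead of passing self):
//   static MixerContext mixerContext;
//   ... copy soundStructArray into mixerContext.sources ...
//   callbackStruct.inputProcRefCon = &mixerContext;

// In the callback, data is reached with plain pointer access, no Obj-C messaging:
//   MixerContext *ctx = (MixerContext *)inRefCon;
//   AudioUnitSampleType s = ctx->sources[j].audioDataLeft[read];

// memcpy applies when a whole block of already-mixed frames is available:
//   memcpy(ioData->mBuffers[0].mData, premixedLeft,
//          inNumberFrames * sizeof(AudioUnitSampleType));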

遇到 2024-10-12 01:21:22

In my experience, try not to use Objective-C method calls in your RemoteIO callback; they will slow it down. Try moving the work of your "getNonInterleavedSample" method into the callback itself, using C structs to access the audio data.

回心转意 2024-10-12 01:21:22

I assume you're CPU-limited; the simulator is much more powerful in terms of processing speed than the various devices.

The callback probably can't keep up with the frequency at which it's being called.

EDIT: Could you "precompute" the mixing (doing it ahead of time or in another thread), so that it's already been mixed when the callback fires, and the callback has less work to do?
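
A hedged sketch of that idea: a single-producer / single-consumer ring buffer that a background thread fills with already-mixed frames, so the render callback only copies data out. All names (MixRing, kRingFrames, ...) are made up for illustration, and a production version would use real atomics/memory barriers rather than volatile.

#define kRingFrames 8192   // a few hardware buffers' worth of premixed audio

typedef struct {
    AudioUnitSampleType left[kRingFrames];
    AudioUnitSampleType right[kRingFrames];
    volatile UInt32     writeIndex;   // advanced only by the mixer thread
    volatile UInt32     readIndex;    // advanced only by the render callback
} MixRing;

static UInt32 MixRingFramesAvailable(const MixRing *ring)
{
    // Indices grow monotonically; unsigned subtraction handles wrap-around.
    return ring->writeIndex - ring->readIndex;
}

// Render-callback side: just copy premixed frames out (or output silence by
// memset-ing the ioData buffers if the ring has run dry).
static void MixRingRead(MixRing *ring, AudioUnitSampleType *outL,
                        AudioUnitSampleType *outR, UInt32 frames)
{
    for (UInt32 i = 0; i < frames; i++) {
        UInt32 idx = (ring->readIndex + i) % kRingFrames;
        outL[i] = ring->left[idx];
        outR[i] = ring->right[idx];
    }
    ring->readIndex += frames;
}

// The background thread does the expensive per-source summing whenever
// MixRingFramesAvailable(ring) falls below a refill threshold, writes the
// mixed frames at writeIndex % kRingFrames, and then advances writeIndex,
// taking care never to get more than kRingFrames ahead of readIndex.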
