Manipulating audio data in C++ for DSP purposes



I hope this question is not too vague. I'm trying to take info from an audio buffer in this Xcode project and use it to do some DSP.

frameBuffer points to an array of values that I would like to pass to a function, loop through, and finally plug back into the original buffer. The method would act like a sound filter or effect.

Maybe to keep my question as clear as possible, could we get an example of a sub-routine that would add 0.25 to each sample in the buffer?

Here's the code so far:

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {

    EAGLView *remoteIOplayer = (EAGLView *)inRefCon;

    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        // get the buffer to be filled
        AudioBuffer buffer = ioData->mBuffers[i];
        short *frameBuffer = (short *)buffer.mData;

        for (int j = 0; j < inNumberFrames; j++) {
            // getNextPacket returns a 32-bit value, one frame
            frameBuffer[j] = [[remoteIOplayer inMemoryAudioFile] getNextPacket];
        }

        EAGLView *thisView = [[EAGLView alloc] init];
        [thisView DoStuffWithTheRecordedAudio:ioData];
        [thisView release];
    }

    return noErr;
}
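
For illustration, here is a minimal sketch of the requested subroutine, assuming the 16-bit integer sample format implied by the short* cast above. The helper name addToBuffer and the clamping behavior are illustrative, not from the original project; in normalized [-1.0, 1.0] terms, adding 0.25 to a 16-bit sample means adding about 0.25 * 32767 ≈ 8192.

#include <stdint.h>

// Hypothetical helper: add a fixed offset to every sample in a 16-bit
// buffer. "offset" is given in normalized [-1.0, 1.0] terms and converted
// to the integer domain; results are clamped so they cannot wrap around.
static void addToBuffer(int16_t *samples, uint32_t numFrames, float offset)
{
    const int32_t intOffset = (int32_t)(offset * 32767.0f);
    for (uint32_t n = 0; n < numFrames; n++) {
        int32_t s = (int32_t)samples[n] + intOffset;
        if (s >  32767) s =  32767;   // clamp instead of wrapping
        if (s < -32768) s = -32768;
        samples[n] = (int16_t)s;
    }
}

From the callback it would be invoked after the fill loop, e.g. addToBuffer(frameBuffer, inNumberFrames, 0.25f). As the answer below points out, though, a constant offset like this mostly shifts the DC level rather than producing an audible effect.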


Answers (1)

世态炎凉 2024-10-14 04:22:29


Trying to do UI or OpenGL stuff inside an audio callback is a bad idea on iOS devices. You need to decouple the callback and UI execution using queues or FIFOs, and the like.
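
As a sketch of that decoupling, a minimal single-producer/single-consumer ring buffer might look like the following: the audio callback pushes samples without locks or allocations, and the UI/GL thread drains them on its own schedule. The class and names here are illustrative, not part of any Apple API.

#include <atomic>
#include <cstddef>
#include <cstdint>

// Minimal lock-free FIFO for one producer (the audio callback) and one
// consumer (the UI/GL thread).
template <size_t Capacity>
class SampleFifo {
    static_assert((Capacity & (Capacity - 1)) == 0,
                  "Capacity must be a power of two");
public:
    bool push(int16_t s) {                 // audio thread only
        size_t w = writePos.load(std::memory_order_relaxed);
        size_t next = (w + 1) & (Capacity - 1);
        if (next == readPos.load(std::memory_order_acquire))
            return false;                  // full: drop, never block
        data[w] = s;
        writePos.store(next, std::memory_order_release);
        return true;
    }
    bool pop(int16_t &s) {                 // UI thread only
        size_t r = readPos.load(std::memory_order_relaxed);
        if (r == writePos.load(std::memory_order_acquire))
            return false;                  // empty
        s = data[r];
        readPos.store((r + 1) & (Capacity - 1), std::memory_order_release);
        return true;
    }
private:
    int16_t data[Capacity];
    std::atomic<size_t> writePos{0};
    std::atomic<size_t> readPos{0};
};

The render callback would push each frame it produces; the UI code polls pop() from its own timer and does any drawing there, so the audio thread never touches UI or GL state.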

Trying to do Objective-C messaging inside the inner loop of real-time audio may also be a very bad idea in terms of device performance. Sticking to plain C/C++ works far better in performance-critical inner loops.
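
Concretely, the per-frame [[remoteIOplayer inMemoryAudioFile] getNextPacket] message send in the code above could be hoisted out of the render loop. A hedged sketch, assuming the sample data can be handed over once per callback as a raw pointer plus a read cursor (names hypothetical):

#include <stddef.h>
#include <stdint.h>

// Plain-C fill loop: simple array accesses per frame instead of one
// objc_msgSend per frame. "source" and "cursor" stand in for whatever
// accessors the audio-file class can expose up front, once per callback.
static void fillFrames(const int16_t *source, size_t sourceLength,
                       size_t *cursor, int16_t *frameBuffer,
                       uint32_t inNumberFrames)
{
    for (uint32_t n = 0; n < inNumberFrames; n++) {
        frameBuffer[n] = source[*cursor];
        *cursor = (*cursor + 1) % sourceLength;   // wrap for looped playback
    }
}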

Also, adding a constant to audio data will likely just result in an inaudible DC offset.
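
If an audible change is what's wanted, multiplying by a gain rather than adding a constant is the usual first example; a sketch under the same 16-bit assumption as above:

#include <stdint.h>

// Scale every sample by "gain" (e.g. 0.5f for roughly -6 dB). Unlike
// adding a constant, this changes the AC content, which is what you hear.
static void applyGain(int16_t *samples, uint32_t numFrames, float gain)
{
    for (uint32_t n = 0; n < numFrames; n++) {
        int32_t s = (int32_t)((float)samples[n] * gain);
        if (s >  32767) s =  32767;   // clamp in case gain > 1
        if (s < -32768) s = -32768;
        samples[n] = (int16_t)s;
    }
}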
