Input buffer structure in an AudioUnit

Posted 2024-11-09 02:13:06

I've written a simple AudioUnit that should swap the left and right channels of a stereo source. A ported version of this code worked fine in C for a command-line program that used the BASS library, but I can't get the same code to work in Xcode as an AudioUnit.

For a buffer input of, say, {1, 2, 3, 4, 5, 6}, I would expect the stereo reversal to be {2, 1, 4, 3, 6, 5}.

My code reverses the samples correctly in this manner, but all I hear is some sort of low-pass filtering rather than a stereo reversal of the samples.

The first 4 values in my input buffer are: 0.000104 0.000101 0.000080 0.000113

The output is: 0.000101 0.000104 0.000113 0.000080

Have I misunderstood how the input/output buffers are structured?

void First::FirstKernel::Process(const Float32 *inSourceP,
                                 Float32       *inDestP,
                                 UInt32         inSamplesToProcess,
                                 UInt32         inNumChannels,
                                 bool          &ioSilence)
{
    if (!ioSilence) {
        const Float32 *sourceP = inSourceP;
        Float32       *destP   = inDestP;

        // Treat the buffer as interleaved L/R pairs and swap each pair.
        for (UInt32 i = inSamplesToProcess / 2; i > 0; --i) {
            *(destP + 1) = *sourceP;
            *destP       = *(sourceP + 1);
            sourceP += 2;
            destP   += 2;
        }
    }
}


1 answer

夕色琉璃 2024-11-16 02:13:06

The reason this code isn't working is that you are using AudioUnit kernels, which call your plugin to process a single channel of audio data (if I understand correctly). While kernels can be quite convenient in some cases, they definitely won't work for a plugin that does interdependent stereo processing. You are being passed the number of channels in your callback -- have you checked this value?

Regardless, you should instead inherit from the AUEffectBase class and override the ProcessBufferLists() method. Then you will get a proper AudioBufferList structure containing non-interleaved buffers, one per audio channel. It also gives you much finer control over the rendering process than kernels do.

Edit: OK, it turns out the kernel callback is always passed 1 channel of audio. Also, overriding Render() as I originally suggested is not the best way to do this. According to a comment in the AUEffectBase.h source code:

If your unit processes N to N channels, and there are no interactions between channels,
it can override NewKernel to create a mono processing object per channel. Otherwise,
don't override NewKernel, and instead, override ProcessBufferLists.

As AUEffectBase isn't part of the "standard" AudioUnit code, you will need to add the cpp/h files to your project. They can be found under the AudioUnit SDK root in the AudioUnits/AUPublic/OtherBases folder. So for your plugin, it would look something like this:

MyEffect.h:

#include "AUEffectBase.h"

class MyEffect : public AUEffectBase {
public:
  // Constructor, other overridden methods, etc.
  virtual OSStatus ProcessBufferLists(AudioUnitRenderActionFlags &ioActionFlags,
                                      const AudioBufferList &inBuffer,
                                      AudioBufferList &outBuffer,
                                      UInt32 inFramesToProcess);


private:
  // Private member variables, methods
};

MyEffect.cpp:

// Other stuff ....

OSStatus MyEffect::ProcessBufferLists(AudioUnitRenderActionFlags &ioActionFlags,
                                      const AudioBufferList &inBuffer,
                                      AudioBufferList &outBuffer,
                                      UInt32 inFramesToProcess) {
  const Float32 *srcBufferL = (const Float32 *)inBuffer.mBuffers[0].mData;
  const Float32 *srcBufferR = (const Float32 *)inBuffer.mBuffers[1].mData;
  Float32 *destBufferL = (Float32 *)outBuffer.mBuffers[0].mData;
  Float32 *destBufferR = (Float32 *)outBuffer.mBuffers[1].mData;

  // Straight pass-through; to swap channels, read srcBufferR into
  // destBufferL and srcBufferL into destBufferR instead.
  for (UInt32 frame = 0; frame < inFramesToProcess; ++frame) {
    *destBufferL++ = *srcBufferL++;
    *destBufferR++ = *srcBufferR++;
  }

  return noErr;
}
