Structure of input buffers in an AudioUnit
I've written a simple AudioUnit that should swap the left and right channels of a stereo source. A ported version of this code worked fine in C for a command-line program that used the BASS library, but I'm having trouble getting the same code to work in Xcode for an AudioUnit.
For a buffer input of, for example, {1, 2, 3, 4, 5, 6}, I would expect the stereo reversal to be {2, 1, 4, 3, 6, 5}.
My code correctly reverses the samples in this manner, but all I hear is some sort of low-pass filtering rather than a stereo reversal of the samples.
The first 4 values in my input buffer are: 0.000104 0.000101 0.000080 0.000113
The output is: 0.000101 0.000104 0.000113 0.000080
Have I misunderstood something about the way the input/output buffers are structured?
void First::FirstKernel::Process( const Float32 *inSourceP,
                                  Float32 *inDestP,
                                  UInt32 inSamplesToProcess,
                                  UInt32 inNumChannels,
                                  bool &ioSilence )
{
    if (!ioSilence) {
        const Float32 *sourceP = inSourceP;
        Float32 *destP = inDestP;

        // Walk the buffer two samples at a time, treating it as
        // interleaved L/R pairs, and swap each pair.
        for (int i = inSamplesToProcess / 2; i > 0; --i) {
            *(destP + 1) = *sourceP;
            *destP = *(sourceP + 1);
            sourceP += 2;
            destP += 2;
        }
    }
}
The reason this code isn't working is that you are using AudioUnit kernels, which call your plugin to process a single channel of audio data (if I understand correctly). While kernels can be quite convenient in some cases, they definitely won't work for a plugin that does interdependent stereo processing. You are being passed the number of channels in your callback -- have you checked this value?
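If you want to confirm this, a quick diagnostic inside your existing kernel callback (illustrative only, using the inNumChannels parameter your Process() already receives) would show how many channels each call actually sees:

    // Temporary diagnostic at the top of First::FirstKernel::Process --
    // a kernel instance is typically handed a single channel.
    printf("inNumChannels = %u\n", (unsigned)inNumChannels);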
Regardless, you should instead inherit from the AUEffectBase class and override the ProcessBufferLists() method. You will then get a proper AudioBufferList structure, which contains a non-interleaved buffer for each audio channel, and it gives you much finer control over the rendering process than kernels do.

Edit: OK, it turns out that the kernel callback is always passed a single channel of audio. Also, overriding Render() as I originally suggested is not the best way to do this. According to a comment in the AUEffectBase.h source code, a unit whose channels interact with each other should not override NewKernel(); it should override ProcessBufferLists() instead.

As AUEffectBase isn't part of the "standard" AudioUnit code, you will need to add its cpp/h files to your project. They can be found under the AudioUnit SDK root, in the AudioUnits/AUPublic/OtherBases folder. So for your plugin, it would look something like the sketches below (the MyEffect name is a placeholder, and the signatures should be checked against the AUEffectBase.h in your SDK):

MyEffect.h:
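#include "AUEffectBase.h"

// Minimal sketch of the subclass declaration. The class/file names are
// placeholders; the ProcessBufferLists() signature below mirrors the
// virtual declared in AUEffectBase.h -- verify against your SDK copy.
class MyEffect : public AUEffectBase {
public:
    MyEffect(AudioUnit component);

    virtual OSStatus ProcessBufferLists(AudioUnitRenderActionFlags &ioActionFlags,
                                        const AudioBufferList &inBuffer,
                                        AudioBufferList &outBuffer,
                                        UInt32 inFramesToProcess);
};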
MyEffect.cpp:
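#include "MyEffect.h"

MyEffect::MyEffect(AudioUnit component) : AUEffectBase(component) {}

// Sketch of the channel swap. Assumes a stereo stream, so the buffer list
// holds two non-interleaved channel buffers: mBuffers[0] = left,
// mBuffers[1] = right.
OSStatus MyEffect::ProcessBufferLists(AudioUnitRenderActionFlags &ioActionFlags,
                                      const AudioBufferList &inBuffer,
                                      AudioBufferList &outBuffer,
                                      UInt32 inFramesToProcess)
{
    const Float32 *srcL = (const Float32 *)inBuffer.mBuffers[0].mData;
    const Float32 *srcR = (const Float32 *)inBuffer.mBuffers[1].mData;
    Float32 *dstL = (Float32 *)outBuffer.mBuffers[0].mData;
    Float32 *dstR = (Float32 *)outBuffer.mBuffers[1].mData;

    // Copy each frame to the opposite channel.
    for (UInt32 frame = 0; frame < inFramesToProcess; ++frame) {
        dstL[frame] = srcR[frame];
        dstR[frame] = srcL[frame];
    }

    return noErr;
}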