iOS LPCM non-interleaved 2-channel audio input: impossible?
In the aurioTouch sample app the RemoteIO audio unit is configured for 2-channel non-interleaved LPCM in the 8.24 fixed-point format. This is the preferred format on the iOS platform, and I assume that's what the hardware ADC is emitting. They even made a comment about this (source):
// set our required format - Canonical AU format: LPCM non-interleaved 8.24 fixed point
outFormat.SetAUCanonical(2, false);
So I would expect that when the application later receives an audio buffer it will have data for two channels packed in its mData member in some order. Something like this:
mData = [L1, L2, L3, L4, R1, R2, R3, R4];
Where L and R represent data from the left and right channels of a stereo microphone. Only it seems that cannot be the case, because SetAUCanonical()
doesn't set up enough memory to hold the additional channel:
void SetAUCanonical(UInt32 nChannels, bool interleaved)
{
    mFormatID = kAudioFormatLinearPCM;
#if CA_PREFER_FIXED_POINT
    mFormatFlags = kAudioFormatFlagsCanonical | (kAudioUnitSampleFractionBits << kLinearPCMFormatFlagsSampleFractionShift);
#else
    mFormatFlags = kAudioFormatFlagsCanonical;
#endif
    mChannelsPerFrame = nChannels;
    mFramesPerPacket = 1;
    mBitsPerChannel = 8 * sizeof(AudioUnitSampleType);
    if (interleaved)
        mBytesPerPacket = mBytesPerFrame = nChannels * sizeof(AudioUnitSampleType);
    else {
        mBytesPerPacket = mBytesPerFrame = sizeof(AudioUnitSampleType);
        mFormatFlags |= kAudioFormatFlagIsNonInterleaved;
    }
}
If 'interleaved' is false, it doesn't multiply 'mBytesPerPacket' and 'mBytesPerFrame' by the number of channels. There won't be enough bits in the frame to store the extra channel.
So is the sample code just slightly misleading when it asks for 2 channels? Should it just be asking for 1 channel, since that's what it's going to get back anyway:
outFormat.SetAUCanonical(1, false);
Can I just 'fix' SetAUCanonical like this to make things clear?:
mChannelsPerFrame = nChannels;
if (!interleaved) {
    mChannelsPerFrame = 1;
    mFormatFlags |= kAudioFormatFlagIsNonInterleaved;
}
mFramesPerPacket = 1;
mBitsPerChannel = 8 * sizeof(AudioUnitSampleType);
mBytesPerPacket = mBytesPerFrame = nChannels * sizeof(AudioUnitSampleType);
Or is there some other reason why you would ask for 2 channels? I don't even think the microphone is a stereo mic.
3 Answers
The built-in mic and headset mic input are both mono.
The Camera Connection Kit may have allowed stereo audio input from some USB mics on some newer iOS devices running some previous OS versions, but I haven't seen any reports of this working with the current OS release.
Also, check to see whether 2 channel (stereo) non-interleaved format might return 2 buffers to the RemoteIO callback, instead of concatenated data in 1 buffer.
I think you're confusing "interleaved" and "non-interleaved" and how CoreAudio gives you that data in ABLs. SetAUCanonical() is doing the right thing. An ABL has a variable array of buffers where, in the non-interleaved case, each buffer only holds the data for a single channel.
The problem is the (sometimes) misleading variable names. I don't like it either, but here is an explanation of what is going on.
When mFormatFlags is set as NonInterleaved (of any form), mChannelsPerFrame specifies the number of channels, and the rest of the fields should specify the desired properties for a single channel. Hence you will NOT need to multiply by the number of channels. The proper values will be: mChannelsPerFrame = 2, while mBitsPerChannel, mBytesPerFrame, and mBytesPerPacket all describe one channel (32 bits / 4 bytes for the 8.24 AudioUnitSampleType).