What are the pros/cons of setting kAudioDevicePropertyBufferFrameSize in CoreAudio?
When recording from a microphone in CoreAudio, what is kAudioDevicePropertyBufferFrameSize for? The docs say it's "A UInt32 whose value indicates the number of frames in the IO buffers". However, this doesn't give any indication of why you would want to set it.

The kAudioDevicePropertyBufferFrameSizeRange property gives you the valid minimum and maximum for the buffer frame size. Does setting the buffer frame size to the maximum slow things down? When would you want to set it to something other than the default?
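For context, here is a minimal sketch of how one might query the range and request a specific buffer frame size through the HAL's AudioObjectGetPropertyData / AudioObjectSetPropertyData calls. The choice of the default input device, the global scope, and the 512-frame value are illustrative assumptions, not requirements.

```c
#include <CoreAudio/CoreAudio.h>
#include <stdio.h>

int main(void) {
    // Illustrative: target the default input device; any AudioDeviceID works.
    AudioObjectPropertyAddress devAddr = {
        kAudioHardwarePropertyDefaultInputDevice,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMain  // kAudioObjectPropertyElementMaster on older SDKs
    };
    AudioDeviceID device = kAudioObjectUnknown;
    UInt32 size = sizeof(device);
    if (AudioObjectGetPropertyData(kAudioObjectSystemObject, &devAddr, 0, NULL,
                                   &size, &device) != noErr)
        return 1;

    // Ask the device which buffer frame sizes it will accept.
    AudioObjectPropertyAddress rangeAddr = {
        kAudioDevicePropertyBufferFrameSizeRange,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMain
    };
    AudioValueRange range;
    size = sizeof(range);
    if (AudioObjectGetPropertyData(device, &rangeAddr, 0, NULL, &size, &range) == noErr)
        printf("valid buffer frame sizes: %.0f - %.0f\n", range.mMinimum, range.mMaximum);

    // Request a specific IO buffer size; 512 frames is an arbitrary example and
    // should fall within the range reported above.
    AudioObjectPropertyAddress sizeAddr = {
        kAudioDevicePropertyBufferFrameSize,
        kAudioObjectPropertyScopeGlobal,
        kAudioObjectPropertyElementMain
    };
    UInt32 frames = 512;
    if (AudioObjectSetPropertyData(device, &sizeAddr, 0, NULL,
                                   sizeof(frames), &frames) != noErr)
        fprintf(stderr, "could not set buffer frame size\n");
    return 0;
}
```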
Here's what they had to say on the CoreAudio list:
Usually you'd leave it at the default, but you might want to change the buffer size if you have an AudioUnit in the processing chain that expects or is optimized for a certain buffer size.
Also, generally, larger buffer sizes result in higher latency between recording and playback, while smaller buffer sizes increase the CPU load of each channel being recorded.
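To make the latency point concrete, each IO buffer contributes roughly bufferFrameSize / sampleRate seconds of delay in one direction. A rough, assumed illustration (the 44.1 kHz rate and frame counts are just example numbers):

```c
#include <stdio.h>

/* Rough per-buffer latency contribution (one direction), in seconds:
   latency ~= buffer_frames / sample_rate */
static double buffer_latency_seconds(unsigned int buffer_frames, double sample_rate) {
    return (double)buffer_frames / sample_rate;
}

int main(void) {
    printf("%.1f ms\n", 1000.0 * buffer_latency_seconds(128, 44100.0));   /* ~2.9 ms  */
    printf("%.1f ms\n", 1000.0 * buffer_latency_seconds(4096, 44100.0));  /* ~92.9 ms */
    return 0;
}
```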