AudioQueue ate my buffer (the first 15 milliseconds of it)
I am generating audio programmatically, and I hear gaps of silence between my buffers. When I hook my phone up to a scope, I see that the first few samples of each buffer are missing and replaced by silence. The length of this silence varies from almost nothing to as much as 20 ms.
My first thought was that my original callback function takes too much time, so I replaced it with the shortest one possible: it just re-enqueues the same buffer over and over. I observe the same behavior.
#include <AudioToolbox/AudioToolbox.h>

// AUDIO_SAMPLES_PER_S, my_data, and buffer_data() are defined elsewhere in my code.

AudioQueueRef aq;
AudioQueueBufferRef aq_buffer;
AudioStreamBasicDescription asbd;

void aq_callback (void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    // Shortest possible callback: just re-enqueue the same buffer.
    OSStatus s = AudioQueueEnqueueBuffer(aq, aq_buffer, 0, NULL);
}

void aq_init(void) {
    OSStatus s;

    // 8-bit signed mono linear PCM
    asbd.mSampleRate       = AUDIO_SAMPLES_PER_S;
    asbd.mFormatID         = kAudioFormatLinearPCM;
    asbd.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    asbd.mBytesPerPacket   = 1;
    asbd.mFramesPerPacket  = 1;
    asbd.mBytesPerFrame    = 1;
    asbd.mChannelsPerFrame = 1;
    asbd.mBitsPerChannel   = 8;
    asbd.mReserved         = 0;

    int PPM_PACKETS_PER_SECOND = 50;
    // one buffer is as long as one PPM frame
    int BUFFER_SIZE_BYTES = asbd.mSampleRate / PPM_PACKETS_PER_SECOND * asbd.mBytesPerFrame;

    s = AudioQueueNewOutput(&asbd, aq_callback, NULL, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &aq);
    s = AudioQueueAllocateBuffer(aq, BUFFER_SIZE_BYTES, &aq_buffer);

    // put samples in the buffer
    buffer_data(my_data, aq_buffer);

    s = AudioQueueStart(aq, NULL);
    s = AudioQueueEnqueueBuffer(aq, aq_buffer, 0, NULL);
}
1 Answer
I'm not familiar with the iPhone audio APIs, but they appear to be similar to other audio APIs where you generally queue up more than one buffer. That way, when the system finishes processing the first buffer, it can immediately start on the next one (since it is already queued) while the completion callback for the first buffer is still executing.
Something like:
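The snippet from the original answer is not preserved here; a minimal sketch of the idea, reusing the asker's format and buffer-size setup, might look like the following. NUM_BUFFERS and fill_buffer() are illustrative names that do not appear in the original post.

#include <AudioToolbox/AudioToolbox.h>

#define AUDIO_SAMPLES_PER_S    8000   // placeholder; use the asker's real sample rate
#define PPM_PACKETS_PER_SECOND   50
#define NUM_BUFFERS               3   // two or three are usually enough to keep the queue fed

static AudioQueueRef aq;
static AudioStreamBasicDescription asbd;
static AudioQueueBufferRef aq_buffers[NUM_BUFFERS];

/* Hypothetical helper: writes the next batch of samples into the buffer
 * and sets inBuffer->mAudioDataByteSize accordingly. */
void fill_buffer(AudioQueueBufferRef inBuffer);

void aq_callback(void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    // Refill the buffer that just finished playing and hand it straight back.
    fill_buffer(inBuffer);
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

void aq_init(void) {
    OSStatus s;

    // Same 8-bit signed mono linear PCM format as in the question.
    asbd.mSampleRate       = AUDIO_SAMPLES_PER_S;
    asbd.mFormatID         = kAudioFormatLinearPCM;
    asbd.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    asbd.mBytesPerPacket   = 1;
    asbd.mFramesPerPacket  = 1;
    asbd.mBytesPerFrame    = 1;
    asbd.mChannelsPerFrame = 1;
    asbd.mBitsPerChannel   = 8;

    int BUFFER_SIZE_BYTES = asbd.mSampleRate / PPM_PACKETS_PER_SECOND * asbd.mBytesPerFrame;

    s = AudioQueueNewOutput(&asbd, aq_callback, NULL,
                            CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &aq);

    // Allocate, fill, and enqueue every buffer *before* starting the queue,
    // so the hardware always has the next buffer ready while the callback
    // is still refilling the one that just played.
    for (int i = 0; i < NUM_BUFFERS; i++) {
        s = AudioQueueAllocateBuffer(aq, BUFFER_SIZE_BYTES, &aq_buffers[i]);
        fill_buffer(aq_buffers[i]);
        s = AudioQueueEnqueueBuffer(aq, aq_buffers[i], 0, NULL);
    }

    s = AudioQueueStart(aq, NULL);
}

With two or more buffers in flight, the queue never runs dry while the callback is executing, which is what produces the variable-length silence at the start of each buffer in the single-buffer version.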