How to play audio sample buffers from AVCaptureAudioDataOutput
The main goal of the app I'm trying to make is peer-to-peer video streaming (sort of like FaceTime, but over Bluetooth/WiFi).
Using AVFoundation, I was able to capture video/audio sample buffers. Then I'm sending the video/audio sample buffer data over the network. Now the problem is processing the sample buffer data on the receiving side.
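My capture setup is the usual AVCaptureAudioDataOutput arrangement (sketched here from the standard pattern, not my exact code):

#import <AVFoundation/AVFoundation.h>

// Assumed capture-side setup: an AVCaptureSession routing microphone
// audio to a sample-buffer delegate.
AVCaptureSession *session = [[AVCaptureSession alloc] init];
AVCaptureDevice *mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
NSError *error = nil;
AVCaptureDeviceInput *micInput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:&error];
[session addInput:micInput];

AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
dispatch_queue_t captureQueue = dispatch_queue_create("audio.capture", NULL);
// `self` implements captureOutput:didOutputSampleBuffer:fromConnection:
[audioOutput setSampleBufferDelegate:self queue:captureQueue];
[session addOutput:audioOutput];
[session startRunning];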
As for the video sample buffers, I was able to get a UIImage from each sample buffer. But for the audio sample buffers, I don't know how to process them so I can play the audio.
So the question is: how can I process/play the audio sample buffers?
Right now I'm just plotting the waveform, just like in Apple's Wavy sample code:
// `sampleBuffer` comes in through the AVCaptureAudioDataOutput delegate
// callback (captureOutput:didOutputSampleBuffer:fromConnection:).
CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);
NSUInteger channelIndex = 0;

CMBlockBufferRef audioBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
size_t audioBlockBufferOffset = (channelIndex * numSamples * sizeof(SInt16));
size_t lengthAtOffset = 0;
size_t totalLength = 0;
SInt16 *samples = NULL;
CMBlockBufferGetDataPointer(audioBlockBuffer, audioBlockBufferOffset, &lengthAtOffset, &totalLength, (char **)(&samples));

int numSamplesToRead = 1;
for (int i = 0; i < numSamplesToRead; i++) {
    SInt16 subSet[numSamples / numSamplesToRead];
    for (int j = 0; j < numSamples / numSamplesToRead; j++)
        subSet[j] = samples[(i * (numSamples / numSamplesToRead)) + j];

    // maxValueInArray:ofSize: is my own helper class method.
    SInt16 audioSample = [Util maxValueInArray:subSet ofSize:(numSamples / numSamplesToRead)];
    // Cast before dividing; integer division here would always give 0.
    double scaledSample = (double)audioSample / INT16_MAX;

    // Plot waveform using scaledSample.
    [self updateUI:scaledSample];
}
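One possible approach (a sketch, not code from this post): feed the received raw 16-bit PCM to an Audio Queue for playback. The format values below (44.1 kHz, mono, signed 16-bit) are assumptions and must match what CMSampleBufferGetFormatDescription reports on the capture side:

#import <AudioToolbox/AudioToolbox.h>
#include <string.h>

static AudioQueueRef playbackQueue = NULL;

// Free each buffer once the queue has finished playing it.
static void PlaybackCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    AudioQueueFreeBuffer(inAQ, inBuffer);
}

void StartPlayback(void) {
    // Assumed format: 44.1 kHz mono signed 16-bit linear PCM.
    AudioStreamBasicDescription asbd = {0};
    asbd.mSampleRate       = 44100.0;
    asbd.mFormatID         = kAudioFormatLinearPCM;
    asbd.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    asbd.mChannelsPerFrame = 1;
    asbd.mBitsPerChannel   = 16;
    asbd.mBytesPerFrame    = 2;
    asbd.mFramesPerPacket  = 1;
    asbd.mBytesPerPacket   = 2;
    AudioQueueNewOutput(&asbd, PlaybackCallback, NULL, NULL, NULL, 0, &playbackQueue);
    AudioQueueStart(playbackQueue, NULL);
}

// Call this with the bytes extracted from each received sample buffer
// (the samples/totalLength pair from CMBlockBufferGetDataPointer above).
void EnqueuePCM(const void *bytes, UInt32 byteCount) {
    AudioQueueBufferRef buffer = NULL;
    AudioQueueAllocateBuffer(playbackQueue, byteCount, &buffer);
    memcpy(buffer->mAudioData, bytes, byteCount);
    buffer->mAudioDataByteSize = byteCount;
    AudioQueueEnqueueBuffer(playbackQueue, buffer, 0, NULL);
}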
1 Answer
To show the video you can use the following delegate method (here an ARGB picture is obtained and converted to a Qt (Nokia Qt) QImage; you can replace that with another image type). Place it in the delegate class.
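The answer's original Qt/QImage code isn't shown above; a common iOS-side equivalent (the Core Graphics pattern for a kCVPixelFormatType_32BGRA video output, producing a UIImage instead of a QImage) would look roughly like this:

#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void   *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t  bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t  width       = CVPixelBufferGetWidth(imageBuffer);
    size_t  height      = CVPixelBufferGetHeight(imageBuffer);

    // BGRA pixel data -> CGImage -> UIImage.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace,
                                                 kCGBitmapByteOrder32Little |
                                                 kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:cgImage];

    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // Hand `image` to the display / network layer here.
}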