How to play audio sample buffers from AVCaptureAudioDataOutput

Posted on 2024-10-31 09:25:04


The main goal of the app I'm trying to make is peer-to-peer video streaming (sort of like FaceTime over Bluetooth/Wi-Fi).

Using AVFoundation, I was able to capture video/audio sample buffers. I then send the video/audio sample buffer data to the peer. The problem now is processing the sample buffer data on the receiving side.
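(Editor's note: one detail worth settling before sending raw PCM between peers is byte order. A minimal sketch, assuming 16-bit signed PCM; the helper names `pcm_pack_be`/`pcm_unpack_be` are hypothetical, not from any Apple API:)

```c
#include <stdint.h>
#include <stddef.h>

/* Serialize 16-bit PCM samples into a byte stream in network (big-endian)
 * order before sending, and restore them on the receiving side, so both
 * peers agree on byte order regardless of their native endianness. */

static void pcm_pack_be(const int16_t *samples, size_t count, uint8_t *out) {
    for (size_t i = 0; i < count; i++) {
        out[2 * i]     = (uint8_t)((uint16_t)samples[i] >> 8);   /* high byte first */
        out[2 * i + 1] = (uint8_t)((uint16_t)samples[i] & 0xFF); /* then low byte */
    }
}

static void pcm_unpack_be(const uint8_t *in, size_t count, int16_t *samples) {
    for (size_t i = 0; i < count; i++) {
        samples[i] = (int16_t)(((uint16_t)in[2 * i] << 8) | in[2 * i + 1]);
    }
}
```

On the wire this costs nothing extra over memcpy'ing the buffer, and it removes one source of garbled audio when the two devices differ in endianness.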

For the video sample buffers, I was able to get a UIImage from each sample buffer. But for the audio sample buffers, I don't know how to process them so I can play the audio.

So the question is: how can I process/play the audio sample buffers?

Right now I'm just plotting the waveform, much like Apple's Wavy sample code:

// sampleBuffer comes in via captureOutput:didOutputSampleBuffer:fromConnection:
CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);
NSUInteger channelIndex = 0;

// Get a pointer to the raw 16-bit PCM samples in the block buffer.
CMBlockBufferRef audioBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
size_t audioBlockBufferOffset = channelIndex * numSamples * sizeof(SInt16);
size_t lengthAtOffset = 0;
size_t totalLength = 0;
SInt16 *samples = NULL;
CMBlockBufferGetDataPointer(audioBlockBuffer, audioBlockBufferOffset, &lengthAtOffset, &totalLength, (char **)&samples);

int numSamplesToRead = 1;
for (int i = 0; i < numSamplesToRead; i++) {

    SInt16 subSet[numSamples / numSamplesToRead];
    for (int j = 0; j < numSamples / numSamplesToRead; j++)
        subSet[j] = samples[(i * (numSamples / numSamplesToRead)) + j];

    SInt16 audioSample = [Util maxValueInArray:subSet ofSize:(numSamples / numSamplesToRead)];
    // Cast before dividing -- integer division would truncate to 0 here.
    double scaledSample = (double)audioSample / INT16_MAX;

    // plot waveform using scaledSample
    [self updateUI:scaledSample];
}
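(Editor's note: one detail in the snippet is easy to get wrong in C: dividing an SInt16 by INT16_MAX with two integer operands performs integer division, which truncates every sub-full-scale value to 0. The cast must happen before the division. A standalone check:)

```c
#include <stdint.h>

/* Scale a 16-bit PCM sample to [-1.0, 1.0] for waveform plotting.
 * The cast must come first: `sample / INT16_MAX` with two integer
 * operands truncates to 0 for every value below full scale. */
static double scaled_sample(int16_t sample) {
    return (double)sample / INT16_MAX;
}
```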


毁梦 2024-11-07 09:25:04


To display the video you can use the following (here the ARGB picture is converted to a Qt (Nokia Qt) QImage; you can replace that with any other image type).

Place it in the delegate method:

 - (void)captureOutput:(AVCaptureOutput *)captureOutput
    didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
           fromConnection:(AVCaptureConnection *)connection

// Runs on the capture queue; pre-ARC code, so wrap in an autorelease pool.
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

// Lock the pixel buffer before touching its base address.
CVPixelBufferLockBaseAddress(imageBuffer, 0);

SVideoSample sample; // SVideoSample is this answer's own helper struct

sample.pImage      = (char *)CVPixelBufferGetBaseAddress(imageBuffer);
sample.bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
sample.width       = CVPixelBufferGetWidth(imageBuffer);
sample.height      = CVPixelBufferGetHeight(imageBuffer);

// Wrap the pixel bytes in a QImage without copying; passing bytesPerRow
// keeps rows aligned even when the buffer has per-row padding.
QImage img((unsigned char *)sample.pImage, sample.width, sample.height, sample.bytesPerRow, QImage::Format_ARGB32);

self->m_receiver->eventReceived(img);

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
[pool drain];
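(Editor's note: the answer covers video only. For the audio side that the question actually asks about, a common pattern is to append received PCM to a buffer from the network thread and let an audio render callback — e.g. an AudioQueue or AudioUnit callback — drain it at playback rate. A minimal single-threaded sketch of that buffering layer; all names and the capacity are hypothetical, and a real implementation needs a lock-free design or locking between the two threads:)

```c
#include <stdint.h>
#include <stddef.h>

/* Jitter-buffer sketch: network thread appends received 16-bit PCM,
 * audio render callback drains it. Capacity must be a power of two
 * so the index wrap can use a bitmask. */
#define RING_CAPACITY 4096 /* samples */

typedef struct {
    int16_t data[RING_CAPACITY];
    size_t head; /* next write index */
    size_t tail; /* next read index */
} PCMRing;

static size_t ring_count(const PCMRing *r) {
    return (r->head - r->tail) & (RING_CAPACITY - 1);
}

/* Append received samples; returns how many actually fit. */
static size_t ring_write(PCMRing *r, const int16_t *src, size_t n) {
    size_t free_slots = RING_CAPACITY - 1 - ring_count(r);
    if (n > free_slots) n = free_slots;
    for (size_t i = 0; i < n; i++) {
        r->data[r->head] = src[i];
        r->head = (r->head + 1) & (RING_CAPACITY - 1);
    }
    return n;
}

/* Fill the render callback's output buffer; pads with silence
 * (zeros) on underrun so playback never stalls. */
static void ring_read(PCMRing *r, int16_t *dst, size_t n) {
    for (size_t i = 0; i < n; i++) {
        if (r->tail == r->head) {
            dst[i] = 0; /* underrun: emit silence */
        } else {
            dst[i] = r->data[r->tail];
            r->tail = (r->tail + 1) & (RING_CAPACITY - 1);
        }
    }
}
```

The render callback then only ever copies out of this buffer, which keeps blocking work (networking, allocation) off the real-time audio thread.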