AVCaptureSession only gets one video sample buffer

Posted on 2025-01-05 11:47:12


I am trying to capture video and audio from the iPhone camera and write them out as a movie file with AVAssetWriter, but the output file only contains the first video frame along with the audio.
I have inspected the AVCaptureSession delegate method,

 - (void)captureOutput:(AVCaptureOutput *)captureOutput 
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
       fromConnection:(AVCaptureConnection *)connection 
{ 

and it seems the delegate only receives a single video sample buffer at the start, then receives audio sample buffers from then on, as in the following log:

- Video SampleBuffer captured!
- Audio SampleBuffer captured!
- Audio SampleBuffer captured!
- Audio SampleBuffer captured!

Here is the code where I set up the audio/video inputs and outputs:

//Init Video and audio capture devices component
NSError *error = nil;

// Setup the video input
videoDevice = [AVCaptureDevice defaultDeviceWithMediaType: AVMediaTypeVideo];
// Create a device input with the device and add it to the session.
AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];

// Setup the video output
videoOutput = [[AVCaptureVideoDataOutput alloc] init];
videoOutput.alwaysDiscardsLateVideoFrames = NO;
videoOutput.minFrameDuration = CMTimeMake(20, 600);
videoOutput.videoSettings =
[NSDictionary dictionaryWithObject:
 [NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];     

// Setup the audio input
audioDevice     = [AVCaptureDevice defaultDeviceWithMediaType: AVMediaTypeAudio];
AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioDevice error:&error ];     
// Setup the audio output
audioOutput = [[AVCaptureAudioDataOutput alloc] init];

// Create the session
captureSession = [[AVCaptureSession alloc] init];
[captureSession addInput:videoInput];
[captureSession addInput:audioInput];
[captureSession addOutput:videoOutput];
[captureSession addOutput:audioOutput];

captureSession.sessionPreset = AVCaptureSessionPreset640x480;     

// Setup the queue
dispatch_queue_t videoBufferQueue = dispatch_queue_create("videoBufferQueue", NULL);
// dispatch_queue_t audioBufferQueue = dispatch_get_global_queue("audioBufferQueue",0);
[videoOutput setSampleBufferDelegate:self queue:videoBufferQueue];
[audioOutput setSampleBufferDelegate:self queue:videoBufferQueue];
dispatch_release(videoBufferQueue);
//  dispatch_release(audioBufferQueue);

Here is the code where I set up the AVAssetWriter and the AVAssetWriterInputs:

     NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

        // Add video input
        NSDictionary *videoCompressionProps = [NSDictionary dictionaryWithObjectsAndKeys:
                                               [NSNumber numberWithDouble:128.0*1024.0], AVVideoAverageBitRateKey,
                                               nil ];

        NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                       AVVideoCodecH264, AVVideoCodecKey,
                                       [NSNumber numberWithInt:480], AVVideoWidthKey,
                                       [NSNumber numberWithInt:320], AVVideoHeightKey,
                                       //videoCompressionProps, AVVideoCompressionPropertiesKey,
                                       nil];

        videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                               outputSettings:videoSettings];


        NSParameterAssert(videoWriterInput);
        videoWriterInput.expectsMediaDataInRealTime = YES;


        // Add the audio input
        AudioChannelLayout acl;
        bzero( &acl, sizeof(acl));
        acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;


        NSDictionary* audioOutputSettings = nil;          
       audioOutputSettings = [ NSDictionary dictionaryWithObjectsAndKeys:                       
                                   [ NSNumber numberWithInt:kAudioFormatAppleLossless ], AVFormatIDKey,
                                   [ NSNumber numberWithInt: 16 ], AVEncoderBitDepthHintKey,
                                   [ NSNumber numberWithFloat: 44100.0 ], AVSampleRateKey,
                                   [ NSNumber numberWithInt: 1 ], AVNumberOfChannelsKey,                                      
                                   [ NSData dataWithBytes: &acl length: sizeof( acl ) ], AVChannelLayoutKey,
                                   nil ];

        audioWriterInput = [AVAssetWriterInput 
                             assetWriterInputWithMediaType: AVMediaTypeAudio 
                             outputSettings: audioOutputSettings ];

        audioWriterInput.expectsMediaDataInRealTime = YES;

         NSError *error = nil;
        NSString *betaCompressionDirectory = [NSHomeDirectory() stringByAppendingPathComponent:videoURL];    
        unlink([betaCompressionDirectory UTF8String]);

        videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:betaCompressionDirectory]
                                                  fileType:AVFileTypeQuickTimeMovie
                                                     error:&error];

        if(error)
            NSLog(@"error = %@", [error localizedDescription]);


        // add input
        [videoWriter addInput:videoWriterInput];
        [videoWriter addInput:audioWriterInput];

Here is the code that creates the pixel buffer adaptor and starts the capture:

NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                                           //[NSNumber numberWithInt:kCVPixelFormatType_32ARGB], 
                                                           [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange],
                                                           kCVPixelBufferPixelFormatTypeKey, nil];

    adaptor = [[AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
                                                                                sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary] retain];

    NSLog(@"Adaptor init finished. Going to start capture Session...");

    /*We start the capture*/

    [self.captureSession startRunning]; 

Here is the code from the AVCaptureSession delegate captureOutput method:

lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    if( !CMSampleBufferDataIsReady(sampleBuffer) )
    {
        NSLog( @"sample buffer is not ready. Skipping sample" );
        return;
    }
    if( isRecording == YES )
    {
        switch (videoWriter.status) {
            case AVAssetWriterStatusUnknown:
                NSLog(@"First time execute");
                if (CMTimeCompare(lastSampleTime, kCMTimeZero) == 0) {
                    lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
                }

                [videoWriter startWriting];
                [videoWriter startSessionAtSourceTime:lastSampleTime];

                //Break if not ready, otherwise fall through.
                if (videoWriter.status != AVAssetWriterStatusWriting) {
                    break ;
                }

            case AVAssetWriterStatusWriting:
                if( captureOutput == audioOutput) {
                    NSLog(@"Audio Buffer capped!");
                    if( ![audioWriterInput isReadyForMoreMediaData]) { 
                        break;
                    }

                    @try {
                        if( ![audioWriterInput appendSampleBuffer:sampleBuffer] ) {
                            NSLog(@"Audio Writing Error");
                        } else {
                            [NSThread sleepForTimeInterval:0.03];
                        } 
                    }
                    @catch (NSException *e) {
                        NSLog(@"Audio Exception: %@", [e reason]);
                    }
                }
                else if( captureOutput == videoOutput ) {
                    NSLog(@"Video Buffer capped!");

                    if( ![videoWriterInput isReadyForMoreMediaData]) { 
                        break;
                    }

                    @try {
                        CVImageBufferRef buffer = CMSampleBufferGetImageBuffer(sampleBuffer);
                        CMTime frameTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
                        if (buffer)
                        {
                            if ([videoWriterInput isReadyForMoreMediaData]) {
                                if (![adaptor appendPixelBuffer:buffer withPresentationTime:frameTime]) { //CMTimeMake(frame, fps)
                                    NSLog(@"FAIL");
                                } else {
                                    [NSThread sleepForTimeInterval:0.03];
                                    //  NSLog(@"Success:%d, Time diff with Zero: ", frame);
                                    //  CMTimeShow(frameTime);
                                }
                            } else {
                                NSLog(@"video writer input not ready for more data, skipping frame");
                            }
                        }
                        frame++;
                    }
                    @catch (NSException *e) {
                        NSLog(@"Video Exception: %@", [e reason]);
                    }
                }

                break;
            case AVAssetWriterStatusCompleted:
                return;
            case AVAssetWriterStatusFailed: 
                NSLog(@"Critical Error Writing Queues");
                // bufferWriter->writer_failed = YES ;
                // _broadcastError = YES;
                return;
            case AVAssetWriterStatusCancelled:
                break;
            default:
                break;
        }

    }

Comments (1)

土豪我们做朋友吧 2025-01-12 11:47:12

The CaptureSession does not deliver the output audio sample buffers when it takes too much time to handle the video output; that was the case for me. The video and audio output buffers come to you on the same queue, so you need to leave enough time to handle both before the next buffer arrives.

Most likely, this line is the reason:
[NSThread sleepForTimeInterval:0.03];
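
A minimal sketch of that change, assuming the same names as the question's code (audioOutput, videoOutput, audioWriterInput, videoWriterInput, adaptor): append each buffer and return immediately, with no sleepForTimeInterval: call, so the shared delegate queue is free when the next sample buffer arrives.

    // Sketch only, not the original code: inside
    // captureOutput:didOutputSampleBuffer:fromConnection:, once the writer
    // is already in AVAssetWriterStatusWriting. The sleep calls are removed.
    if (captureOutput == audioOutput) {
        if ([audioWriterInput isReadyForMoreMediaData]) {
            if (![audioWriterInput appendSampleBuffer:sampleBuffer]) {
                NSLog(@"Audio writing error");
            }
        }
    } else if (captureOutput == videoOutput) {
        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CMTime frameTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
        if (pixelBuffer && [videoWriterInput isReadyForMoreMediaData]) {
            if (![adaptor appendPixelBuffer:pixelBuffer withPresentationTime:frameTime]) {
                NSLog(@"Video writing error");
            }
        }
    }
    // Return as quickly as possible; never block this callback with sleepForTimeInterval:.

If handling a video frame still takes longer than one frame interval, another option is to give each data output its own serial dispatch queue in setSampleBufferDelegate:queue: instead of sharing videoBufferQueue, so a slow video callback cannot delay audio delivery.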
