OpenGL ES 2.0 to video on iPad/iPhone

Published 2024-11-17 10:41:22 · 4,801 characters · 2 views · 0 comments


I am at my wits end here despite the good information here on StackOverflow...

I am trying to write an OpenGL renderbuffer to a video on the iPad 2 (using iOS 4.3). More precisely, this is what I am attempting:

A) set up an AVAssetWriterInputPixelBufferAdaptor

  1. create an AVAssetWriter that points to a video file

  2. set up an AVAssetWriterInput with appropriate settings

  3. set up an AVAssetWriterInputPixelBufferAdaptor to add data to the video file

B) write data to a video file using that AVAssetWriterInputPixelBufferAdaptor

  1. render OpenGL code to the screen

  2. get the OpenGL buffer via glReadPixels

  3. create a CVPixelBufferRef from the OpenGL data

  4. append that PixelBuffer to the AVAssetWriterInputPixelBufferAdaptor using the appendPixelBuffer method

However, I am having problems doings this. My strategy right now is to set up the AVAssetWriterInputPixelBufferAdaptor when a button is pressed. Once the AVAssetWriterInputPixelBufferAdaptor is valid, I set a flag to signal the EAGLView to create a pixel buffer and append it to the video file via appendPixelBuffer for a given number of frames.

Right now my code is crashing as it tries to append the second pixel buffer, giving me the following error:

-[__NSCFDictionary appendPixelBuffer:withPresentationTime:]: unrecognized selector sent to instance 0x131db0

Here is my AVAsset setup code (a lot of it was based on Rudy Aramayo's code, which works for normal images but is not set up for textures):

- (void) testVideoWriter {

  //initialize global info
  MOVIE_NAME = @"Documents/Movie.mov";
  CGSize size = CGSizeMake(480, 320);
  frameLength = CMTimeMake(1, 5); 
  currentTime = kCMTimeZero;
  currentFrame = 0;

  NSString *MOVIE_PATH = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];
  NSError *error = nil;

  // Remove any file left over from a previous run before writing
  unlink([MOVIE_PATH UTF8String]);

  videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:MOVIE_PATH] fileType:AVFileTypeQuickTimeMovie error:&error];

  NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:AVVideoCodecH264, AVVideoCodecKey,
                                 [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                 [NSNumber numberWithInt:size.height], AVVideoHeightKey, nil];
  writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];

  //writerInput.expectsMediaDataInRealTime = NO;

  NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey, nil];

  adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                             sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
  [adaptor retain];

  [videoWriter addInput:writerInput];

  [videoWriter startWriting];
  [videoWriter startSessionAtSourceTime:kCMTimeZero];

  VIDEO_WRITER_IS_READY = true;
}

Ok, now that my videoWriter and adaptor are set up, I tell my OpenGL renderer to create a pixel buffer for every frame:

- (void) captureScreenVideo {

  if (!writerInput.readyForMoreMediaData) {
    return;
  }

  CGSize esize = CGSizeMake(eagl.backingWidth, eagl.backingHeight);
  NSInteger myDataLength = esize.width * esize.height * 4;
  GLuint *buffer = (GLuint *) malloc(myDataLength);
  glReadPixels(0, 0, esize.width, esize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
  CVPixelBufferRef pixel_buffer = NULL;
  CVPixelBufferCreateWithBytes(NULL, esize.width, esize.height, kCVPixelFormatType_32BGRA, buffer, 4 * esize.width, NULL, 0, NULL, &pixel_buffer);

  // DON'T free(buffer) before the append: CVPixelBufferCreateWithBytes wraps
  // the bytes rather than copying them.

  if (![adaptor appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
    NSLog(@"FAIL");
  } else {
    NSLog(@"Success:%d", currentFrame);
    currentTime = CMTimeAdd(currentTime, frameLength);
  }

  free(buffer);
  CVPixelBufferRelease(pixel_buffer);

  currentFrame++;

  if (currentFrame > MAX_FRAMES) {
    VIDEO_WRITER_IS_READY = false;
    [writerInput markAsFinished];
    [videoWriter finishWriting];
    [videoWriter release];

    [self moveVideoToSavedPhotos]; 
  }
}
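Independent of the crash, note a format mismatch in the code above: glReadPixels fills the buffer with GL_RGBA bytes, while the pixel buffer is declared kCVPixelFormatType_32BGRA, so red and blue will be swapped in the resulting video. One CPU-side fix is to swizzle the channels before wrapping the bytes; here is a plain-C sketch (the helper name is hypothetical):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical helper: convert tightly packed RGBA bytes (as glReadPixels
   returns with GL_RGBA / GL_UNSIGNED_BYTE) to BGRA in place, so they match
   a pixel buffer declared as kCVPixelFormatType_32BGRA. */
static void rgba_to_bgra_inplace(uint8_t *pixels, size_t pixelCount) {
    for (size_t i = 0; i < pixelCount; i++) {
        uint8_t r = pixels[4 * i + 0];
        pixels[4 * i + 0] = pixels[4 * i + 2]; /* B moves into slot 0 */
        pixels[4 * i + 2] = r;                 /* R moves into slot 2 */
        /* G (slot 1) and A (slot 3) stay where they are */
    }
}
```

A per-pixel CPU swizzle is slow at video sizes; the shader-based approach in the accepted answer below avoids that cost, but this is the minimal correctness fix.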

And finally, I move the Video to the camera roll:

- (void) moveVideoToSavedPhotos {
  ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
  NSString *localVid = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];    
  NSURL* fileURL = [NSURL fileURLWithPath:localVid];

  [library writeVideoAtPathToSavedPhotosAlbum:fileURL
                              completionBlock:^(NSURL *assetURL, NSError *error) {
                                if (error) {   
                                  NSLog(@"%@: Error saving context: %@", [self class], [error localizedDescription]);
                                }
                              }];
  [library release];
}

However, as I said, I am crashing in the call to appendPixelBuffer.

Sorry for sending so much code, but I really don't know what I am doing wrong. It seemed like it would be trivial to update a project which writes images to a video, but I am unable to take the pixel buffer I create via glReadPixels and append it. It's driving me crazy! If anyone has any advice or a working code example of OpenGL --> Video that would be amazing... Thanks!

Comments (7)

表情可笑 2024-11-24 10:41:22


I just got something similar to this working in my open source GPUImage framework, based on the above code, so I thought I'd provide my working solution to this. In my case, I was able to use a pixel buffer pool, as suggested by Srikumar, instead of the manually created pixel buffers for each frame.

I first configure the movie to be recorded:

NSError *error = nil;

assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeAppleM4V error:&error];
if (error != nil)
{
    NSLog(@"Error: %@", error);
}


NSMutableDictionary * outputSettings = [[NSMutableDictionary alloc] init];
[outputSettings setObject: AVVideoCodecH264 forKey: AVVideoCodecKey];
[outputSettings setObject: [NSNumber numberWithInt: videoSize.width] forKey: AVVideoWidthKey];
[outputSettings setObject: [NSNumber numberWithInt: videoSize.height] forKey: AVVideoHeightKey];


assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
assetWriterVideoInput.expectsMediaDataInRealTime = YES;

// You need to use BGRA for the video in order to get realtime encoding. I use a color-swizzling shader to line up glReadPixels' normal RGBA output with the movie input's BGRA.
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                                       [NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey,
                                                       [NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey,
                                                       nil];

assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];

[assetWriter addInput:assetWriterVideoInput];

then use this code to grab each rendered frame using glReadPixels():

CVPixelBufferRef pixel_buffer = NULL;

CVReturn status = CVPixelBufferPoolCreatePixelBuffer (NULL, [assetWriterPixelBufferInput pixelBufferPool], &pixel_buffer);
if ((pixel_buffer == NULL) || (status != kCVReturnSuccess))
{
    return;
}
else
{
    CVPixelBufferLockBaseAddress(pixel_buffer, 0);
    GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixel_buffer);
    glReadPixels(0, 0, videoSize.width, videoSize.height, GL_RGBA, GL_UNSIGNED_BYTE, pixelBufferData);
}

// May need to add a check here, because if two consecutive times with the same value are added to the movie, it aborts recording
CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime],120);

if(![assetWriterPixelBufferInput appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) 
{
    NSLog(@"Problem appending pixel buffer at time: %lld", currentTime.value);
} 
else 
{
//        NSLog(@"Recorded pixel buffer at time: %lld", currentTime.value);
}
CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);

CVPixelBufferRelease(pixel_buffer);

One thing I noticed is that if I tried to append two pixel buffers with the same integer time value (in the basis provided), the entire recording would fail and the input would never take another pixel buffer. Similarly, if I tried to append a pixel buffer after retrieval from the pool failed, it would abort the recording. Thus, the early bailout in the code above.
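The duplicate-timestamp failure described above follows from CMTime being a rational value/timescale pair: CMTimeMakeWithSeconds(seconds, 120) quantizes the time into 120ths of a second, so two frames captured less than about 8 ms apart can land on the same integer tick. A small C model of that quantization (this is a sketch of the idea, not the real CMTime API, and it truncates where CMTime may round):

```c
#include <stdint.h>

/* Model of CMTime's rational timestamp: an integer count of ticks at a
   given timescale. Converting seconds to ticks loses precision, so two
   nearby wall-clock times can map to the same tick value -- which is the
   condition that makes the asset writer abort. */
typedef struct { int64_t value; int32_t timescale; } RationalTime;

static RationalTime time_from_seconds(double seconds, int32_t timescale) {
    RationalTime t = { (int64_t)(seconds * timescale), timescale };
    return t;
}
```

This is why a guard comparing the new timestamp against the last appended one (and skipping or nudging the frame on a collision) is worth adding before appendPixelBuffer:withPresentationTime:.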

In addition to the above code, I use a color-swizzling shader to convert the RGBA rendering in my OpenGL ES scene to BGRA for fast encoding by the AVAssetWriter. With this, I'm able to record 640x480 video at 30 FPS on an iPhone 4.

Again, all of the code for this can be found within the GPUImage repository, under the GPUImageMovieWriter class.

熟人话多 2024-11-24 10:41:22


Looks like a few things to do here -

  1. According to the docs, it looks like the recommended way to create a pixel buffer is to use CVPixelBufferPoolCreatePixelBuffer on the adaptor.pixelBufferPool.
  2. You can then fill in the buffer by getting the address using CVPixelBufferLockBaseAddress followed by CVPixelBufferGetBaseAddress and unlocking the memory using CVPixelBufferUnlockBaseAddress before passing it to the adaptor.
  3. The pixel buffer can be passed to the input when writerInput.readyForMoreMediaData is YES. This means a "wait until ready". A usleep until it becomes YES works, but you can also use key-value observing.

The rest of the stuff is alright. With this much, the original code results in a playable video file.
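One detail worth adding to point 2: CVPixelBufferGetBytesPerRow can report a row stride larger than width * 4 (rows may be padded for alignment), so a tightly packed glReadPixels result should be copied into the locked base address row by row rather than with one big memcpy. A plain-C sketch of that copy (helper name is hypothetical):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical helper: copy tightly packed 32-bit pixels (width * 4 bytes
   per row, as glReadPixels produces) into a destination whose rows may
   carry trailing padding (dstBytesPerRow >= width * 4, as
   CVPixelBufferGetBytesPerRow can report). */
static void copy_rows_with_stride(const uint8_t *src, uint8_t *dst,
                                  size_t width, size_t height,
                                  size_t dstBytesPerRow) {
    for (size_t y = 0; y < height; y++) {
        memcpy(dst + y * dstBytesPerRow, src + y * width * 4, width * 4);
    }
}
```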

空城旧梦 2024-11-24 10:41:22


“In case anyone stumbles across this, I got this to work finally... and understand a bit more about it now than I did. I had an error in the above code where I was freeing the data buffer filled from glReadPixels before calling appendPixelBuffer. That is, I thought it was safe to free it since I had already created the CVPixelBufferRef. I've edited the code above so the pixel buffer actual has data now! – Angus Forbes Jun 28 '11 at 5:58”

This is the real reason for your crash; I ran into the same problem.
Do not free the buffer even though you have already created the CVPixelBufferRef: CVPixelBufferCreateWithBytes wraps your bytes in place rather than copying them.
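For reference, the supported way to hand the bytes off is the releaseCallback parameter of CVPixelBufferCreateWithBytes, which Core Video invokes once the buffer is finished with them. This plain-C model (hypothetical types, not the real Core Video API) shows that ownership contract:

```c
#include <stdlib.h>

/* Model of CVPixelBufferCreateWithBytes' ownership contract: the wrapper
   does NOT copy the bytes; it stores the pointer plus a release callback,
   and invokes the callback only when the last reference is dropped. */
typedef void (*release_cb)(void *bytes);

typedef struct {
    void *bytes;
    release_cb release;
    int refcount;
} WrappedBuffer;

static WrappedBuffer wrap_bytes(void *bytes, release_cb release) {
    WrappedBuffer b = { bytes, release, 1 };
    return b;
}

static void buffer_release(WrappedBuffer *b) {
    if (--b->refcount == 0 && b->release) {
        b->release(b->bytes);   /* only now is it safe to reclaim the bytes */
        b->bytes = NULL;
    }
}

/* Example callback, standing in for the free() the asker was calling too
   early. */
static int g_released = 0;
static void mark_released(void *bytes) { (void)bytes; g_released = 1; }
```

With the real API you would pass free (or your own callback) as releaseCallback and drop your own free() call entirely.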

峩卟喜欢 2024-11-24 10:41:22


Seems like improper memory management. The fact the error states that the message was sent to __NSCFDictionary instead of AVAssetWriterInputPixelBufferAdaptor is highly suspicious.

Why do you need to retain the adaptor manually? This looks hacky since CocoaTouch is fully ARC.

Here's a starter to nail down the memory issue.

吃不饱 2024-11-24 10:41:22


from your error message -[__NSCFDictionary appendPixelBuffer:withPresentationTime:]: unrecognized selector sent to instance 0x131db0
It looks like your pixel buffer adaptor was released, and its address is now occupied by a dictionary.

书间行客 2024-11-24 10:41:22


The only code I've ever gotten to work for this is at:

https://demonicactivity.blogspot.com/2016/11/tech-serious-ios-developers-use-every.html

  // [_context presentRenderbuffer:GL_RENDERBUFFER];

dispatch_async(dispatch_get_main_queue(), ^{
    @autoreleasepool {
        // To capture the output to an OpenGL render buffer...
        NSInteger myDataLength = _backingWidth * _backingHeight * 4;
        GLubyte *buffer = (GLubyte *) malloc(myDataLength);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 8);
        glReadPixels(0, 0, _backingWidth, _backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

        // To swap the pixel buffer to a CoreGraphics context (as a CGImage)
        CGDataProviderRef provider;
        CGColorSpaceRef colorSpaceRef;
        CGImageRef imageRef;
        CVPixelBufferRef pixelBuffer;
        @try {
            provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, &releaseDataCallback);
            int bitsPerComponent = 8;
            int bitsPerPixel = 32;
            int bytesPerRow = 4 * _backingWidth;
            colorSpaceRef = CGColorSpaceCreateDeviceRGB();
            CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
            CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
            imageRef = CGImageCreate(_backingWidth, _backingHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
        } @catch (NSException *exception) {
            NSLog(@"Exception: %@", [exception reason]);
        } @finally {
            if (imageRef) {
                // To convert the CGImage to a pixel buffer (for writing to a file using AVAssetWriter)
                pixelBuffer = [CVCGImageUtil pixelBufferFromCGImage:imageRef];
                // To verify the integrity of the pixel buffer (by converting it back to a CGImage, and then displaying it in a layer)
                imageLayer.contents = (__bridge id)[CVCGImageUtil cgImageFromPixelBuffer:pixelBuffer context:_ciContext];
            }
            CGDataProviderRelease(provider);
            CGColorSpaceRelease(colorSpaceRef);
            CGImageRelease(imageRef);
        }

    }
});

.
.
.

The callback to free the data in the instance of the CGDataProvider class:

static void releaseDataCallback (void *info, const void *data, size_t size) {
    free((void*)data);
}

The CVCGImageUtil class interface and implementation files, respectively:

@import Foundation;
@import CoreMedia;
@import CoreGraphics;
@import QuartzCore;
@import CoreImage;
@import UIKit;

@interface CVCGImageUtil : NSObject

+ (CGImageRef)cgImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer context:(CIContext *)context;

+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image;

+ (CMSampleBufferRef)sampleBufferFromCGImage:(CGImageRef)image;

@end

#import "CVCGImageUtil.h"

@implementation CVCGImageUtil

+ (CGImageRef)cgImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer context:(CIContext *)context
{
    // CVPixelBuffer to CoreImage
    CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    image = [image imageByApplyingTransform:CGAffineTransformMakeRotation(M_PI)];
    CGPoint origin = [image extent].origin;
    image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-origin.x, -origin.y)];

    // CoreImage to CGImage via CoreImage context
    CGImageRef cgImage = [context createCGImage:image fromRect:[image extent]];

    // CGImage to UIImage (OPTIONAL)
    //UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
    //return (CGImageRef)uiImage.CGImage;

    return cgImage;
}

+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
    CGSize frameSize = CGSizeMake(CGImageGetWidth(image),
                                  CGImageGetHeight(image));
    NSDictionary *options =
    [NSDictionary dictionaryWithObjectsAndKeys:
     [NSNumber numberWithBool:YES],
     kCVPixelBufferCGImageCompatibilityKey,
     [NSNumber numberWithBool:YES],
     kCVPixelBufferCGBitmapContextCompatibilityKey,
     nil];
    CVPixelBufferRef pxbuffer = NULL;

    CVReturn status =
    CVPixelBufferCreate(
                        kCFAllocatorDefault, frameSize.width, frameSize.height,
                        kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options,
                        &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(
                                                 pxdata, frameSize.width, frameSize.height,
                                                 8, CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace,
                                                 (CGBitmapInfo)kCGBitmapByteOrder32Little |
                                                 kCGImageAlphaPremultipliedFirst);

    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}

+ (CMSampleBufferRef)sampleBufferFromCGImage:(CGImageRef)image
{
    CVPixelBufferRef pixelBuffer = [CVCGImageUtil pixelBufferFromCGImage:image];
    CMSampleBufferRef newSampleBuffer = NULL;
    CMSampleTimingInfo timingInfo = kCMTimingInfoInvalid;
    CMVideoFormatDescriptionRef videoInfo = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(
                                                 NULL, pixelBuffer, &videoInfo);
    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
                                       pixelBuffer,
                                       true,
                                       NULL,
                                       NULL,
                                       videoInfo,
                                       &timingInfo,
                                       &newSampleBuffer);
    // The sample buffer retains what it needs, so balance the Create calls
    // here to avoid leaking the format description and pixel buffer.
    CFRelease(videoInfo);
    CVPixelBufferRelease(pixelBuffer);

    return newSampleBuffer;
}

@end
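
For context, here is a hedged usage sketch tying the category above back to the question's adaptor-based setup. The names `pixelBufferAdaptor`, `currentTime`, and `frameLength` are assumptions borrowed from the question's `testVideoWriter`, not anything defined by this answer's code:

```objectivec
// Hypothetical glue code: convert a CGImage to a pixel buffer with the
// category above and append it with an explicit presentation time, as the
// question's setup intends. Assumes pixelBufferAdaptor, currentTime, and
// frameLength already exist in your writer code.
CVPixelBufferRef pixelBuffer = [CVCGImageUtil pixelBufferFromCGImage:image];
if (pixelBuffer != NULL && [pixelBufferAdaptor.assetWriterInput isReadyForMoreMediaData]) {
    [pixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:currentTime];
    currentTime = CMTimeAdd(currentTime, frameLength);
}
CVPixelBufferRelease(pixelBuffer);
```

Note that `sampleBufferFromCGImage:` fills in no timing information (`kCMTimingInfoInvalid`), so the adaptor path with an explicit presentation time, as sketched here, is generally the safer route than appending the sample buffer directly.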

That answers part B of your question, to-the-letter. Part A follows in a separate answer...

红ご颜醉 2024-11-24 10:41:22

I've never had this code fail to read and write a video file on an iPhone. In your own implementation, simply replace the call inside the processFrame method (found near the end of the implementation) with a call to whatever method you pass pixel buffers to, or otherwise modify that method to return a pixel buffer generated as in the sample code above. That part is basic, so you should be okay:

//
//  ExportVideo.h
//  ChromaFilterTest
//
//  Created by James Alan Bush on 10/30/16.
//  Copyright © 2016 James Alan Bush. All rights reserved.
//

#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>
#import "GLKitView.h"

@interface ExportVideo : NSObject
{
    AVURLAsset                           *_asset;
    AVAssetReader                        *_reader;
    AVAssetWriter                        *_writer;
    NSString                             *_outputURL;
    NSURL                                *_outURL;
    AVAssetReaderTrackOutput             *_readerAudioOutput;
    AVAssetWriterInput                   *_writerAudioInput;
    AVAssetReaderTrackOutput             *_readerVideoOutput;
    AVAssetWriterInput                   *_writerVideoInput;
    CVPixelBufferRef                      _currentBuffer;
    dispatch_queue_t                      _mainSerializationQueue;
    dispatch_queue_t                      _rwAudioSerializationQueue;
    dispatch_queue_t                      _rwVideoSerializationQueue;
    dispatch_group_t                      _dispatchGroup;
    BOOL                                  _cancelled;
    BOOL                                  _audioFinished;
    BOOL                                  _videoFinished;
    AVAssetWriterInputPixelBufferAdaptor *_pixelBufferAdaptor;
}

@property (readwrite, retain) NSURL *url;
@property (readwrite, retain) GLKitView *renderer;

- (id)initWithURL:(NSURL *)url usingRenderer:(GLKitView *)renderer;
- (void)startProcessing;
@end


//
//  ExportVideo.m
//  ChromaFilterTest
//
//  Created by James Alan Bush on 10/30/16.
//  Copyright © 2016 James Alan Bush. All rights reserved.
//

#import "ExportVideo.h"
#import "GLKitView.h"

@implementation ExportVideo

@synthesize url = _url;

- (id)initWithURL:(NSURL *)url usingRenderer:(GLKitView *)renderer {
    NSLog(@"ExportVideo");
    if (!(self = [super init])) {
        return nil;
    }

    self.url = url;
    self.renderer = renderer;

    NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];
    _mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);

    NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self];
    _rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL);

    NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self];
    _rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL);

    return self;
}

- (void)startProcessing {
    NSDictionary *inputOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
    _asset = [[AVURLAsset alloc] initWithURL:self.url options:inputOptions];
    NSLog(@"URL: %@", self.url);
    _cancelled = NO;
    [_asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler: ^{
        dispatch_async(_mainSerializationQueue, ^{
            if (_cancelled)
                return;
            BOOL success = YES;
            NSError *localError = nil;
            success = ([_asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded);
            if (success)
            {
                NSFileManager *fm = [NSFileManager defaultManager];
                NSString *localOutputPath = [self.url path];
                if ([fm fileExistsAtPath:localOutputPath])
                    //success = [fm removeItemAtPath:localOutputPath error:&localError];
                    success = TRUE;
            }
            if (success)
                success = [self setupAssetReaderAndAssetWriter:&localError];
            if (success)
                success = [self startAssetReaderAndWriter:&localError];
            if (!success)
                [self readingAndWritingDidFinishSuccessfully:success withError:localError];
        });
    }];
}


- (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError
{
    // Create and initialize the asset reader.
    _reader = [[AVAssetReader alloc] initWithAsset:_asset error:outError];
    BOOL success = (_reader != nil);
    if (success)
    {
        // If the asset reader was successfully initialized, do the same for the asset writer.
        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        _outputURL = paths[0];
        NSFileManager *manager = [NSFileManager defaultManager];
        [manager createDirectoryAtPath:_outputURL withIntermediateDirectories:YES attributes:nil error:nil];
        _outputURL = [_outputURL stringByAppendingPathComponent:@"output.mov"];
        [manager removeItemAtPath:_outputURL error:nil];
        _outURL = [NSURL fileURLWithPath:_outputURL];
        _writer = [[AVAssetWriter alloc] initWithURL:_outURL fileType:AVFileTypeQuickTimeMovie error:outError];
        success = (_writer != nil);
    }

    if (success)
    {
        // If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used.
        AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil;
        NSArray *audioTracks = [_asset tracksWithMediaType:AVMediaTypeAudio];
        if ([audioTracks count] > 0)
            assetAudioTrack = [audioTracks objectAtIndex:0];
        NSArray *videoTracks = [_asset tracksWithMediaType:AVMediaTypeVideo];
        if ([videoTracks count] > 0)
            assetVideoTrack = [videoTracks objectAtIndex:0];

        if (assetAudioTrack)
        {
            // If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output.
            NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
            _readerAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack outputSettings:decompressionAudioSettings];
            [_reader addOutput:_readerAudioOutput];
            // Then, set the compression settings to 128kbps AAC and create the asset writer input.
            AudioChannelLayout stereoChannelLayout = {
                .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
                .mChannelBitmap = 0,
                .mNumberChannelDescriptions = 0
            };
            NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
            NSDictionary *compressionAudioSettings = @{
                                                       AVFormatIDKey         : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
                                                       AVEncoderBitRateKey   : [NSNumber numberWithInteger:128000],
                                                       AVSampleRateKey       : [NSNumber numberWithInteger:44100],
                                                       AVChannelLayoutKey    : channelLayoutAsData,
                                                       AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
                                                       };
            _writerAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType] outputSettings:compressionAudioSettings];
            [_writer addInput:_writerAudioInput];
        }

        if (assetVideoTrack)
        {
            // If there is a video track to read, set the decompression settings for YUV and create the asset reader output.
            NSDictionary *decompressionVideoSettings = @{
                                                         (id)kCVPixelBufferPixelFormatTypeKey     : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange],
                                                         (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]
                                                         };
            _readerVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
            [_reader addOutput:_readerVideoOutput];
            CMFormatDescriptionRef formatDescription = NULL;
            // Grab the video format descriptions from the video track and grab the first one if it exists.
            NSArray *formatDescriptions = [assetVideoTrack formatDescriptions];
            if ([formatDescriptions count] > 0)
                formatDescription = (__bridge CMFormatDescriptionRef)[formatDescriptions objectAtIndex:0];
            CGSize trackDimensions = {
                .width = 0.0,
                .height = 0.0,
            };
            // If the video track had a format description, grab the track dimensions from there. Otherwise, grab them direcly from the track itself.
            if (formatDescription)
                trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false);
            else
                trackDimensions = [assetVideoTrack naturalSize];
            NSDictionary *compressionSettings = nil;
            // If the video track had a format description, attempt to grab the clean aperture settings and pixel aspect ratio used by the video.
            if (formatDescription)
            {
                NSDictionary *cleanAperture = nil;
                NSDictionary *pixelAspectRatio = nil;
                CFDictionaryRef cleanApertureFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_CleanAperture);
                if (cleanApertureFromCMFormatDescription)
                {
                    cleanAperture = @{
                                      AVVideoCleanApertureWidthKey            : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureWidth),
                                      AVVideoCleanApertureHeightKey           : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHeight),
                                      AVVideoCleanApertureHorizontalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHorizontalOffset),
                                      AVVideoCleanApertureVerticalOffsetKey   : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureVerticalOffset)
                                      };
                }
                CFDictionaryRef pixelAspectRatioFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_PixelAspectRatio);
                if (pixelAspectRatioFromCMFormatDescription)
                {
                    pixelAspectRatio = @{
                                         AVVideoPixelAspectRatioHorizontalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioHorizontalSpacing),
                                         AVVideoPixelAspectRatioVerticalSpacingKey   : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioVerticalSpacing)
                                         };
                }
                // Add whichever settings we could grab from the format description to the compression settings dictionary.
                if (cleanAperture || pixelAspectRatio)
                {
                    NSMutableDictionary *mutableCompressionSettings = [NSMutableDictionary dictionary];
                    if (cleanAperture)
                        [mutableCompressionSettings setObject:cleanAperture forKey:AVVideoCleanApertureKey];
                    if (pixelAspectRatio)
                        [mutableCompressionSettings setObject:pixelAspectRatio forKey:AVVideoPixelAspectRatioKey];
                    compressionSettings = mutableCompressionSettings;
                }
            }
            // Create the video settings dictionary for H.264. It must be a
            // genuinely mutable dictionary (a cast literal would crash on
            // setObject:forKey: below), so take a mutableCopy.
            NSMutableDictionary *videoSettings = [@{
                                                    AVVideoCodecKey  : AVVideoCodecH264,
                                                    AVVideoWidthKey  : [NSNumber numberWithDouble:trackDimensions.width],
                                                    AVVideoHeightKey : [NSNumber numberWithDouble:trackDimensions.height]
                                                    } mutableCopy];
            // Put the compression settings into the video settings dictionary if we were able to grab them.
            if (compressionSettings)
                [videoSettings setObject:compressionSettings forKey:AVVideoCompressionPropertiesKey];
            // Create the asset writer input and add it to the asset writer.
            _writerVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType] outputSettings:videoSettings];
            NSDictionary *pixelBufferAdaptorSettings = @{
                                                         (id)kCVPixelBufferPixelFormatTypeKey     : @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange),
                                                         (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary],
                                                         (id)kCVPixelBufferWidthKey               : [NSNumber numberWithDouble:trackDimensions.width],
                                                         (id)kCVPixelBufferHeightKey              : [NSNumber numberWithDouble:trackDimensions.height]
                                                         };

            _pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:_writerVideoInput sourcePixelBufferAttributes:pixelBufferAdaptorSettings];

            [_writer addInput:_writerVideoInput];
        }
    }
    return success;
}

- (BOOL)startAssetReaderAndWriter:(NSError **)outError
{
    BOOL success = YES;
    // Attempt to start the asset reader.
    success = [_reader startReading];
    if (!success) {
        *outError = [_reader error];
        NSLog(@"Reader error");
    }
    if (success)
    {
        // If the reader started successfully, attempt to start the asset writer.
        success = [_writer startWriting];
        if (!success) {
            *outError = [_writer error];
            NSLog(@"Writer error");
        }
    }

    if (success)
    {
        // If the asset reader and writer both started successfully, create the dispatch group where the reencoding will take place and start a sample-writing session.
        _dispatchGroup = dispatch_group_create();
        [_writer startSessionAtSourceTime:kCMTimeZero];
        _audioFinished = NO;
        _videoFinished = NO;

        if (_writerAudioInput)
        {
            // If there is audio to reencode, enter the dispatch group before beginning the work.
            dispatch_group_enter(_dispatchGroup);
            // Specify the block to execute when the asset writer is ready for audio media data, and specify the queue to call it on.
            [_writerAudioInput requestMediaDataWhenReadyOnQueue:_rwAudioSerializationQueue usingBlock:^{
                // Because the block is called asynchronously, check to see whether its task is complete.
                if (_audioFinished)
                    return;
                BOOL completedOrFailed = NO;
                // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                while ([_writerAudioInput isReadyForMoreMediaData] && !completedOrFailed)
                {
                    // Get the next audio sample buffer, and append it to the output file.
                    CMSampleBufferRef sampleBuffer = [_readerAudioOutput copyNextSampleBuffer];
                    if (sampleBuffer != NULL)
                    {
                        BOOL success = [_writerAudioInput appendSampleBuffer:sampleBuffer];
                        CFRelease(sampleBuffer);
                        sampleBuffer = NULL;
                        completedOrFailed = !success;
                    }
                    else
                    {
                        completedOrFailed = YES;
                    }
                }
                if (completedOrFailed)
                {
                    // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the audio work has finished).
                    BOOL oldFinished = _audioFinished;
                    _audioFinished = YES;
                    if (oldFinished == NO)
                    {
                        [_writerAudioInput markAsFinished];
                    }
                    dispatch_group_leave(_dispatchGroup);
                }
            }];
        }

        if (_writerVideoInput)
        {
            // If we had video to reencode, enter the dispatch group before beginning the work.
            dispatch_group_enter(_dispatchGroup);
            // Specify the block to execute when the asset writer is ready for video media data, and specify the queue to call it on.
            [_writerVideoInput requestMediaDataWhenReadyOnQueue:_rwVideoSerializationQueue usingBlock:^{
                // Because the block is called asynchronously, check to see whether its task is complete.
                if (_videoFinished)
                    return;
                BOOL completedOrFailed = NO;
                // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                while ([_writerVideoInput isReadyForMoreMediaData] && !completedOrFailed)
                {
                    // Get the next video sample buffer, and append it to the output file.
                    CMSampleBufferRef sampleBuffer = [_readerVideoOutput copyNextSampleBuffer];

                    if (sampleBuffer != NULL)
                    {
                        // Only touch the sample buffer if the reader actually
                        // produced one; CMSampleBufferGetImageBuffer(NULL) is invalid.
                        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
                        _currentBuffer = pixelBuffer;
                        [self performSelectorOnMainThread:@selector(processFrame) withObject:nil waitUntilDone:YES];

                        //BOOL success = [_writerVideoInput appendSampleBuffer:sampleBuffer];
                        BOOL success = [_pixelBufferAdaptor appendPixelBuffer:_currentBuffer withPresentationTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
                        CFRelease(sampleBuffer);
                        sampleBuffer = NULL;
                        completedOrFailed = !success;
                    }
                    else
                    {
                        completedOrFailed = YES;
                    }
                }
                if (completedOrFailed)
                {
                    // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the video work has finished).
                    BOOL oldFinished = _videoFinished;
                    _videoFinished = YES;
                    if (oldFinished == NO)
                    {
                        [_writerVideoInput markAsFinished];
                    }
                    dispatch_group_leave(_dispatchGroup);
                }
            }];
        }
        // Set up the notification that the dispatch group will send when the audio and video work have both finished.
        dispatch_group_notify(_dispatchGroup, _mainSerializationQueue, ^{
            BOOL finalSuccess = YES;
            NSError *finalError = nil;
            // Check to see if the work has finished due to cancellation.
            if (_cancelled)
            {
                // If so, cancel the reader and writer.
                [_reader cancelReading];
                [_writer cancelWriting];
            }
            else
            {
                // If cancellation didn't occur, first make sure that the asset reader didn't fail.
                if ([_reader status] == AVAssetReaderStatusFailed)
                {
                    finalSuccess = NO;
                    finalError = [_reader error];
                    NSLog(@"_reader finalError: %@", finalError);
                }
                // If the asset reader didn't fail, attempt to stop the asset writer and check for any errors.
                [_writer finishWritingWithCompletionHandler:^{
                    [self readingAndWritingDidFinishSuccessfully:finalSuccess withError:[_writer error]];
                }];
            }
            // Call the method to handle completion, and pass in the appropriate parameters to indicate whether reencoding was successful.

        });
    }
    // Return success here to indicate whether the asset reader and writer were started successfully.
    return success;
}

- (void)readingAndWritingDidFinishSuccessfully:(BOOL)success withError:(NSError *)error
{
    if (!success)
    {
        // If the reencoding process failed, we need to cancel the asset reader and writer.
        [_reader cancelReading];
        [_writer cancelWriting];
        dispatch_async(dispatch_get_main_queue(), ^{
            // Handle any UI tasks here related to failure.
        });
    }
    else
    {
        // Reencoding was successful, reset booleans.
        _cancelled = NO;
        _videoFinished = NO;
        _audioFinished = NO;
        dispatch_async(dispatch_get_main_queue(), ^{
            UISaveVideoAtPathToSavedPhotosAlbum(_outputURL, nil, nil, nil);
        });
    }
    NSLog(@"readingAndWritingDidFinishSuccessfully success = %@ : Error = %@", (success == 0) ? @"NO" : @"YES", error);
}

- (void)processFrame {

    if (_currentBuffer) {
        if (kCVReturnSuccess == CVPixelBufferLockBaseAddress(_currentBuffer, kCVPixelBufferLock_ReadOnly))
        {
            [self.renderer processPixelBuffer:_currentBuffer];
            CVPixelBufferUnlockBaseAddress(_currentBuffer, kCVPixelBufferLock_ReadOnly);
        } else {
            NSLog(@"processFrame END");
            return;
        }
    }
}

@end
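
For the original question (writing the GL renderbuffer itself rather than re-encoding a file), one hedged adaptation of processFrame is to overwrite the locked pixel buffer's contents with the current framebuffer via glReadPixels instead of handing the buffer to a renderer. This is a sketch under assumptions, not the code above: it presumes a 32BGRA pixel buffer (the adaptor above is configured for 4:2:0 YUV), dimensions matching the framebuffer, and support for BGRA readback (GL_EXT_read_format_bgra):

```objectivec
// Hypothetical processFrame variant for the OpenGL-to-video case.
// Assumptions: _currentBuffer is 32BGRA, sized to match the framebuffer,
// and the GL context supporting BGRA readback is current on this thread.
- (void)processFrame {
    if (_currentBuffer == NULL)
        return;
    // Lock for writing (flag 0, not kCVPixelBufferLock_ReadOnly),
    // since glReadPixels fills the buffer.
    if (CVPixelBufferLockBaseAddress(_currentBuffer, 0) == kCVReturnSuccess) {
        void   *baseAddress = CVPixelBufferGetBaseAddress(_currentBuffer);
        GLsizei width       = (GLsizei)CVPixelBufferGetWidth(_currentBuffer);
        GLsizei height      = (GLsizei)CVPixelBufferGetHeight(_currentBuffer);
        // Read the current renderbuffer back as BGRA, straight into the
        // pixel buffer's backing memory.
        glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, baseAddress);
        CVPixelBufferUnlockBaseAddress(_currentBuffer, 0);
    }
}
```

One caveat: glReadPixels assumes tightly packed rows, so if CVPixelBufferGetBytesPerRow reports padding beyond width * 4, read into an intermediate buffer and copy row by row instead.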

I've never failed to read and write a video file to iPhone with this code; in your implementation, you will simply need to substitute the calls in the processFrame method, found at the end of the implementation method, to calls to whatever methods to which you pass pixel buffers as parameters to its equivalent, and otherwise modify that method to return the pixel buffer generated as per the sample code above--that's basic, so you should be okay:

//
//  ExportVideo.h
//  ChromaFilterTest
//
//  Created by James Alan Bush on 10/30/16.
//  Copyright © 2016 James Alan Bush. All rights reserved.
//

#import <Foundation/Foundation.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>
#import "GLKitView.h"

@interface ExportVideo : NSObject
{
    AVURLAsset                           *_asset;
    AVAssetReader                        *_reader;
    AVAssetWriter                        *_writer;
    NSString                             *_outputURL;
    NSURL                                *_outURL;
    AVAssetReaderTrackOutput             *_readerAudioOutput;
    AVAssetWriterInput                   *_writerAudioInput;
    AVAssetReaderTrackOutput             *_readerVideoOutput;
    AVAssetWriterInput                   *_writerVideoInput;
    CVPixelBufferRef                      _currentBuffer;
    dispatch_queue_t                      _mainSerializationQueue;
    dispatch_queue_t                      _rwAudioSerializationQueue;
    dispatch_queue_t                      _rwVideoSerializationQueue;
    dispatch_group_t                      _dispatchGroup;
    BOOL                                  _cancelled;
    BOOL                                  _audioFinished;
    BOOL                                  _videoFinished;
    AVAssetWriterInputPixelBufferAdaptor *_pixelBufferAdaptor;
}

@property (readwrite, retain) NSURL *url;
@property (readwrite, retain) GLKitView *renderer;

- (id)initWithURL:(NSURL *)url usingRenderer:(GLKitView *)renderer;
- (void)startProcessing;
@end


//
//  ExportVideo.m
//  ChromaFilterTest
//
//  Created by James Alan Bush on 10/30/16.
//  Copyright © 2016 James Alan Bush. All rights reserved.
//

#import "ExportVideo.h"
#import "GLKitView.h"

@implementation ExportVideo

@synthesize url = _url;

- (id)initWithURL:(NSURL *)url usingRenderer:(GLKitView *)renderer {
    NSLog(@"ExportVideo");
    if (!(self = [super init])) {
        return nil;
    }

    self.url = url;
    self.renderer = renderer;

    NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];
    _mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);

    NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self];
    _rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL);

    NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self];
    _rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL);

    return self;
}

- (void)startProcessing {
    NSDictionary *inputOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
    _asset = [[AVURLAsset alloc] initWithURL:self.url options:inputOptions];
    NSLog(@"URL: %@", self.url);
    _cancelled = NO;
    [_asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler: ^{
        dispatch_async(_mainSerializationQueue, ^{
            if (_cancelled)
                return;
            BOOL success = YES;
            NSError *localError = nil;
            success = ([_asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded);
            if (success)
            {
                NSFileManager *fm = [NSFileManager defaultManager];
                NSString *localOutputPath = [self.url path];
                if ([fm fileExistsAtPath:localOutputPath])
                    //success = [fm removeItemAtPath:localOutputPath error:&localError];
                    success = TRUE;
            }
            if (success)
                success = [self setupAssetReaderAndAssetWriter:&localError];
            if (success)
                success = [self startAssetReaderAndWriter:&localError];
            if (!success)
                [self readingAndWritingDidFinishSuccessfully:success withError:localError];
        });
    }];
}


- (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError
{
    // Create and initialize the asset reader.
    _reader = [[AVAssetReader alloc] initWithAsset:_asset error:outError];
    BOOL success = (_reader != nil);
    if (success)
    {
        // If the asset reader was successfully initialized, do the same for the asset writer.
        NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        _outputURL = paths[0];
        NSFileManager *manager = [NSFileManager defaultManager];
        [manager createDirectoryAtPath:_outputURL withIntermediateDirectories:YES attributes:nil error:nil];
        _outputURL = [_outputURL stringByAppendingPathComponent:@"output.mov"];
        [manager removeItemAtPath:_outputURL error:nil];
        _outURL = [NSURL fileURLWithPath:_outputURL];
        _writer = [[AVAssetWriter alloc] initWithURL:_outURL fileType:AVFileTypeQuickTimeMovie error:outError];
        success = (_writer != nil);
    }

    if (success)
    {
        // If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used.
        AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil;
        NSArray *audioTracks = [_asset tracksWithMediaType:AVMediaTypeAudio];
        if ([audioTracks count] > 0)
            assetAudioTrack = [audioTracks objectAtIndex:0];
        NSArray *videoTracks = [_asset tracksWithMediaType:AVMediaTypeVideo];
        if ([videoTracks count] > 0)
            assetVideoTrack = [videoTracks objectAtIndex:0];

        if (assetAudioTrack)
        {
            // If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output.
            NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
            _readerAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack outputSettings:decompressionAudioSettings];
            [_reader addOutput:_readerAudioOutput];
            // Then, set the compression settings to 128kbps AAC and create the asset writer input.
            AudioChannelLayout stereoChannelLayout = {
                .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
                .mChannelBitmap = 0,
                .mNumberChannelDescriptions = 0
            };
            NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
            NSDictionary *compressionAudioSettings = @{
                                                       AVFormatIDKey         : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
                                                       AVEncoderBitRateKey   : [NSNumber numberWithInteger:128000],
                                                       AVSampleRateKey       : [NSNumber numberWithInteger:44100],
                                                       AVChannelLayoutKey    : channelLayoutAsData,
                                                       AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
                                                       };
            _writerAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType] outputSettings:compressionAudioSettings];
            [_writer addInput:_writerAudioInput];
        }

        if (assetVideoTrack)
        {
            // If there is a video track to read, set the decompression settings for YUV and create the asset reader output.
            NSDictionary *decompressionVideoSettings = @{
                                                         (id)kCVPixelBufferPixelFormatTypeKey     : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange],
                                                         (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]
                                                         };
            _readerVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
            [_reader addOutput:_readerVideoOutput];
            CMFormatDescriptionRef formatDescription = NULL;
            // Grab the video format descriptions from the video track and grab the first one if it exists.
            NSArray *formatDescriptions = [assetVideoTrack formatDescriptions];
            if ([formatDescriptions count] > 0)
                formatDescription = (__bridge CMFormatDescriptionRef)[formatDescriptions objectAtIndex:0];
            CGSize trackDimensions = {
                .width = 0.0,
                .height = 0.0,
            };
            // If the video track has a format description, grab the track dimensions from there. Otherwise, grab them directly from the track itself.
            if (formatDescription)
                trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false);
            else
                trackDimensions = [assetVideoTrack naturalSize];
            NSDictionary *compressionSettings = nil;
            // If the video track had a format description, attempt to grab the clean aperture settings and pixel aspect ratio used by the video.
            if (formatDescription)
            {
                NSDictionary *cleanAperture = nil;
                NSDictionary *pixelAspectRatio = nil;
                CFDictionaryRef cleanApertureFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_CleanAperture);
                if (cleanApertureFromCMFormatDescription)
                {
                    cleanAperture = @{
                                      AVVideoCleanApertureWidthKey            : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureWidth),
                                      AVVideoCleanApertureHeightKey           : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHeight),
                                      AVVideoCleanApertureHorizontalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHorizontalOffset),
                                      AVVideoCleanApertureVerticalOffsetKey   : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureVerticalOffset)
                                      };
                }
                CFDictionaryRef pixelAspectRatioFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_PixelAspectRatio);
                if (pixelAspectRatioFromCMFormatDescription)
                {
                    pixelAspectRatio = @{
                                         AVVideoPixelAspectRatioHorizontalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioHorizontalSpacing),
                                         AVVideoPixelAspectRatioVerticalSpacingKey   : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioVerticalSpacing)
                                         };
                }
                // Add whichever settings we could grab from the format description to the compression settings dictionary.
                if (cleanAperture || pixelAspectRatio)
                {
                    NSMutableDictionary *mutableCompressionSettings = [NSMutableDictionary dictionary];
                    if (cleanAperture)
                        [mutableCompressionSettings setObject:cleanAperture forKey:AVVideoCleanApertureKey];
                    if (pixelAspectRatio)
                        [mutableCompressionSettings setObject:pixelAspectRatio forKey:AVVideoPixelAspectRatioKey];
                    compressionSettings = mutableCompressionSettings;
                }
            }
            // Create the video settings dictionary for H.264.
            // Note: this must be a genuinely mutable dictionary; merely casting an
            // immutable @{} literal to NSMutableDictionary crashes on setObject:forKey: below.
            NSMutableDictionary *videoSettings = [NSMutableDictionary dictionaryWithDictionary:@{
                                                                           AVVideoCodecKey  : AVVideoCodecH264,
                                                                           AVVideoWidthKey  : [NSNumber numberWithDouble:trackDimensions.width],
                                                                           AVVideoHeightKey : [NSNumber numberWithDouble:trackDimensions.height]
                                                                           }];
            // Put the compression settings into the video settings dictionary if we were able to grab them.
            if (compressionSettings)
                [videoSettings setObject:compressionSettings forKey:AVVideoCompressionPropertiesKey];
            // Create the asset writer input and add it to the asset writer.
            _writerVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType] outputSettings:videoSettings];
            NSDictionary *pixelBufferAdaptorSettings = @{
                                                         (id)kCVPixelBufferPixelFormatTypeKey     : @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange),
                                                         (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary],
                                                         (id)kCVPixelBufferWidthKey               : [NSNumber numberWithDouble:trackDimensions.width],
                                                         (id)kCVPixelBufferHeightKey              : [NSNumber numberWithDouble:trackDimensions.height]
                                                         };

            _pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:_writerVideoInput sourcePixelBufferAttributes:pixelBufferAdaptorSettings];

            [_writer addInput:_writerVideoInput];
        }
    }
    return success;
}

- (BOOL)startAssetReaderAndWriter:(NSError **)outError
{
    BOOL success = YES;
    // Attempt to start the asset reader.
    success = [_reader startReading];
    if (!success) {
        if (outError)
            *outError = [_reader error];
        NSLog(@"Reader error: %@", [_reader error]);
    }
    if (success)
    {
        // If the reader started successfully, attempt to start the asset writer.
        success = [_writer startWriting];
        if (!success) {
            if (outError)
                *outError = [_writer error];
            NSLog(@"Writer error: %@", [_writer error]);
        }
    }

    if (success)
    {
        // If the asset reader and writer both started successfully, create the dispatch group where the reencoding will take place and start a sample-writing session.
        _dispatchGroup = dispatch_group_create();
        [_writer startSessionAtSourceTime:kCMTimeZero];
        _audioFinished = NO;
        _videoFinished = NO;

        if (_writerAudioInput)
        {
            // If there is audio to reencode, enter the dispatch group before beginning the work.
            dispatch_group_enter(_dispatchGroup);
            // Specify the block to execute when the asset writer is ready for audio media data, and specify the queue to call it on.
            [_writerAudioInput requestMediaDataWhenReadyOnQueue:_rwAudioSerializationQueue usingBlock:^{
                // Because the block is called asynchronously, check to see whether its task is complete.
                if (_audioFinished)
                    return;
                BOOL completedOrFailed = NO;
                // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                while ([_writerAudioInput isReadyForMoreMediaData] && !completedOrFailed)
                {
                    // Get the next audio sample buffer, and append it to the output file.
                    CMSampleBufferRef sampleBuffer = [_readerAudioOutput copyNextSampleBuffer];
                    if (sampleBuffer != NULL)
                    {
                        BOOL success = [_writerAudioInput appendSampleBuffer:sampleBuffer];
                        CFRelease(sampleBuffer);
                        sampleBuffer = NULL;
                        completedOrFailed = !success;
                    }
                    else
                    {
                        completedOrFailed = YES;
                    }
                }
                if (completedOrFailed)
                {
                    // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the audio work has finished).
                    BOOL oldFinished = _audioFinished;
                    _audioFinished = YES;
                    if (oldFinished == NO)
                    {
                        [_writerAudioInput markAsFinished];
                    }
                    dispatch_group_leave(_dispatchGroup);
                }
            }];
        }

        if (_writerVideoInput)
        {
            // If we had video to reencode, enter the dispatch group before beginning the work.
            dispatch_group_enter(_dispatchGroup);
            // Specify the block to execute when the asset writer is ready for video media data, and specify the queue to call it on.
            [_writerVideoInput requestMediaDataWhenReadyOnQueue:_rwVideoSerializationQueue usingBlock:^{
                // Because the block is called asynchronously, check to see whether its task is complete.
                if (_videoFinished)
                    return;
                BOOL completedOrFailed = NO;
                // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                while ([_writerVideoInput isReadyForMoreMediaData] && !completedOrFailed)
                {
                    // Get the next video sample buffer; copyNextSampleBuffer returns NULL at the end of the track.
                    CMSampleBufferRef sampleBuffer = [_readerVideoOutput copyNextSampleBuffer];
                    if (sampleBuffer != NULL)
                    {
                        // Hand the decoded pixel buffer to the renderer on the main thread before appending it.
                        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
                        _currentBuffer = pixelBuffer;
                        [self performSelectorOnMainThread:@selector(processFrame) withObject:nil waitUntilDone:YES];

                        BOOL success = [_pixelBufferAdaptor appendPixelBuffer:_currentBuffer withPresentationTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
                        CFRelease(sampleBuffer);
                        sampleBuffer = NULL;
                        completedOrFailed = !success;
                    }
                    else
                    {
                        completedOrFailed = YES;
                    }
                }
                if (completedOrFailed)
                {
                    // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the video work has finished).
                    BOOL oldFinished = _videoFinished;
                    _videoFinished = YES;
                    if (oldFinished == NO)
                    {
                        [_writerVideoInput markAsFinished];
                    }
                    dispatch_group_leave(_dispatchGroup);
                }
            }];
        }
        // Set up the notification that the dispatch group will send when the audio and video work have both finished.
        dispatch_group_notify(_dispatchGroup, _mainSerializationQueue, ^{
            BOOL finalSuccess = YES;
            NSError *finalError = nil;
            // Check to see if the work has finished due to cancellation.
            if (_cancelled)
            {
                // If so, cancel the reader and writer.
                [_reader cancelReading];
                [_writer cancelWriting];
            }
            else
            {
                // If cancellation didn't occur, first make sure that the asset reader didn't fail.
                if ([_reader status] == AVAssetReaderStatusFailed)
                {
                    finalSuccess = NO;
                    finalError = [_reader error];
                    NSLog(@"_reader finalError: %@", finalError);
                }
                // Finish the writer and hand the result (including any reader error) to the completion method.
                [_writer finishWritingWithCompletionHandler:^{
                    [self readingAndWritingDidFinishSuccessfully:finalSuccess withError:(finalError ?: [_writer error])];
                }];
            }
        });
    }
    // Return success here to indicate whether the asset reader and writer were started successfully.
    return success;
}

- (void)readingAndWritingDidFinishSuccessfully:(BOOL)success withError:(NSError *)error
{
    if (!success)
    {
        // If the reencoding process failed, we need to cancel the asset reader and writer.
        [_reader cancelReading];
        [_writer cancelWriting];
        dispatch_async(dispatch_get_main_queue(), ^{
            // Handle any UI tasks here related to failure.
        });
    }
    else
    {
        // Reencoding was successful, reset booleans.
        _cancelled = NO;
        _videoFinished = NO;
        _audioFinished = NO;
        dispatch_async(dispatch_get_main_queue(), ^{
            UISaveVideoAtPathToSavedPhotosAlbum(_outputURL, nil, nil, nil);
        });
    }
    NSLog(@"readingAndWritingDidFinishSuccessfully success = %@ : Error = %@", (success == 0) ? @"NO" : @"YES", error);
}

- (void)processFrame {

    if (_currentBuffer) {
        if (kCVReturnSuccess == CVPixelBufferLockBaseAddress(_currentBuffer, kCVPixelBufferLock_ReadOnly))
        {
            [self.renderer processPixelBuffer:_currentBuffer];
            CVPixelBufferUnlockBaseAddress(_currentBuffer, kCVPixelBufferLock_ReadOnly);
        } else {
            NSLog(@"processFrame END");
            return;
        }
    }
}

@end
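One detail worth spelling out when going the other way (glReadPixels into a CVPixelBuffer, as in the original question): OpenGL returns rows bottom-to-top, while Core Video/AVFoundation expect rows top-to-bottom, so the readback must be flipped row by row before appending. A minimal framework-free sketch of that flip, assuming tightly packed BGRA rows (the function name and layout are illustrative, not from the code above):

```c
#include <stdint.h>
#include <string.h>

/* Copy src into dst with the row order reversed.
 * glReadPixels fills rows bottom-to-top (OpenGL's origin is bottom-left);
 * a CVPixelBuffer handed to AVAssetWriterInputPixelBufferAdaptor expects
 * rows top-to-bottom. bytes_per_pixel is 4 for kCVPixelFormatType_32BGRA. */
static void flip_rows(uint8_t *dst, const uint8_t *src,
                      size_t width, size_t height, size_t bytes_per_pixel)
{
    size_t bytes_per_row = width * bytes_per_pixel;
    for (size_t y = 0; y < height; y++) {
        memcpy(dst + y * bytes_per_row,
               src + (height - 1 - y) * bytes_per_row,
               bytes_per_row);
    }
}
```

In the real code the destination would be CVPixelBufferGetBaseAddress() of a locked pixel buffer, and the destination stride would come from CVPixelBufferGetBytesPerRow(), which may be larger than width * 4.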