OpenGL ES 2.0 to Video on iPad/iPhone
Despite the great information here on StackOverflow, I am at my wits' end...
I am trying to write an OpenGL renderbuffer to a video on the iPad 2 (running iOS 4.3). More precisely, this is what I am attempting:
A) Set up an AVAssetWriterInputPixelBufferAdaptor
- Create an AVAssetWriter that points to a video file
- Set up an AVAssetWriterInput with the appropriate settings
- Set up an AVAssetWriterInputPixelBufferAdaptor to add data to the video file
B) Write data to the video file using that AVAssetWriterInputPixelBufferAdaptor
- Render OpenGL code to the screen
- Grab the OpenGL buffer via glReadPixels
- Create a CVPixelBufferRef from the OpenGL data
- Append that pixel buffer to the AVAssetWriterInputPixelBufferAdaptor using the appendPixelBuffer method
However, I am having problems doing this. My current strategy is to set up the AVAssetWriterInputPixelBufferAdaptor when a button is pressed. Once the AVAssetWriterInputPixelBufferAdaptor is valid, I set a flag that tells the EAGLView to create a pixel buffer and append it to the video file via appendPixelBuffer for a given number of frames.
Right now my code is crashing as it tries to append the second pixel buffer, giving me the following error:
-[__NSCFDictionary appendPixelBuffer:withPresentationTime:]: unrecognized selector sent to instance 0x131db0
Here is my AVAsset setup code (much of it is based on Rudy Aramayo's code, which works for normal images but is not set up for textures):
- (void) testVideoWriter {
  // Initialize global info
  MOVIE_NAME = @"Documents/Movie.mov";
  CGSize size = CGSizeMake(480, 320);
  frameLength = CMTimeMake(1, 5);
  currentTime = kCMTimeZero;
  currentFrame = 0;

  NSString *MOVIE_PATH = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];
  NSError *error = nil;

  // Delete any leftover file from a previous run before writing
  unlink([MOVIE_PATH UTF8String]);
  videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:MOVIE_PATH]
                                          fileType:AVFileTypeQuickTimeMovie
                                             error:&error];

  NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                 AVVideoCodecH264, AVVideoCodecKey,
                                 [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                 [NSNumber numberWithInt:size.height], AVVideoHeightKey,
                                 nil];
  writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                   outputSettings:videoSettings];
  //writerInput.expectsMediaDataInRealTime = NO;

  NSDictionary *sourcePixelBufferAttributesDictionary =
      [NSDictionary dictionaryWithObjectsAndKeys:
          [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
          nil];
  adaptor = [AVAssetWriterInputPixelBufferAdaptor
             assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                        sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
  [adaptor retain];  // the adaptor is autoreleased; keep it alive past this method

  [videoWriter addInput:writerInput];
  [videoWriter startWriting];
  [videoWriter startSessionAtSourceTime:kCMTimeZero];
  VIDEO_WRITER_IS_READY = true;
}
Okay, now that my videoWriter and adaptor are set up, I tell my OpenGL renderer to create a pixel buffer for every frame:
- (void) captureScreenVideo {
  if (!writerInput.readyForMoreMediaData) {
    return;
  }

  // Read the rendered frame back from OpenGL
  CGSize esize = CGSizeMake(eagl.backingWidth, eagl.backingHeight);
  NSInteger myDataLength = esize.width * esize.height * 4;
  GLuint *buffer = (GLuint *) malloc(myDataLength);
  glReadPixels(0, 0, esize.width, esize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

  // Wrap the pixel data; CVPixelBufferCreateWithBytes does NOT copy it
  CVPixelBufferRef pixel_buffer = NULL;
  CVPixelBufferCreateWithBytes(NULL, esize.width, esize.height, kCVPixelFormatType_32BGRA,
                               buffer, 4 * esize.width, NULL, 0, NULL, &pixel_buffer);
  /* DON'T FREE THIS BEFORE USING pixel_buffer! */
  //free(buffer);

  if (![adaptor appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
    NSLog(@"FAIL");
  } else {
    NSLog(@"Success:%d", currentFrame);
    currentTime = CMTimeAdd(currentTime, frameLength);
  }
  free(buffer);
  CVPixelBufferRelease(pixel_buffer);

  currentFrame++;
  if (currentFrame > MAX_FRAMES) {
    VIDEO_WRITER_IS_READY = false;
    [writerInput markAsFinished];
    [videoWriter finishWriting];
    [videoWriter release];
    [self moveVideoToSavedPhotos];
  }
}
Finally, I move the video to the camera roll:
- (void) moveVideoToSavedPhotos {
  ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
  NSString *localVid = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];
  NSURL *fileURL = [NSURL fileURLWithPath:localVid];
  [library writeVideoAtPathToSavedPhotosAlbum:fileURL
                              completionBlock:^(NSURL *assetURL, NSError *error) {
                                if (error) {
                                  NSLog(@"%@: Error saving context: %@",
                                        [self class], [error localizedDescription]);
                                }
                              }];
  [library release];
}
However, as I said, I am crashing in the call to appendPixelBuffer.
Sorry for sending so much code, but I really don't know what I'm doing wrong. Updating a project that writes images to a video seemed trivial, but I'm unable to take the pixel buffer created via glReadPixels and append it. It's driving me crazy! If anyone has any advice or a working code example of OpenGL --> video, that would be amazing... Thanks!
Comments (7)
I just got something similar to this working in my open source GPUImage framework, based on the above code, so I thought I'd provide my working solution to this. In my case, I was able to use a pixel buffer pool, as suggested by Srikumar, instead of the manually created pixel buffers for each frame.
I first configure the movie to be recorded:
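A minimal sketch of such a configuration, assuming the ivar names assetWriter, assetWriterVideoInput, and assetWriterPixelBufferInput and a movieURL/videoSize already in scope (the authoritative code lives in the GPUImageMovieWriter class):

NSError *error = nil;
assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL
                                        fileType:AVFileTypeQuickTimeMovie
                                           error:&error];

NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                AVVideoCodecH264, AVVideoCodecKey,
                                [NSNumber numberWithInt:videoSize.width], AVVideoWidthKey,
                                [NSNumber numberWithInt:videoSize.height], AVVideoHeightKey,
                                nil];
assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                           outputSettings:outputSettings];
assetWriterVideoInput.expectsMediaDataInRealTime = YES;

// BGRA buffers from the adaptor's pool can be handed to the encoder without conversion.
NSDictionary *sourcePixelBufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
    [NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey,
    [NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey,
    nil];
assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor
    assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput
                               sourcePixelBufferAttributes:sourcePixelBufferAttributes];

[assetWriter addInput:assetWriterVideoInput];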
then use this code to grab each rendered frame using glReadPixels():
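Again a minimal sketch with the assumed ivar names from above; startTime would be an NSDate captured when recording began, and the scene is assumed to have been rendered through the swizzling shader mentioned below, so the bytes read out are already BGRA:

if (!assetWriterVideoInput.readyForMoreMediaData) {
    return; // drop this frame rather than stall the GL thread
}

CVPixelBufferRef pixel_buffer = NULL;
CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL,
                      [assetWriterPixelBufferInput pixelBufferPool], &pixel_buffer);
if ((pixel_buffer == NULL) || (status != kCVReturnSuccess)) {
    return; // early bailout: appending after a failed pool retrieval aborts the recording
}

// Read the rendered frame straight into the pool-owned buffer.
CVPixelBufferLockBaseAddress(pixel_buffer, 0);
GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixel_buffer);
glReadPixels(0, 0, videoSize.width, videoSize.height, GL_RGBA, GL_UNSIGNED_BYTE, pixelBufferData);

// Distinct, increasing presentation times are essential (see the note below).
CMTime currentTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime], 120);
if (![assetWriterPixelBufferInput appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
    NSLog(@"Problem appending pixel buffer at time: %lld", currentTime.value);
}

CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
CVPixelBufferRelease(pixel_buffer);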
One thing I noticed is that if I tried to append two pixel buffers with the same integer time value (in the basis provided), the entire recording would fail and the input would never take another pixel buffer. Similarly, if I tried to append a pixel buffer after retrieval from the pool failed, it would abort the recording. Thus, the early bailout in the code above.
In addition to the above code, I use a color-swizzling shader to convert the RGBA rendering in my OpenGL ES scene to BGRA for fast encoding by the AVAssetWriter. With this, I'm able to record 640x480 video at 30 FPS on an iPhone 4.
Again, all of the code for this can be found within the GPUImage repository, under the GPUImageMovieWriter class.
Looks like a few things to do here -
- Use CVPixelBufferPoolCreatePixelBuffer on the adaptor.pixelBufferPool.
- Fill the buffer by getting its address with CVPixelBufferLockBaseAddress followed by CVPixelBufferGetBaseAddress, and unlock the memory using CVPixelBufferUnlockBaseAddress before passing it to the adaptor.
- Wait until writerInput.readyForMoreMediaData is YES. This means a "wait until ready". A usleep until it becomes YES works, but you can also use key-value observing.
The rest of the stuff is alright. With this much, the original code results in a playable video file.
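For illustration, a minimal sketch of those three fixes applied to the question's captureScreenVideo (variable names match the question's code; the usleep interval and the straight memcpy are simplifying assumptions — a robust copy would honor CVPixelBufferGetBytesPerRow):

// Wait until the input is ready (KVO would be cleaner than polling).
while (!writerInput.readyForMoreMediaData) {
    usleep(10000);
}

// Draw a buffer from the adaptor's pool instead of wrapping malloc'd bytes.
CVPixelBufferRef pixel_buffer = NULL;
CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &pixel_buffer);

// Lock, copy the glReadPixels output in, then unlock before appending.
CVPixelBufferLockBaseAddress(pixel_buffer, 0);
memcpy(CVPixelBufferGetBaseAddress(pixel_buffer), buffer, myDataLength);
CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);

[adaptor appendPixelBuffer:pixel_buffer withPresentationTime:currentTime];
CVPixelBufferRelease(pixel_buffer);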
“In case anyone stumbles across this, I got this to work finally... and understand a bit more about it now than I did. I had an error in the above code where I was freeing the data buffer filled from glReadPixels before calling appendPixelBuffer. That is, I thought it was safe to free it since I had already created the CVPixelBufferRef. I've edited the code above so the pixel buffer actual has data now! – Angus Forbes Jun 28 '11 at 5:58”
This is the real reason for your crash; I ran into this problem too.
Do not free the buffer, even though you have already created the CVPixelBufferRef.
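Since CVPixelBufferCreateWithBytes wraps the memory rather than copying it, one way to reclaim it safely is to pass a release callback that CoreVideo invokes once the pixel buffer is done with the bytes. A minimal sketch (the callback name is illustrative):

// Called by CoreVideo when the pixel buffer no longer needs the memory.
static void releasePixelData(void *releaseRefCon, const void *baseAddress) {
    free((void *)baseAddress);
}

// Create the buffer with the callback and drop the manual free(buffer):
CVPixelBufferCreateWithBytes(NULL, esize.width, esize.height, kCVPixelFormatType_32BGRA,
                             buffer, 4 * esize.width,
                             releasePixelData, NULL, NULL, &pixel_buffer);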
Seems like improper memory management. The fact that the error says the message was sent to __NSCFDictionary instead of AVAssetWriterInputPixelBufferAdaptor is highly suspicious.
Why do you need to retain the adaptor manually? This looks hacky since Cocoa Touch is fully ARC.
Here's a starter to nail down the memory issue.
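For illustration, one way to rule out an over-release (sketched in the pre-ARC, manual retain/release style the question uses) is to hold the adaptor in a retained property and turn on zombies while debugging:

// In the class interface:
@property (nonatomic, retain) AVAssetWriterInputPixelBufferAdaptor *adaptor;

// When setting it up, go through the property so ownership is explicit:
self.adaptor = [AVAssetWriterInputPixelBufferAdaptor
                assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                           sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];

// While debugging, enable Zombie Objects (Scheme > Diagnostics, or NSZombieEnabled=YES)
// so a message to a deallocated adaptor reports the real class instead of whatever
// object (here an __NSCFDictionary) happened to reuse its memory.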
From your error message
-[__NSCFDictionary appendPixelBuffer:withPresentationTime:]: unrecognized selector sent to instance 0x131db0
it looks like your pixel buffer adaptor was released and is now pointing to a dictionary.
The only code I've ever gotten to work for this is at:
https://demonicactivity.blogspot.com/2016/11/tech-serious-ios-developers-use-every.html
The callback to free the data in the instance of the CGDataProvider class:
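Such a callback conventionally has the CGDataProviderReleaseDataCallback shape; a minimal sketch (the function name is illustrative, not necessarily the one in the linked post):

// CGDataProvider invokes this once it no longer needs the bytes.
static void releaseDataCallback(void *info, const void *data, size_t size) {
    free((void *)data);
}

// Typical use when wrapping a malloc'd pixel buffer for CGImage creation:
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer,
                                                          myDataLength,
                                                          releaseDataCallback);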
The CVCGImageUtil class interface and implementation files, respectively:
That answers part B of your question, to-the-letter. Part A follows in a separate answer...
I've never failed to read and write a video file to the iPhone with this code. In your implementation, you simply need to replace the calls in the processFrame method (found at the end of the implementation) with calls to whatever methods you pass pixel buffers to, and otherwise modify that method to return the pixel buffer generated as per the sample code above. That's basic, so you should be okay:
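As a rough illustration of that substitution (the signature here is hypothetical; the real processFrame is in the linked post):

// Hypothetical shape only: hand each frame to your own pipeline,
// e.g. the adaptor append shown in the answers above.
- (CVPixelBufferRef)processFrame:(CVPixelBufferRef)pixelBuffer
                presentationTime:(CMTime)time {
    [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:time];
    return pixelBuffer;
}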