Playing video using OpenGL-ES on iOS
I'm trying to play a video (MP4/H.263) on iOS, but getting really fuzzy results.
Here's the code to initialize the asset reading:
mTextureHandle = [self createTexture:CGSizeMake(400, 400)];

NSURL *url = [NSURL fileURLWithPath:file];
mAsset = [[AVURLAsset alloc] initWithURL:url options:nil];

NSArray *tracks = [mAsset tracksWithMediaType:AVMediaTypeVideo];
mTrack = [tracks objectAtIndex:0];
NSLog(@"Tracks: %lu", (unsigned long)[tracks count]);

// Ask the track output to decode into 32-bit BGRA pixel buffers.
NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary *settings = [[NSDictionary alloc] initWithObjectsAndKeys:value, key, nil];

mOutput = [[AVAssetReaderTrackOutput alloc] initWithTrack:mTrack
                                            outputSettings:settings];
mReader = [[AVAssetReader alloc] initWithAsset:mAsset error:nil];
[mReader addOutput:mOutput];
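One thing not shown here: the reader has to be started before copyNextSampleBuffer returns anything. A minimal sketch of the driving loop, assuming the same mReader and mOutput as above:

// copyNextSampleBuffer returns NULL until startReading has been called.
if (![mReader startReading]) {
    NSLog(@"Failed to start reading: %@", mReader.error);
    return;
}

// Pull frames for as long as the reader stays in the reading state;
// each one is uploaded to the texture as shown below.
while (mReader.status == AVAssetReaderStatusReading) {
    CMSampleBufferRef sampleBuffer = [mOutput copyNextSampleBuffer];
    if (sampleBuffer == NULL)
        break;
    // ... texture upload goes here ...
    CFRelease(sampleBuffer);
}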
So much for the reader init; now the actual texturing:
CMSampleBufferRef sampleBuffer = [mOutput copyNextSampleBuffer];
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
glBindTexture(GL_TEXTURE_2D, mTextureHandle);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 600, 400, 0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress( pixelBuffer ));
CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
CFRelease(sampleBuffer);
Everything works well ... except the rendered image comes out sliced and skewed.
I've even tried looking into AVAssetTrack's preferred transformation matrix, to no avail, since it always returns CGAffineTransformIdentity.
Side-note: if I switch the source to the camera, the image renders fine. Am I missing some decompression step? Shouldn't the asset reader handle that?
Thanks!
Code: https://github.com/shaded-enmity/objcpp-opengl-video
Comments (1)
I think the CMSampleBuffer pads each row for performance reasons, so you need to give the texture the right width. Try setting the texture width to CVPixelBufferGetBytesPerRow(pixelBuffer) / 4 (assuming your video format uses 4 bytes per pixel; adjust the divisor for other formats).
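To make that concrete, here's a minimal sketch of the upload step using the padded width, reusing the mTextureHandle and pixelBuffer names from the question; the texture-coordinate note at the end assumes a simple textured quad:

size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t width       = CVPixelBufferGetWidth(pixelBuffer);
size_t height      = CVPixelBufferGetHeight(pixelBuffer);

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
glBindTexture(GL_TEXTURE_2D, mTextureHandle);
// Upload bytesPerRow / 4 texels per row (4 bytes per BGRA pixel) instead of
// the nominal video width, so any row padding no longer shears the image.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             (GLsizei)(bytesPerRow / 4), (GLsizei)height,
             0, GL_BGRA_EXT, GL_UNSIGNED_BYTE,
             CVPixelBufferGetBaseAddress(pixelBuffer));
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

// The padding ends up on the right edge of the texture; to hide it, scale
// the horizontal texture coordinates by width / (bytesPerRow / 4.0).

On OpenGL ES 3.0 you could instead keep the real width and set glPixelStorei(GL_UNPACK_ROW_LENGTH, bytesPerRow / 4), but ES 2.0 doesn't support that pixel-store parameter, hence the width trick.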