Is it possible to use video as a texture for GL in iOS?
Is it possible using video (pre-rendered, compressed with H.264) as texture for GL in iOS?
If possible, how to do it? And any playback quality/frame-rate or limitations?
2 Answers
As of iOS 4.0, you can use AVCaptureDeviceInput to get the camera as a device input and connect it to an AVCaptureVideoDataOutput with any object you like set as the delegate. By setting a 32bpp BGRA format for the camera, the delegate object will receive each frame from the camera in a format just perfect for handing immediately to glTexImage2D (or glTexSubImage2D if the device doesn't support non-power-of-two textures; I think the MBX devices fall into this category).

There are a bunch of frame size and frame rate options; at a guess you'll have to tweak those depending on how much else you want to use the GPU for. I found that a completely trivial scene, with just a textured quad showing the latest frame and redrawn only when a new frame arrives, was able on an iPhone 4 to display that device's maximum 720p 24fps feed without any noticeable lag. I haven't performed any more thorough benchmarking than that, so hopefully someone else can advise.
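For concreteness, here is a minimal Objective-C sketch of that capture setup, assuming ARC; the class name VideoTextureSource, the 720p preset and the serial delegate queue are illustrative choices, not something specified in the answer.

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreVideo/CoreVideo.h>

// Illustrative delegate object that owns the capture session.
@interface VideoTextureSource : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate>
@property (nonatomic, strong) AVCaptureSession *session;
@end

@implementation VideoTextureSource

- (BOOL)startCapture
{
    self.session = [[AVCaptureSession alloc] init];
    self.session.sessionPreset = AVCaptureSessionPreset1280x720;

    // Camera as a device input.
    AVCaptureDevice *camera =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *error = nil;
    AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
    if (!input) return NO;
    [self.session addInput:input];

    // Video data output configured for 32bpp BGRA, delivered to this
    // object on a serial queue.
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    output.videoSettings = @{
        (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
    };
    [output setSampleBufferDelegate:self
                              queue:dispatch_queue_create("camera.frames",
                                                          DISPATCH_QUEUE_SERIAL)];
    [self.session addOutput:output];

    [self.session startRunning];
    return YES;
}

@end
```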
In principle, per the API, frames can come back with some in-memory padding between scanlines, which would mean some shuffling of contents before posting off to GL, so you do need to implement a code path for that. In practice, speaking purely empirically, it appears that the current version of iOS never returns images in that form, so it isn't really a performance issue.
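Continuing that sketch, the delegate callback below is one hedged way to hand each frame to GL, including a fallback for the padded-scanline case just mentioned. It assumes an EAGLContext has been made current on the delegate queue, that _texture is an existing GL_TEXTURE_2D name created elsewhere, and that GL_BGRA_EXT (from <OpenGLES/ES2/glext.h>) is available, which iOS devices generally support.

```objc
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

// Implemented on VideoTextureSource from the sketch above.
// Assumption: GLuint _texture is an ivar holding an existing texture name.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    const uint8_t *base = (const uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);

    glBindTexture(GL_TEXTURE_2D, _texture);
    if (bytesPerRow == width * 4) {
        // Tightly packed rows: hand the whole frame straight to GL.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height,
                     0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, base);
    } else {
        // Padded scanlines: allocate the texture, then upload row by row.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height,
                     0, GL_BGRA_EXT, GL_UNSIGNED_BYTE, NULL);
        for (size_t y = 0; y < height; y++) {
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, (GLint)y, (GLsizei)width, 1,
                            GL_BGRA_EXT, GL_UNSIGNED_BYTE, base + y * bytesPerRow);
        }
    }

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    // Signal the rendering side here that a fresh frame is available.
}
```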
EDIT: it's now very close to three years later. In the interim Apple has released iOS 5, 6 and 7. With 5 they introduced CVOpenGLESTexture and CVOpenGLESTextureCache, which are now the smart way to pipe video from a capture device into OpenGL. Apple supplies sample code in its GLCameraRipple project; the particularly interesting parts are in RippleViewController.m, specifically its setupAVCapture and captureOutput:didOutputSampleBuffer:fromConnection: (see lines 196–329). Sadly the terms and conditions prevent a duplication of the code here without attaching the whole project, but the step-by-step setup is:

1. create a texture cache with CVOpenGLESTextureCacheCreate, and create an AVCaptureSession;
2. get an AVCaptureDevice for video;
3. create an AVCaptureDeviceInput with that capture device;
4. attach an AVCaptureVideoDataOutput and tell it to call you as a sample buffer delegate.

Upon receiving each sample buffer:

1. get the CVImageBufferRef from it;
2. use CVOpenGLESTextureCacheCreateTextureFromImage to get Y and UV CVOpenGLESTextureRefs from the CV image buffer (a rough sketch of these steps follows below).
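Under stated assumptions, here is a rough Objective-C sketch of that texture-cache path; it is not the GLCameraRipple code itself. It assumes the AVCaptureVideoDataOutput was configured for kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange (so the pixel buffer carries separate Y and CbCr planes), and the names eaglContext, _textureCache and the texture unit choices are illustrative. A fragment shader combining the two planes does the final YUV-to-RGB conversion when drawing.

```objc
#import <AVFoundation/AVFoundation.h>
#import <CoreVideo/CVOpenGLESTextureCache.h>
#import <OpenGLES/ES2/gl.h>

// Created once, after the EAGLContext exists (illustrative ivars):
//   CVOpenGLESTextureCacheRef _textureCache;
//   CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
//                                eaglContext, NULL, &_textureCache);

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // 1. Get the CVImageBufferRef from the sample buffer.
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    // 2. Ask the cache for a luminance (Y) texture backed by plane 0...
    CVOpenGLESTextureRef lumaTexture = NULL;
    CVOpenGLESTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, _textureCache, pixelBuffer, NULL,
        GL_TEXTURE_2D, GL_LUMINANCE,
        (GLsizei)CVPixelBufferGetWidthOfPlane(pixelBuffer, 0),
        (GLsizei)CVPixelBufferGetHeightOfPlane(pixelBuffer, 0),
        GL_LUMINANCE, GL_UNSIGNED_BYTE, 0, &lumaTexture);

    // ...and a chrominance (CbCr) texture backed by plane 1.
    CVOpenGLESTextureRef chromaTexture = NULL;
    CVOpenGLESTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, _textureCache, pixelBuffer, NULL,
        GL_TEXTURE_2D, GL_LUMINANCE_ALPHA,
        (GLsizei)CVPixelBufferGetWidthOfPlane(pixelBuffer, 1),
        (GLsizei)CVPixelBufferGetHeightOfPlane(pixelBuffer, 1),
        GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, 1, &chromaTexture);

    // Bind each texture via the target and name the cache hands back
    // (set GL_TEXTURE_MIN_FILTER and wrap modes as needed).
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(CVOpenGLESTextureGetTarget(lumaTexture),
                  CVOpenGLESTextureGetName(lumaTexture));
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(CVOpenGLESTextureGetTarget(chromaTexture),
                  CVOpenGLESTextureGetName(chromaTexture));

    // ... draw with a shader that combines the Y and CbCr planes ...

    CFRelease(lumaTexture);
    CFRelease(chromaTexture);
    CVOpenGLESTextureCacheFlush(_textureCache, 0);
}
```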
Use RosyWriter for a MUCH better example of how to do OpenGL video rendering. Performance is very good, especially if you reduce the frame rate (~10% at 1080p/30, >=5% at 1080p/15).