How to use AVMutableComposition and CALayers on iOS
I'm planning to render content in a view on iOS using an AVMutableComposition. I want to combine video coming from one of the iPhone cameras with content created in a layer; a mutable composition seems to fit the bill here, as it can composite layers into the video content.
It's not critical that the compositing be done while the video is being recorded. I'm also happy to mix the required data into a composition that is then rendered (via AVAssetExportSession) to a file after the initial video recording has completed.
What I don't get, though, is how a CALayer is supposed to know what to draw at a given time during the composition, in the context of the AV framework.
My layer content depends on a timeline, and that timeline describes what needs to be drawn within the layer. So if I embed a layer into the mutable composition and then export that composition via AVAssetExportSession, how will the CALayer instance know what time it's supposed to produce content for?
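A minimal sketch of the setup being described, assuming a `videoAsset` already recorded to disk and a writable `outputURL` (both hypothetical names), with AVAssetExportSession rendering the composition to a file after recording:

```swift
import AVFoundation

// Sketch only: copies a recorded video track into a mutable composition
// and exports it to a file. `videoAsset` and `outputURL` are assumptions.
func exportComposition(from videoAsset: AVAsset, to outputURL: URL) {
    let composition = AVMutableComposition()
    guard
        let sourceTrack = videoAsset.tracks(withMediaType: .video).first,
        let compositionTrack = composition.addMutableTrack(
            withMediaType: .video,
            preferredTrackID: kCMPersistentTrackID_Invalid)
    else { return }

    // Copy the recorded footage into the mutable composition.
    try? compositionTrack.insertTimeRange(
        CMTimeRange(start: .zero, duration: videoAsset.duration),
        of: sourceTrack,
        at: .zero)

    // Render the composition to a file after recording has finished.
    guard let session = AVAssetExportSession(
        asset: composition,
        presetName: AVAssetExportPresetHighestQuality)
    else { return }
    session.outputURL = outputURL
    session.outputFileType = .mov
    session.exportAsynchronously {
        print("Export finished with status: \(session.status.rawValue)")
    }
}
```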
1 Answer
I've had a similar thing going on. I would recommend checking out the WWDC 2010 AVEditDemo application source. It contains example code that does exactly what you need: placing a CALayer on top of a video track and running an animation on top of it.
You can also check out my efforts on the subject at: Mix video with static image in CALayer using AVVideoCompositionCoreAnimationTool
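To address the timing question directly: the layer never pulls content at render time. AVVideoCompositionCoreAnimationTool replays Core Animation animations against the video's timeline during export, so you express your timeline as CAAnimations whose beginTime values are in video time, using AVCoreAnimationBeginTimeAtZero for the start of the video (a literal beginTime of 0 means "now" to Core Animation). Below is a minimal sketch under that assumption; `composition`, `compositionTrack`, and `videoSize` are hypothetical names carried over from the snippet in the question:

```swift
import AVFoundation
import UIKit

// Sketch: builds a video composition that renders an animated CALayer
// on top of the video track. Assumes `composition` already contains
// `compositionTrack` (as in the earlier snippet) and `videoSize` is
// the track's natural size.
func makeVideoComposition(for composition: AVMutableComposition,
                          track compositionTrack: AVMutableCompositionTrack,
                          videoSize: CGSize) -> AVMutableVideoComposition {
    // Layer tree: the video layer is where AVFoundation draws each source
    // frame; the overlay layer carries the timeline-driven content on top.
    let parentLayer = CALayer()
    let videoLayer = CALayer()
    let overlayLayer = CALayer()
    parentLayer.frame = CGRect(origin: .zero, size: videoSize)
    videoLayer.frame = parentLayer.frame
    overlayLayer.frame = parentLayer.frame
    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(overlayLayer)

    // Placeholder content; real drawing would go here.
    overlayLayer.backgroundColor = UIColor.red.cgColor

    // The layer does not ask "what time is it?" -- instead, attach
    // CAAnimations whose beginTime is expressed in the video timeline.
    // AVCoreAnimationBeginTimeAtZero marks the start of the composition.
    let fade = CABasicAnimation(keyPath: "opacity")
    fade.fromValue = 1.0
    fade.toValue = 0.0
    fade.beginTime = AVCoreAnimationBeginTimeAtZero + 2.0 // 2 s into the video
    fade.duration = 3.0
    fade.isRemovedOnCompletion = false
    overlayLayer.add(fade, forKey: "fade")

    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = videoSize
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videoLayer, in: parentLayer)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero,
                                        duration: composition.duration)
    instruction.layerInstructions =
        [AVMutableVideoCompositionLayerInstruction(assetTrack: compositionTrack)]
    videoComposition.instructions = [instruction]

    // Assign this to AVAssetExportSession.videoComposition before exporting.
    return videoComposition
}
```

The upshot is that anything that must appear at time t has to be encoded as an animation beginning at AVCoreAnimationBeginTimeAtZero + t, rather than computed on demand inside the layer when a frame is rendered.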