iOS AVFoundation - Show a time display over a video and export
I want to show a display overlay over a video and export that video including this display. I had a look into the AVFoundation framework, AVCompositions, AVAssets etc., but I still have no idea how to achieve this. There is a class called AVSynchronizedLayer which lets you animate things synchronous to the video, but I do not want to animate; I just want to overlay the time display onto every single frame of the video. Any advice?
Regards
2 Answers
Something like this...
(NB: culled from a much larger project, so I may have included some unnecessary pieces by accident.)
You'll need to grab the CALayer of your clock / animation and set it to the var myClockLayer (used about a third of the way down by the animation tool).
This also assumes your incoming video has just two tracks - audio and video. If you have more, you'll need to set the track ID in "asTrackID:2" more carefully.
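A minimal Swift sketch of the approach described above, using `AVVideoCompositionCoreAnimationTool` to composite a clock layer onto every frame during export. The function name, the `myClockLayer` parameter, and the fixed 30 fps frame duration are assumptions; adapt them to your project:

```swift
import AVFoundation
import QuartzCore

// Burn an overlay CALayer (e.g. a clock) into an exported copy of `asset`.
func exportWithOverlay(asset: AVAsset, myClockLayer: CALayer, to outputURL: URL) throws {
    let composition = AVMutableComposition()
    guard let videoTrack = asset.tracks(withMediaType: .video).first,
          let compVideoTrack = composition.addMutableTrack(
              withMediaType: .video,
              preferredTrackID: kCMPersistentTrackID_Invalid) else { return }
    try compVideoTrack.insertTimeRange(
        CMTimeRange(start: .zero, duration: asset.duration),
        of: videoTrack, at: .zero)

    // Layer tree for the animation tool: the video is rendered into
    // `videoLayer`, and `myClockLayer` sits on top of it in `parentLayer`.
    let size = videoTrack.naturalSize
    let videoLayer = CALayer()
    let parentLayer = CALayer()
    parentLayer.frame = CGRect(origin: .zero, size: size)
    videoLayer.frame = parentLayer.frame
    parentLayer.addSublayer(videoLayer)
    parentLayer.addSublayer(myClockLayer)  // overlay drawn on every frame

    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = size
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(
        postProcessingAsVideoLayer: videoLayer, in: parentLayer)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: asset.duration)
    instruction.layerInstructions =
        [AVMutableVideoCompositionLayerInstruction(assetTrack: compVideoTrack)]
    videoComposition.instructions = [instruction]

    guard let exporter = AVAssetExportSession(
        asset: composition, presetName: AVAssetExportPresetHighestQuality) else { return }
    exporter.outputURL = outputURL
    exporter.outputFileType = .mov
    exporter.videoComposition = videoComposition
    exporter.exportAsynchronously {
        // Inspect exporter.status / exporter.error when done.
    }
}
```

To make the overlay show the current playback time rather than a static label, drive the clock layer with a Core Animation that runs over the composition's timeline (this is exactly what the animation tool synchronizes for you).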
I think you can use AVCaptureVideoDataOutput to process each frame and use AVAssetWriter to record the processed frames. You can refer to this answer:
use AVAssetWriterPixelBufferAdaptor's appendPixelBuffer:withPresentationTime: method to export.
And I strongly suggest using OpenCV to process the frames. This is a nice tutorial; the OpenCV library is great.
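A Swift sketch of this second approach, assuming frames arrive from an `AVCaptureVideoDataOutput` delegate and are written out through an `AVAssetWriterInputPixelBufferAdaptor` (the Swift `append(_:withPresentationTime:)` spelling of `appendPixelBuffer:withPresentationTime:`). The class name and the hard-coded 1280x720 output settings are assumptions:

```swift
import AVFoundation
import CoreMedia

// Receives captured frames, (optionally) processes them, and records them.
final class TimestampedRecorder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let writer: AVAssetWriter
    private let input: AVAssetWriterInput
    private let adaptor: AVAssetWriterInputPixelBufferAdaptor
    private var sessionStarted = false

    init(outputURL: URL) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecType.h264,
            AVVideoWidthKey: 1280,
            AVVideoHeightKey: 720
        ])
        input.expectsMediaDataInRealTime = true
        adaptor = AVAssetWriterInputPixelBufferAdaptor(
            assetWriterInput: input, sourcePixelBufferAttributes: nil)
        super.init()
        writer.add(input)
        writer.startWriting()
    }

    // AVCaptureVideoDataOutputSampleBufferDelegate callback: one call per frame.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer),
              input.isReadyForMoreMediaData else { return }
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        if !sessionStarted {
            writer.startSession(atSourceTime: pts)
            sessionStarted = true
        }
        // ... draw the time display into pixelBuffer here
        //     (Core Graphics, or hand the buffer to OpenCV) ...
        adaptor.append(pixelBuffer, withPresentationTime: pts)
    }
}
```

Set an instance of this class as the delegate of your `AVCaptureVideoDataOutput`, and call `finishWriting` on the writer when capture stops. Note this route re-encodes every frame, so it costs more than the `AVVideoCompositionCoreAnimationTool` export path, but it gives you pixel-level access to each frame.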