Streaming different video scenes to the iPad
As background: I am developing an application for the iPad where users can browse videos that we provide. When a user picks a video, the app launches an MPMoviePlayerController, which works fine (except that I get no video for the first 10 seconds, and I have no idea why).
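For reference, the playback setup is essentially the following sketch (class and method names here are mine, not from the actual app):

```swift
import MediaPlayer
import UIKit

// Minimal sketch of the current single-video playback (identifiers are illustrative).
final class PlayerHostViewController: UIViewController {
    // Keep a strong reference; an MPMoviePlayerController that gets deallocated stops playing.
    private var player: MPMoviePlayerController?

    func play(streamURL: URL) {
        let player = MPMoviePlayerController(contentURL: streamURL)
        player.view.frame = view.bounds
        view.addSubview(player.view)
        player.prepareToPlay()
        player.play()
        self.player = player
    }
}
```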
Now, users should be able to search for specific scenes, say, "foo talks to bar". I get back a list like "video A, seconds 23-42; video B, seconds 56-89; F, seconds 1912-1989", and I want to play all of these scenes in a row.
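In code, a search hit boils down to something like this (a hypothetical representation, just to make the data concrete):

```swift
import Foundation

// Hypothetical shape of one search result: a video plus the time range to play.
struct SceneHit {
    let videoID: String      // e.g. "A", "B", "F"
    let start: TimeInterval  // seconds into the video
    let end: TimeInterval
}

let hits = [
    SceneHit(videoID: "A", start: 23, end: 42),
    SceneHit(videoID: "B", start: 56, end: 89),
    SceneHit(videoID: "F", start: 1912, end: 1989),
]
```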
The videos are originally MPEG-2; I transcoded them to H.264 in an MPEG-2 transport stream container, as Apple requires, and split them into chunks with mediafilesegmenter.
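Roughly, that preparation pipeline looks like the following (the exact flags are from memory and should be checked against the ffmpeg documentation and the mediafilesegmenter man page):

```
# Transcode the MPEG-2 source to H.264/AAC inside an MPEG-2 transport stream
ffmpeg -i videoA.mpg -c:v libx264 -c:a aac -f mpegts videoA.ts

# Split the transport stream into ~10 second chunks and write an index playlist
mediafilesegmenter -t 10 -f /var/www/videoA -i index.m3u8 videoA.ts
```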
To play those scenes, my first idea was to dynamically generate an .m3u8 playlist via a CGI script, listing only the chunks of the individual videos I want to play (HTTP progressive streaming is not allowed for videos longer than ten minutes). Unfortunately, this only works for the first chunk: when the second chunk starts playing, both audio and video disappear. I suspect a timestamp issue, because the segments are not continuous.
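For illustration, such a stitched playlist might look roughly like this (URLs and segment names are placeholders following mediafilesegmenter's fileSequenceN.ts naming). The HLS playlist format does define an EXT-X-DISCONTINUITY tag to mark exactly this kind of timestamp jump between segments from different sources; whether inserting it would fix playback here is something I have not verified:

```
#EXTM3U
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10,
http://example.com/videoA/fileSequence2.ts
#EXTINF:10,
http://example.com/videoA/fileSequence3.ts
#EXT-X-DISCONTINUITY
#EXTINF:10,
http://example.com/videoB/fileSequence5.ts
#EXT-X-ENDLIST
```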
My next idea was to cut and arrange the videos entirely in the backend, passing them to something like VLC (which would also spare me the whole up-front transcoding from MPEG-2 to H.264) and piping the result into mediastreamsegmenter. That works well; the disadvantage is that the user cannot seek within the video.
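That pipeline is roughly the following: VLC transcodes on the fly and sends an MPEG-2 transport stream over UDP, and mediastreamsegmenter reads it from that port and writes segments plus a live playlist (the option syntax here is from memory and may need adjusting):

```
# VLC: transcode the MPEG-2 scene on the fly and emit an H.264/AAC transport stream over UDP
vlc sceneA.mpg --sout '#transcode{vcodec=h264,acodec=mp4a}:standard{access=udp,mux=ts,dst=127.0.0.1:20103}'

# mediastreamsegmenter: pick up the UDP transport stream, write segments and a live .m3u8
mediastreamsegmenter -f /var/www/live 127.0.0.1:20103
```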
Finally, I tried starting several MPMoviePlayerControllers in a row, one per scene. Unfortunately, the delay for buffering and so on between the individual scenes is far too long; it sometimes even exceeds the length of the scene itself.
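Concretely, "in a row" means chaining players on the playback-finished notification, along the lines of this sketch (identifiers are mine, and the Swift spelling of the notification name is an assumption):

```swift
import MediaPlayer
import UIKit

// Rough sketch of the "one player per scene" approach (identifiers are illustrative).
final class ScenePlaylistViewController: UIViewController {
    private var sceneURLs: [URL] = []
    private var current: MPMoviePlayerController?
    private var finishObserver: NSObjectProtocol?

    func play(scenes: [URL]) {
        sceneURLs = scenes
        playNextScene()
    }

    private func playNextScene() {
        current?.view.removeFromSuperview()
        if let finishObserver = finishObserver {
            NotificationCenter.default.removeObserver(finishObserver)
        }
        guard !sceneURLs.isEmpty else { return }

        let player = MPMoviePlayerController(contentURL: sceneURLs.removeFirst())
        player.view.frame = view.bounds
        view.addSubview(player.view)

        // When this scene finishes, tear it down and start the next one.
        finishObserver = NotificationCenter.default.addObserver(
            forName: .MPMoviePlayerPlaybackDidFinish,
            object: player,
            queue: .main
        ) { [weak self] _ in self?.playNextScene() }

        player.prepareToPlay()
        player.play()
        current = player
    }
}
```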
If anyone has an idea how to solve this (or can tell me whether what I'm trying to do is possible at all), I'd appreciate any suggestions.