How to implement the Adobe HTTP Streaming spec without using a streaming server
As of Flash 10.1, Adobe has added the ability to push bytes into the NetStream object via the appendBytes method (described here: http://www.bytearray.org/?p=1689). The main reason for this addition is that Adobe finally supports HTTP streaming of video. This is great, but it seems that you need to use the Adobe Media Streaming Server (http://www.adobe.com/products/httpdynamicstreaming/) to create the correct video chunks from your existing video to allow for smooth streaming.
I have tried a hacked version of HTTP streaming in the past where I swapped out NetStream objects (similar to http://video.leizhu.com/video.html), but there was always a momentary pause between the chunks. With the new appendBytes, I tried a quick mock-up using two sections of the video from that site, but even then the skip remains.
Does anyone know how two consecutive .FLV files need to be formatted so that the appendBytes method on the NetStream object produces a smooth video without a noticeable skip between the segments?
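For reference, a minimal sketch of the kind of appendBytes mock-up described above, assuming two pre-split files named chunk1.flv and chunk2.flv (placeholder names) are served from the same HTTP host, and that the second chunk is a headerless continuation of the first FLV byte sequence:

    package {
        import flash.display.Sprite;
        import flash.events.Event;
        import flash.media.Video;
        import flash.net.NetConnection;
        import flash.net.NetStream;
        import flash.net.NetStreamAppendBytesAction;
        import flash.net.URLLoader;
        import flash.net.URLLoaderDataFormat;
        import flash.net.URLRequest;
        import flash.utils.ByteArray;

        // Plays two pre-split FLV chunks by appending their raw bytes to one NetStream.
        public class AppendBytesMockup extends Sprite {
            private var ns:NetStream;
            private var urls:Array = ["chunk1.flv", "chunk2.flv"]; // placeholder chunk names

            public function AppendBytesMockup() {
                var nc:NetConnection = new NetConnection();
                nc.connect(null);                   // no server: "data generation mode"

                ns = new NetStream(nc);
                ns.client = { onMetaData: function(info:Object):void {} };
                ns.play(null);                      // required before appendBytes
                ns.appendBytesAction(NetStreamAppendBytesAction.RESET_BEGIN);

                var video:Video = new Video(640, 360);
                video.attachNetStream(ns);
                addChild(video);

                loadNext();
            }

            private function loadNext():void {
                if (urls.length == 0) return;
                var loader:URLLoader = new URLLoader();
                loader.dataFormat = URLLoaderDataFormat.BINARY;
                loader.addEventListener(Event.COMPLETE, function(e:Event):void {
                    ns.appendBytes(ByteArray(loader.data));  // feed the raw FLV bytes of this chunk
                    loadNext();                              // then fetch and append the next one
                });
                loader.load(new URLRequest(urls.shift()));
            }
        }
    }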
6 Answers
I was able to get this working using Adobe's File Packager tool, which Samuel described. I didn't use the NetStream object directly, but I used the OSMF Sample Player, which I assume uses it internally. Here's how to do it without using FMS:
C:\Program Files\Adobe\Flash Media Server 4\tools\f4fpackager>
f4fpackager.exe --input-file="MyFile.mp4" --segment-duration=30
This will produce F4F files 30 seconds long, plus F4X index files and an F4M manifest. The F4F files are your correctly segmented (and fragmented) MP4 files that should play.
If you want to test this using the OSMF Player, also do the following:
So, to answer the original question: Adobe's File Packager is the file splitter to use, you don't need to buy FMS to use it, and it works with both FLV and MP4/F4V files.
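For anyone else testing this route with the OSMF Player, a minimal OSMF sketch, assuming the packager output (including MyFile.f4m) has been uploaded somewhere an HTTP server can serve the fragments from; the URL below is a placeholder:

    package {
        import flash.display.Sprite;
        import org.osmf.media.MediaPlayerSprite;
        import org.osmf.media.URLResource;

        // Points OSMF's default media factory at the F4M manifest produced by f4fpackager.
        public class F4MPlayer extends Sprite {
            public function F4MPlayer() {
                var player:MediaPlayerSprite = new MediaPlayerSprite();
                player.width = 640;
                player.height = 360;
                addChild(player);
                // OSMF resolves the manifest and requests the F4F fragments itself.
                player.resource = new URLResource("http://example.com/vod/MyFile.f4m");
            }
        }
    }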
You don't need to use their server. Wowza supports Adobe's version of HTTP Streaming, and you can implement it yourself by segmenting the videos properly and hosting all the segments on a standard HTTP server.
Links to all the specs for Adobe's HTTP Streaming are here:
http://help.adobe.com/en_US/HTTPStreaming/1.0/Using/WS9463dbe8dbe45c4c-1ae425bf126054c4d3f-7fff.html
Trying to hack the client to do some custom style http streaming will be a lot more troublesome.
Note that HTTP Streaming does not support streaming several different videos; it streams a single file that has been broken up into separate segments.
The packager download link is right here: Download File Packager for HTTP Dynamic Streaming
http://www.adobe.com/products/httpdynamicstreaming/
You could use F4Pack, a GUI around Adobe's command-line tool, which lets you process your FLV/F4V files so they can be used for HTTP Dynamic Streaming.
The place in the OSMF code where this happens is the timer-fired state machine inside of the HTTPNetStream class implementation... might be an informative read. I think I even put some helpful comments in there when I wrote it.
As far as the general question:
If you read an entire FLV file into a ByteArray and pass it to appendBytes, it will play. If you break that FLV file in half, and pass the first half as a byte array and then the second half as a byte array, that will play as well.
If you want to be able to switch around between bitrates without a gap, you need to split up your FLV files at matching keyframe points... and remember that only the first call to appendBytes has the initial FLV file header ('F', 'L', 'V', flags, offset)... the rest just expect a continuation of the FLV byte sequence.
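To make that concrete, here is a small sketch of the split-in-half case, assuming ns is a NetStream already in data generation mode (connect(null), play(null), RESET_BEGIN, as in the sketch under the question) and whole is a ByteArray holding one complete FLV file:

    import flash.net.NetStream;
    import flash.utils.ByteArray;

    // Appends one complete FLV file to a NetStream as two arbitrary halves.
    // ns is assumed to already be in data generation mode; whole is assumed
    // to contain the entire FLV file (header plus tags).
    function appendInTwoHalves(ns:NetStream, whole:ByteArray):void {
        var half:uint = uint(whole.length / 2);

        var first:ByteArray = new ByteArray();
        whole.position = 0;
        whole.readBytes(first, 0, half);                  // carries the 'F','L','V' header

        var second:ByteArray = new ByteArray();
        whole.readBytes(second, 0, whole.length - half);  // headerless continuation

        ns.appendBytes(first);
        ns.appendBytes(second);  // a tag split across the two calls is fine; the byte sequence is continuous
    }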
I recently found a similar Node.js project for m3u8 transcoding (https://github.com/andrewschaaf/media-server), but aside from Wowza I have yet to hear of anything doing this outside of the Origin module for Apache. Since the payloads are nearly identical, you're better off looking for a good MP4 segmenting solution (there are plenty out there) than for F4M segmenting. The problem is that moov atoms, especially on larger MP4 videos, are difficult to manage and to place in their proper initial location (near the beginning of the file). Even with optimal ffmpeg settings and 'qtfaststart' you end up with noticeably slower seeking, inefficient (usually greedy) bandwidth usage, and a few minor scrubbing/time headaches that you don't get with FLV/F4V playback.
In my player I have, or intend, to switch between HTTP Dynamic Streaming (HDS) and plain MP4 based on load, parsing Apache's logs in real time with awk/cron instead of licensing Adobe's Access product for stream protection. Both have unique 'onmetadata' handlers, but in the end I receive sequenced time/byte hashes that are virtually equivalent; MP4 is just slower. So mod_origin is really only a synchronizer/request router for Flash clients (over HTTP). I'm still looking for ways to speed up MP4-container-based playback. One incredible solution I read about recently, and was rather awestruck by, is http://zehfernando.com/2011/flash-video-frame-time-woes/, where a video editor and a Flash developer came up with their own MP4 timecoding scheme: an Adobe Premiere script adds roughly 50 pixels to the bottom of every video frame containing a visual 'binary' stamp, like a frame barcode, and those binary values translate into highly accurate timecodes. Flash can then analyze the frames as they are painted (in real time) and determine precisely where the player is and which bytes it needs from any MP4 byte-segmenting-friendly web server. The thing is (and perhaps I'm wrong here) that Flash seems to choose arbitrarily when it gets to the moov data, especially on large video files (0.5-1.5 GB), even if you make sure to run your MP4 through MP4Box (i.e. MP4Box -frag 10000 -inter 0 movie.mp4). I guess this is a problem OSMF and HDS have handled quite well by now, though it is annoying, in my opinion, that you need Apache and a proprietary closed-source module to use it. It is probably just a matter of time before open-source implementations arrive, since HDS is only 1-2 years old; it just needs a little reverse engineering, like the Andrew Chaaf Node.js + MPEG-TS streaming project (live or not).
In the end I may just use OSMF exclusively beneath my UI, since it seems to have virtues similar to HDS if not more (i.e. Strobe), if you need a seriously extensible HDS or MP4 open player platform to hack on and build your own custom player.
Adobe's F4F format is based on MP4 files; are you able to use F4V or MP4 instead of FLV files?
There are plenty of MP4 file splitters around, but you would need to make sure the timestamps in the files are continuous; maybe the pause happens when the player sees a zero timestamp within the audio or video stream inside the file.
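If you want to check that yourself on the FLV side, here is a hedged sketch (the helper name is mine, not from any library) that walks the FLV tags in a ByteArray and traces each tag's timestamp, so you can spot a timestamp that drops back to zero at a chunk boundary:

    import flash.utils.ByteArray;

    // Walks the FLV tags in a ByteArray and traces each tag's type and timestamp,
    // so you can check that timestamps keep increasing across chunk boundaries.
    // Assumes flv starts with a standard 'F','L','V' header.
    function traceFlvTimestamps(flv:ByteArray):void {
        flv.position = 5;                           // skip 'F', 'L', 'V', version, flags
        var dataOffset:uint = flv.readUnsignedInt();
        flv.position = dataOffset + 4;              // skip the header and PreviousTagSize0

        while (flv.bytesAvailable >= 15) {          // 11-byte tag header + 4-byte back pointer
            var tagType:uint = flv.readUnsignedByte();           // 8 = audio, 9 = video, 18 = script data
            var dataSize:uint = (flv.readUnsignedByte() << 16) |
                                (flv.readUnsignedByte() << 8) |
                                 flv.readUnsignedByte();
            var timestamp:uint = (flv.readUnsignedByte() << 16) |
                                 (flv.readUnsignedByte() << 8) |
                                  flv.readUnsignedByte();
            timestamp |= flv.readUnsignedByte() << 24;            // extended (high) timestamp byte
            flv.position += 3;                                     // StreamID, always 0

            trace("tag type=" + tagType + " timestamp=" + timestamp + " ms");

            if (flv.bytesAvailable < dataSize + 4) break;          // truncated file
            flv.position += dataSize + 4;           // skip the payload and PreviousTagSize
        }
    }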