Using an iPhone as the input to mediastreamsegmenter - HTTP Live Streaming

Posted 2024-10-21 04:03:19


I started working on the HTTP Live Streaming protocol and found it very interesting. I went through the complete documentation provided by Apple.

I tried both Video on Demand and Live Streaming, using VLC player as the streaming server and following the steps mentioned in one of the developer forums, and I was able to stream successfully.

Now I want my iPhone to be the source of the stream, and I want to use another iPhone to view that content.

As mentioned, mediastreamsegmenter is a tool that receives an MPEG-2 transport stream over a UDP network connection or from stdin.

Can someone shed some light on how to start using my iPhone as the streaming source and stream the content? To my knowledge, there must be a client (iPhone) application that sends the content to a server, which in turn converts the stream into an MPEG-2 transport stream and passes it to mediastreamsegmenter. I expect the remaining part to be the same as what I did when streaming from VLC.
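For reference, the server-side half of the pipeline described above can be sketched with two commands. The directory, base URL, multicast address, and duration values below are placeholders, not a tested configuration:

```shell
# Segment an MPEG-2 transport stream arriving on UDP into .ts files plus
# an HLS index playlist: -f is the output directory, -b the base URL
# written into the playlist, -t the target segment duration in seconds,
# -s the number of entries kept in the sliding window, -D deletes old
# segments. The trailing address:port is where the tool listens for UDP.
mediastreamsegmenter \
    -f /Library/WebServer/Documents/stream \
    -b http://example.com/stream \
    -t 10 -s 5 -D \
    239.4.1.5:20103

# For testing, any local file can stand in for the iPhone source:
# transcode it to an MPEG-2 TS and push it to the same UDP address.
ffmpeg -re -i sample.mp4 -c:v libx264 -c:a aac -f mpegts \
    udp://239.4.1.5:20103
```

The open problem in the question is exactly the first half: getting the iPhone to deliver an equivalent transport stream to that UDP address, which the answers below discuss.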

It would be great if someone could help me figure out how to get started on this.


Comments (2)

友欢 2024-10-28 04:03:19


This is just my guess. I held off posting in case a definitive answer appeared; so far there is none, so I'm writing this just to share my opinion. I'm sorry it isn't a positive one.

If you just want to transfer a video from an iOS device, that's easy: send the video file itself. So I assume what you want is live video streaming, i.e. broadcasting what you are currently recording.

I thought about this problem a few weeks ago, but I didn't succeed. The problem is not the media segmenter. MPEG-2 TS is just a container, and segmentation is just splitting the video; it can be implemented easily(?) from an accurate specification.

The real problem is video encoding. iOS does not offer a compressed live stream from the camera input (maybe not yet?); a compressed stream can only be stored to disk. There must be an internal way to get a compressed video stream, because FaceTime would be impossible without it, but it is not offered to third parties.

It is possible to store short videos continually and upload them to a server, but initiating/completing a video session takes too long, so I gave up on this method.

As another approach, iOS does offer an uncompressed video stream, so you can compress the raw stream yourself, with ffmpeg or something similar. However, Apple's video encoding uses hardware features to increase performance (which gives better video quality at a smaller size) and save energy, while ffmpeg does everything in software. Of course, you could build a hardware-accelerated encoder yourself, or buy one from a vendor.
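To illustrate the compress-it-yourself idea: if an app dumped raw camera frames to a pipe, a software encoder such as libx264 via ffmpeg could, in principle, turn them into the transport stream that mediastreamsegmenter expects. The frame geometry, pixel format, frame rate, and UDP address here are all assumptions that would have to match the capture side:

```shell
# Read raw BGRA frames from stdin, encode them in software with libx264
# (low-latency settings), and emit an MPEG-2 transport stream over UDP
# for mediastreamsegmenter to pick up. All parameters are placeholders.
ffmpeg -f rawvideo -pix_fmt bgra -s 640x480 -r 30 -i - \
    -c:v libx264 -preset ultrafast -tune zerolatency \
    -f mpegts udp://127.0.0.1:20103
```

This is exactly the trade-off described above: it works, but the encoding runs entirely on the CPU, which costs speed and battery on a phone.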

There are some apps with live video streaming features on the market. I haven't used them, but each probably takes one of these approaches:

  • Transferring raw frames (no compression, or fast but weak compression).
  • An ffmpeg-based software encoder.
  • An independent hardware-accelerated encoder implementation.

The first requires too much bandwidth and also consumes a lot of energy.

The second is definitely slow and consumes a lot of energy; however, it may be just enough.

The third has too high a development cost for a small-scale app, but if you have the budget, it is the best option. I don't know whether a pre-implemented library for iOS exists on the market.

I wish Apple would expose this kind of compressed video stream, compressed with Apple's hardware-accelerated encoder. But maybe that won't happen unless Apple decides to stop protecting high-quality live video streaming between devices as its killer feature... FaceTime.

That said, Apple may release such an API at some point. And I'm not sure I searched all of the APIs; I may have missed something important.


Update

I found the class AVAssetWriter, which can write video data to a file with compression. This may be the key to making this kind of app.

赴月观长安 2024-10-28 04:03:19


There's a good example of AVAssetWriter and AVAssetReader at Video Encoding using AVAssetWriter - CRASHES. You can use the reader part to send the gathered data over the network.
