Simultaneous video uploads

Posted 2024-11-12 16:27:38

How does a large video site like YouTube or DailyMotion handle a large amount of simultaneous video uploads. For example, to be able to handle the bandwidth from 1000s of users, what special considerations need to be made in web servers, hardware, etc.? Thank you.

Comments (1)

小伙你站住 2024-11-19 16:27:38

This is what is known as the c10k problem -- how do you handle 10,000 clients simultaneously.

The web server software must be well-written, so each server can talk with hundreds or thousands of clients at once.
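
A minimal sketch of what that can look like in practice (Go's net/http already runs each connection on its own goroutine, so one process can keep thousands of slow uploads in flight as long as nothing buffers whole files in memory). The upload path, size limit, and port below are illustrative assumptions, not how any particular site does it:

```go
// upload_server.go -- sketch of a concurrent streaming upload endpoint.
package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func uploadHandler(w http.ResponseWriter, r *http.Request) {
	// Cap a single upload (illustrative limit) so one client cannot
	// exhaust disk or bandwidth on its own.
	r.Body = http.MaxBytesReader(w, r.Body, 8<<30) // 8 GiB

	dst, err := os.CreateTemp("", "upload-*.bin")
	if err != nil {
		http.Error(w, "storage unavailable", http.StatusServiceUnavailable)
		return
	}
	defer dst.Close()

	// Stream straight from the socket to storage -- constant memory per
	// client, no matter how large the video is.
	if _, err := io.Copy(dst, r.Body); err != nil {
		http.Error(w, "upload failed", http.StatusBadRequest)
		return
	}
	w.WriteHeader(http.StatusCreated)
}

func main() {
	http.HandleFunc("/upload", uploadHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```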

The storage backend must be well-designed, so each server has relatively uncontested write ability to the storage network. The storage network must be redundant, so dead drives don't destroy user data. (Even a failure rate of 0.01% / year means hundreds of dead drives a year when you've got millions of them.)
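
To illustrate the redundancy idea only: the sketch below pushes each blob to several independent storage nodes and reports success once a quorum acknowledges it. The node URLs and the 2-of-3 quorum are assumptions for the example; real systems typically lean on RAID, erasure coding, or a distributed file system instead.

```go
// replicate.go -- redundant writes so a dead drive doesn't lose user data.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

// Hypothetical nodes on the internal storage network.
var storageNodes = []string{
	"http://storage-a.internal/put",
	"http://storage-b.internal/put",
	"http://storage-c.internal/put",
}

const quorum = 2

func replicate(blob []byte) error {
	acks := 0
	for _, node := range storageNodes {
		resp, err := http.Post(node, "application/octet-stream", bytes.NewReader(blob))
		if err != nil {
			continue // this node (or its drive) is dead; the others still hold the data
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			acks++
		}
	}
	if acks < quorum {
		return fmt.Errorf("only %d of %d replicas acknowledged", acks, len(storageNodes))
	}
	return nil
}

func main() {
	if err := replicate([]byte("fake video bytes")); err != nil {
		fmt.Println("replication failed:", err)
	}
}
```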

Database backends must be prepared to scale outwards incredibly -- nothing silly like "lock table" when inserting new records.
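
One common way to scale writes outward is to shard: hash the video ID to pick a database, and do a plain row insert there so no single table (or table lock) sees all the traffic. The DSNs, driver choice, and schema below are invented for illustration:

```go
// shard_insert.go -- sketch of sharded, lock-free-ish inserts.
package main

import (
	"database/sql"
	"hash/fnv"
	"log"

	_ "github.com/lib/pq" // hypothetical choice of Postgres driver
)

var shardDSNs = []string{
	"postgres://db-shard-0.internal/videos?sslmode=disable",
	"postgres://db-shard-1.internal/videos?sslmode=disable",
	"postgres://db-shard-2.internal/videos?sslmode=disable",
}

// shardFor maps a video ID onto one of the shards.
func shardFor(videoID string) int {
	h := fnv.New32a()
	h.Write([]byte(videoID))
	return int(h.Sum32() % uint32(len(shardDSNs)))
}

func insertVideo(videoID, ownerID, storagePath string) error {
	db, err := sql.Open("postgres", shardDSNs[shardFor(videoID)])
	if err != nil {
		return err
	}
	defer db.Close()

	// Plain row insert -- no explicit table locking; each shard only
	// handles its slice of the write load.
	_, err = db.Exec(
		`INSERT INTO videos (id, owner_id, storage_path) VALUES ($1, $2, $3)`,
		videoID, ownerID, storagePath,
	)
	return err
}

func main() {
	if err := insertVideo("vid123", "user42", "/blobs/vid123"); err != nil {
		log.Println("insert failed:", err)
	}
}
```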

The routing framework must be prepared to route traffic to nearby datacenters, based on the actual traffic cost of the ISPs involved. (No sense bringing the whole network to its knees with video uploads crossing in the pipes on their way to datacenters.)
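
Conceptually that routing decision boils down to picking the cheapest nearby ingest point for each client region. Real sites do this with GeoDNS / anycast and live transit-cost data; the regions, datacenters, and costs below are made up to show the shape of the lookup:

```go
// pick_datacenter.go -- sketch of cost-aware upload routing.
package main

import "fmt"

// Relative traffic cost from client region -> datacenter (illustrative).
var costTable = map[string]map[string]int{
	"eu-west": {"dc-ams": 1, "dc-iad": 8, "dc-sin": 12},
	"us-east": {"dc-ams": 7, "dc-iad": 1, "dc-sin": 11},
	"ap-se":   {"dc-ams": 12, "dc-iad": 10, "dc-sin": 1},
}

func pickDatacenter(clientRegion string) string {
	best, bestCost := "", int(^uint(0)>>1) // max int
	for dc, cost := range costTable[clientRegion] {
		if cost < bestCost {
			best, bestCost = dc, cost
		}
	}
	return best
}

func main() {
	fmt.Println(pickDatacenter("eu-west")) // -> dc-ams
}
```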

The routing framework must be prepared to handle entire sites going offline at once, to redirect failed uploads to new server centers without too much pain.
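
A simple way to picture that failover, assuming an ordered list of hypothetical ingest centers: retry the upload against the next center when the current one is unreachable. A real client or edge layer would also back off and resume partial uploads rather than restarting them:

```go
// failover.go -- sketch of redirecting a failed upload to another site.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"time"
)

var ingestCenters = []string{
	"https://upload.dc-ams.example.com/v1/videos",
	"https://upload.dc-iad.example.com/v1/videos",
	"https://upload.dc-sin.example.com/v1/videos",
}

func uploadWithFailover(blob []byte) error {
	client := &http.Client{Timeout: 30 * time.Second}
	for _, endpoint := range ingestCenters {
		resp, err := client.Post(endpoint, "application/octet-stream", bytes.NewReader(blob))
		if err != nil {
			continue // this whole site is offline; try the next datacenter
		}
		resp.Body.Close()
		if resp.StatusCode < 300 {
			return nil
		}
	}
	return fmt.Errorf("all %d ingest centers failed", len(ingestCenters))
}

func main() {
	if err := uploadWithFailover([]byte("fake video bytes")); err != nil {
		fmt.Println(err)
	}
}
```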

The data centers must be built to handle power failures, air conditioning failures, network failures, breaches by malicious or curious members of the public trying to get in, typically by having two or more power sources, including massive site-wide uninterruptible power supplies and generators; multiple independent air conditioning that can run the entire site; two, three, four, or more network sources; physical security measures and potentially armed guards.

Hurricane Electric has some nice photos up of some of their data centers. They look impressive but are a tiny fraction of Google, Facebook, or Amazon's datacenter needs.
