Modern HTTP keep-alive

Posted on 2024-10-01 06:56:46

So according to the haproxy author, who knows a thing or two about HTTP:

Keep-alive was invented to reduce CPU usage on servers when CPUs were 100 times slower. But what is not said is that persistent connections consume a lot of memory while not being usable by anybody except the client who opened them. Today in 2009, CPUs are very cheap and memory is still limited to a few gigabytes by the architecture or the price. If a site needs keep-alive, there is a real problem. Highly loaded sites often disable keep-alive to support the maximum number of simultaneous clients. The real downside of not having keep-alive is a slightly increased latency to fetch objects. Browsers double the number of concurrent connections on non-keepalive sites to compensate for this.

(from http://haproxy.1wt.eu/)

Is this in line with other people's experience? i.e. without keep-alive, is the result barely noticeable now? (It's probably worth noting that with websockets etc. a connection is kept "open" regardless of keep-alive status anyway, for very responsive apps.)
Is the effect greater for people who are remote from the server, or if there are many artifacts to load from the same host when loading a page? (I would think things like CSS, images and JS are increasingly coming from cache-friendly CDNs.)
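
For anyone wanting to put a rough number on this, below is a minimal Python sketch comparing the two cases with nothing but the standard library; example.com and the request count are just placeholders for whatever host and page you actually care about.

    import http.client
    import time

    HOST = "example.com"   # placeholder: any HTTP/1.1 host that allows keep-alive
    N = 20                 # number of small requests to time

    # Reuse one connection for all requests (keep-alive).
    t0 = time.perf_counter()
    conn = http.client.HTTPSConnection(HOST, timeout=10)
    for _ in range(N):
        conn.request("GET", "/")
        conn.getresponse().read()   # drain the body before reusing the connection
    conn.close()
    reused = time.perf_counter() - t0

    # Open a fresh connection for every request (no keep-alive).
    t0 = time.perf_counter()
    for _ in range(N):
        c = http.client.HTTPSConnection(HOST, timeout=10)
        c.request("GET", "/")
        c.getresponse().read()
        c.close()
    fresh = time.perf_counter() - t0

    print(f"keep-alive: {reused:.2f}s   new connection per request: {fresh:.2f}s")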

Thoughts?

(not sure if this is a serverfault.com thing, but I won't cross-post until someone tells me to move it there).

Comments (4)

蓝咒 2024-10-08 06:56:46

Hey since I'm the author of this citation, I'll respond :-)

There are two big issues on large sites: concurrent connections and latency. Concurrent connections are caused by slow clients which take ages to download content, and by idle connection states. Those idle connection states are caused by connection reuse to fetch multiple objects, known as keep-alive, and are further increased by latency. When the client is very close to the server, it can make intensive use of the connection and ensure it is almost never idle. However, when the sequence ends, nobody cares to quickly close the channel and the connection remains open and unused for a long time. That's the reason why many people suggest using a very low keep-alive timeout. On some servers like Apache, the lowest timeout you can set is one second, and that is often far too much to sustain high loads: if you have 20000 clients in front of you and they fetch on average one object every second, you'll have those 20000 connections permanently established. 20000 concurrent connections on a general-purpose server like Apache is huge, will require between 32 and 64 GB of RAM depending on what modules are loaded, and you can probably not hope to go much higher even by adding RAM. In practice, for 20000 clients you may even see 40000 to 60000 concurrent connections on the server, because browsers will try to set up 2 to 3 connections if they have many objects to fetch.

If you close the connection after each object, the number of concurrent connections will drop dramatically. Indeed, it will drop by a factor corresponding to the ratio between the average time to download an object and the time between objects. If you need 50 ms to download an object (a miniature photo, a button, etc.), and you download on average 1 object per second as above, then you'll only have 0.05 connections per client, which is only 1000 concurrent connections for 20000 clients.
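
To make the arithmetic explicit, here it is as a tiny Python snippet (the figures are just the ones assumed above):

    # Back-of-the-envelope for the numbers above (all assumed).
    clients       = 20_000   # simultaneous users
    transfer_time = 0.050    # seconds a connection is actually busy per object
    gap_between   = 1.0      # seconds between object fetches per client

    # Without keep-alive a connection only exists while transferring, so average
    # concurrency is clients * (busy time / time between fetches).
    concurrent = clients * transfer_time / gap_between
    print(concurrent)   # -> 1000.0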

Now the time to establish new connections is going to count. Far-away clients will experience unpleasant latency. In the past, browsers used to open large numbers of concurrent connections when keep-alive was disabled. I remember figures of 4 on MSIE and 8 on Netscape. That really divided the average per-object latency by that much. Now that keep-alive is present everywhere, we're not seeing such high numbers anymore, because doing so further increases the load on remote servers, and browsers take care of protecting the Internet's infrastructure.

This means that with today's browsers, it's harder to make non-keep-alive services as responsive as keep-alive ones. Also, some browsers (e.g. Opera) use heuristics to try to use pipelining. Pipelining is an efficient way of using keep-alive, because it almost eliminates latency by sending multiple requests without waiting for a response. I have tried it on a page with 100 small photos, and the first access is about twice as fast as without keep-alive, but the next access is about 8 times as fast, because the responses are so small that only latency counts (only "304" responses).
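
For the curious, pipelining itself is easy to demonstrate with a raw socket. Below is a minimal sketch assuming an HTTP/1.1 server that tolerates pipelined requests; example.com is only a placeholder, and plenty of servers won't play along.

    import socket

    HOST = "example.com"   # placeholder HTTP/1.1 server
    request = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"
    last    = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"

    with socket.create_connection((HOST, 80), timeout=10) as s:
        # All three requests are written before any response is read back,
        # so only one round trip of request latency is paid instead of three.
        s.sendall(request * 2 + last)
        data = b""
        while chunk := s.recv(4096):   # responses arrive back-to-back, in request order
            data += chunk

    print(data.count(b"HTTP/1.1 200"), "pipelined responses")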

I'd say that ideally we should have some tunables in the browsers to make them keep connections alive between fetched objects, and drop them immediately when the page is complete. But unfortunately we're not seeing that.

For this reason, some sites which need to install general-purpose servers such as Apache on the front side and which have to support large numbers of clients generally have to disable keep-alive. And to force browsers to increase the number of connections, they use multiple domain names so that downloads can be parallelized. This is particularly problematic on sites making intensive use of SSL, because the connection setup is even more expensive as there is one additional round trip.

What is more commonly observed nowadays is that such sites prefer to install light frontends such as haproxy or nginx, which have no problem handling tens to hundreds of thousands of concurrent connections; they enable keep-alive on the client side and disable it on the Apache side. On that side, the cost of establishing a connection is almost nil in terms of CPU, and not noticeable at all in terms of time. That way you get the best of both worlds: low latency thanks to keep-alive with very low timeouts on the client side, and a low number of connections on the server side. Everyone is happy :-)
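
To make that split concrete, here is a toy sketch in plain Python: a thread-per-connection front end, nothing like haproxy's or nginx's event loop, and the backend address is made up, but it keeps the browser-facing connection alive while every upstream request uses a short-lived connection that is closed right away.

    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
    import http.client

    BACKEND = ("127.0.0.1", 8000)   # made-up address of the heavy Apache-style backend

    class Front(BaseHTTPRequestHandler):
        protocol_version = "HTTP/1.1"   # keep-alive towards the browser

        def do_GET(self):
            # Short-lived connection towards the backend: open, fetch, close.
            upstream = http.client.HTTPConnection(*BACKEND, timeout=10)
            upstream.request("GET", self.path, headers={"Connection": "close"})
            resp = upstream.getresponse()
            body = resp.read()
            upstream.close()

            self.send_response(resp.status)
            self.send_header("Content-Type", resp.getheader("Content-Type", "text/plain"))
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        ThreadingHTTPServer(("0.0.0.0", 8080), Front).serve_forever()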

Some commercial products further improve this by reusing connections between the front load balancer and the server and multiplexing all client connections over them. When the servers are close to the LB, the gain is not much higher than with the previous solution, but it will often require adaptations to the application to ensure there is no risk of session crossing between users due to the unexpected sharing of a connection by multiple users. In theory this should never happen. Reality is much different :-)

零度° 2024-10-08 06:56:46

In the years since this was written (and posted here on Stack Overflow), we now have servers such as nginx which are rising in popularity.

nginx for example can hold open 10,000 keep-alive connections in a single process with only 2.5 MB (megabytes) of RAM. In fact it's easy to hold open multiple thousands of connections with very little RAM, and the only limits you'll hit will be other limits such as the number of open file handles or TCP connections.

Keep-alive was a problem not because of any problem with the keep-alive spec itself, but because of Apache's process-based scaling model and because keep-alives were hacked into a server whose architecture wasn't designed to accommodate them.

Especially problematic is Apache Prefork + mod_php + keep-alives. This is a model where every single connection will continue to occupy all the RAM that a PHP process occupies, even if it's completely idle and only remains open as a keep-alive. This is not scalable. But servers don't have to be designed this way - there's no particular reason a server needs to keep every keep-alive connection in a separate process (especially not when every such process has a full PHP interpreter). PHP-FPM and an event-based server processing model such as that in nginx solve the problem elegantly.
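
As a rough illustration of the event-based model, here is a toy HTTP responder using Python's asyncio (not a real server, and the address and timeout are arbitrary): every idle keep-alive connection costs only a suspended coroutine and a few kilobytes of buffers, rather than a whole worker process.

    import asyncio

    async def handle(reader, writer):
        # Each idle keep-alive connection is just this suspended coroutine plus
        # small buffers, not a process holding a full PHP interpreter.
        try:
            while True:
                # Read one request's headers, with a short keep-alive idle timeout.
                await asyncio.wait_for(reader.readuntil(b"\r\n\r\n"), timeout=5)
                body = b"hello\n"
                writer.write(
                    b"HTTP/1.1 200 OK\r\n"
                    b"Content-Length: " + str(len(body)).encode() + b"\r\n"
                    b"Connection: keep-alive\r\n\r\n" + body
                )
                await writer.drain()
        except (asyncio.TimeoutError, asyncio.IncompleteReadError):
            pass   # idle timeout expired or the client went away
        finally:
            writer.close()

    async def main():
        server = await asyncio.start_server(handle, "127.0.0.1", 8080)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())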

Update 2015:

SPDY and HTTP/2 replace HTTP's keep-alive functionality with something even better: the ability not only to keep alive a connection and make multiple requests and responses over it, but for them to be multiplexed, so the responses can be sent in any order, and in parallel, rather than only in the order they were requested. This prevents slow responses blocking faster ones and removes the temptation for browsers to hold open multiple parallel connections to a single server. These technologies further highlight the inadequacies of the mod_php approach and the benefits of something like an event-based (or at the very least, multi-threaded) web server coupled separately with something like PHP-FPM.
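
To see this from the client side, here is a small sketch using the third-party httpx package, installed with its http2 extra; example.com is just a placeholder host.

    # Third-party: pip install "httpx[http2]"  (not in the standard library)
    import httpx

    with httpx.Client(http2=True) as client:
        # These sequential requests reuse a single HTTP/2 connection; with
        # httpx.AsyncClient they could run concurrently, multiplexed as
        # separate streams over that same connection.
        for _ in range(3):
            r = client.get("https://example.com/")
            print(r.http_version, r.status_code)   # negotiated protocol and status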

梦罢 2024-10-08 06:56:46

My understanding was that it had little to do with CPU, but rather with the latency of opening repeated sockets to the other side of the world. Even if you have infinite bandwidth, connect latency will slow down the whole process, amplified if your page has dozens of objects. Even a persistent connection has request/response latency, but it's reduced when you have 2 sockets, as on average one should be streaming data while the other could be blocking. Also, a router is never going to assume a socket connects before letting you write to it. It needs the full round-trip handshake. Again, I don't claim to be an expert, but this is how I've always seen it. What would really be cool is a fully ASYNC protocol (no, not a fully sick protocol).
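
A rough way to see the handshake cost with nothing but Python's standard library (example.com stands in for any distant host):

    import socket
    import time

    HOST = "example.com"   # placeholder for some far-away server

    t0 = time.perf_counter()
    s = socket.create_connection((HOST, 80), timeout=10)   # one full TCP handshake round trip
    connect_ms = (time.perf_counter() - t0) * 1000

    s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    t1 = time.perf_counter()
    s.recv(4096)                                           # wait for the first response bytes
    response_ms = (time.perf_counter() - t1) * 1000
    s.close()

    print(f"TCP connect: {connect_ms:.0f} ms   request/response: {response_ms:.0f} ms")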

老子叫无熙 2024-10-08 06:56:46

Very long keep-alives can be useful if you're using an "origin pull" CDN such as CloudFront or CloudFlare. In fact, this can work out to be faster than no CDN, even if you're serving completely dynamic content.

If you have long keep-alives such that each PoP basically has a permanent connection to your server, then the first time users visit your site they can do a fast TCP handshake with their local PoP instead of a slow handshake with you. (Light itself takes around 100 ms to go halfway around the world via fiber, and establishing a TCP connection requires three packets to be passed back and forth. SSL requires three round trips.)
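
As a back-of-the-envelope illustration of why the warm PoP connection helps, using the round-trip counts quoted above (all the numbers below are assumptions):

    # Rough illustration only; every figure here is an assumption.
    rtt_to_origin = 0.150   # seconds: user <-> far-away origin server
    rtt_to_pop    = 0.015   # seconds: user <-> nearby CDN PoP
    tcp_rtts = 1            # round trips for the TCP handshake
    tls_rtts = 3            # round trips quoted above for (older) SSL/TLS setups

    without_cdn   = (tcp_rtts + tls_rtts) * rtt_to_origin   # handshakes cross the ocean
    with_warm_pop = (tcp_rtts + tls_rtts) * rtt_to_pop      # handshakes stop at the PoP;
                                                            # PoP <-> origin is already warm
    print(f"{without_cdn * 1000:.0f} ms vs {with_warm_pop * 1000:.0f} ms before the first request")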
