Apache2 ignores MaxKeepAliveRequests and closes connections arbitrarily
We have a Tomcat front-end server that proxies to our Apache 2.2.11 app server, which runs on a 64-bit Fedora EC2 2xlarge instance (kernel 2.6.21.7, AKI aki-b51cf9dc). Apache is running mod_perl and is not threaded.
We are trying to have the connections persist for a long time between Tomcat, running on another EC2 instance, and the Apache server, while not persisting connections from outside clients coming directly into the Apache server. Our config looks like this:
Listen 80
NameVirtualHost *:80

# used for external clients
<VirtualHost *:80>
    ServerName xxx.yyy.com
    ServerAlias *.yyy.com
    DocumentRoot "/var/www/html"
    KeepAlive Off
</VirtualHost>

# used from tomcat server on local network
<VirtualHost *:80>
    ServerName ip-<Apache-server-local-IP>.ec2.internal
    KeepAlive On
    KeepAliveTimeout 3600
    MaxKeepAliveRequests 0
    DocumentRoot "/var/www/html"
</VirtualHost>

Timeout 60
MinSpareServers 20
MaxSpareServers 30
StartServers 20
MaxClients 60
GracefulShutdownTimeout 90
We've tried all sorts of values for MaxKeepAliveRequests and KeepAliveTimeout, and the server definitely maintains a connection for a while with Tomcat, but it always closes it within a matter of seconds, when only some tens of requests have been processed. It may be significant that I've never seen a process maintain 100 or more connections on a socket while observing using mod_status.
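One way to quantify what mod_status shows is to count the workers the scoreboard reports as holding a keepalive connection ('K' in the scoreboard string). The sketch below parses the machine-readable output you would get from `curl -s 'http://localhost/server-status?auto'` (assuming the status handler is enabled at /server-status); a canned sample with hypothetical values stands in for the live server so the pipeline can be run anywhere:

```shell
#!/bin/sh
# Count workers sitting in keepalive from mod_status's machine-readable output.
# In the Apache scoreboard, 'K' marks a worker holding a keepalive connection.
# Canned sample (hypothetical values) in place of:
#   curl -s 'http://localhost/server-status?auto'
status='Total Accesses: 1234
BusyWorkers: 3
IdleWorkers: 27
Scoreboard: KKKW__________________________'

kcount=$(printf '%s\n' "$status" | awk -F': ' '$1 == "Scoreboard" { print gsub(/K/, "", $2) }')
echo "workers in keepalive: $kcount"
# -> workers in keepalive: 3
```

Polling this once a second (e.g. under `watch`) makes it easy to see whether the keepalive worker count ever climbs, or whether connections are being dropped as fast as they are opened.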
There are never any persistent connections with the non-Tomcat clients, so we know that there is some difference going on there and the VirtualHost config is definitely being applied in both cases.
I should mention that the requests from Tomcat are all POSTs while the others are a mixture of POSTs and GETs.
When I look at the traffic on a given port with tcpdump I can clearly see a number of POSTs being processed correctly and then at some point after returning a good reply (200, data looks fine) the Apache server immediately closes the connection, sending a FIN to Tomcat. This happens in cases where there is absolutely no difference between the last and second-to-last requests or replies other than minor data like the real client's IP, so there's no indication of the server barfing while processing a request. And of course there's nothing suspicious in the error logs and the httpd processes themselves are not dying.
From netstat we can see the connections to the Tomcat server being held open for some seconds, but cycling pretty quickly through the range of remote ports, verifying what we see elsewhere. It's almost like Apache is trying to fairly allocate connections to prevent the persistent ones from starving the others--but it wouldn't do that, would it?!
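That port-cycling pattern can be tallied directly: rapid open/close cycling shows up as a pile of TIME-WAIT entries next to only a few ESTABLISHED ones. A minimal sketch, using a canned `ss -tan` sample with hypothetical addresses (real use would be `ss -tan dst <apache-ip>` or the equivalent netstat invocation on the Tomcat box):

```shell
#!/bin/sh
# Tally TCP connection states toward the Apache box. Rapid connection cycling
# shows up as many TIME-WAIT rows for few ESTAB rows.
# Canned 'ss -tan' sample (hypothetical 10.0.0.x addresses) in place of:
#   ss -tan dst 10.0.0.5
sample='State Recv-Q Send-Q Local-Address:Port Peer-Address:Port
ESTAB 0 0 10.0.0.9:43210 10.0.0.5:80
TIME-WAIT 0 0 10.0.0.9:43211 10.0.0.5:80
TIME-WAIT 0 0 10.0.0.9:43212 10.0.0.5:80'

# Skip the header line, then count occurrences of each state.
printf '%s\n' "$sample" | awk 'NR > 1 { count[$1]++ } END { for (s in count) print s, count[s] }'
```

A genuinely persistent connection would instead show one long-lived ESTABLISHED entry on a stable remote port.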
I'd like nothing more than to be told that we're doing something dumb here! Please, please tell me I'm an idiot, or at least near-sighted...
Comments (2)
What is the value of
cat /proc/sys/net/ipv4/tcp_keepalive_time
on the Tomcat host? Is it unusually low? The default is 7200 (i.e. 2 hours).
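For reference, checking (and, if needed, temporarily adjusting) that kernel setting on a Linux box might look like this; note that tcp_keepalive_time governs TCP-level keepalive probes, which is a separate mechanism from Apache's HTTP KeepAlive:

```shell
#!/bin/sh
# tcp_keepalive_time is the idle time, in seconds, before the kernel sends the
# first TCP keepalive probe on an otherwise quiet connection (Linux default: 7200).
cat /proc/sys/net/ipv4/tcp_keepalive_time

# To change it temporarily (root required; not persistent across reboots):
#   sysctl -w net.ipv4.tcp_keepalive_time=7200
```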
MaxKeepAliveRequests should be -1, not zero, on ec2.internal.