Difference between HAProxy global maxconn and server maxconn
I have a question about my haproxy config:
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
log 127.0.0.1 syslog emerg
maxconn 4000
quiet
user haproxy
group haproxy
daemon
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option abortonclose
option dontlognull
option httpclose
option httplog
option forwardfor
option redispatch
timeout connect 10000 # 10 seconds to establish a connection to a backend
timeout client 300000 # 5 min timeout for client
timeout server 300000 # 5 min timeout for server
stats enable
listen http_proxy localhost:81
balance roundrobin
option httpchk GET /empty.html
server server1 myip:80 maxconn 15 check inter 10000
server server2 myip:80 maxconn 15 check inter 10000
As you can see it is straightforward, but I am a bit confused about how the maxconn properties work.
There is the global one and the maxconn on the server, in the listen block. My thinking is this: the global one manages the total number of connections that haproxy, as a service, will queue or process at one time. If the number gets above that, it either kills the connection, or pools in some linux socket? I have no idea what happens if the number gets higher than 4000.
Then you have the server maxconn property set at 15. First off, I set that at 15 because php-fpm, which this forwards to on a separate server, only has so many child processes it can use, so I make sure I am pooling the requests here instead of in php-fpm. Which I think is faster.
But back on the subject, my theory about this number is each server in this block will only be sent 15 connections at a time. And then the connections will wait for an open server. If I had cookies on, the connections would wait for the CORRECT open server. But I don't.
So questions are:
- What happens if the global connections get above 4000? Do they die? Or pool in Linux somehow?
- Are the global connections related to the server connections, other than the fact that you can't have a total number of server connections greater than global?
- When figuring out the global connections, shouldn't it be the number of connections added up in the server section, plus a certain percentage for pooling? And obviously you have other constraints on the connections, but really it is how many you want to send to the proxies?
Thank you in advance.
Willy got me an answer by email. I thought I would share it. His answers follow each quoted part of my question.
I have a question about my haproxy config:
As you can see it is straightforward, but I am a bit confused about how the
maxconn properties work.
There is the global one and the maxconn on the server, in the listen block.
And there is also another one in the listen block which defaults to something
like 2000.
My thinking is this: the global one manages the total number of connections
that haproxy, as a service, will queue or process at one time.
Correct. It's the per-process max number of concurrent connections.
If the number
gets above that, it either kills the connection, or pools in some linux
socket?
The latter: it simply stops accepting new connections and they remain in the
socket queue in the kernel. The number of queueable sockets is determined
by the min of (net.core.somaxconn, net.ipv4.tcp_max_syn_backlog, and the
listen block's maxconn).
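As a quick illustration of that formula, the effective accept backlog is just the minimum of the three numbers. The sysctl values below are assumed typical defaults, not read from a live kernel; check your own with `sysctl net.core.somaxconn net.ipv4.tcp_max_syn_backlog`:

```python
# Effective accept queue per the formula above.
# The values are illustrative assumptions, not live kernel readings.
somaxconn = 128             # net.core.somaxconn (a common older default)
tcp_max_syn_backlog = 1024  # net.ipv4.tcp_max_syn_backlog
listen_maxconn = 2000       # the listen block's maxconn (HAProxy's default)

backlog = min(somaxconn, tcp_max_syn_backlog, listen_maxconn)
print(backlog)  # 128 -- somaxconn is the limiting factor here
```

With these defaults, raising the listen maxconn alone does not deepen the kernel queue; somaxconn has to be raised too.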
I have no idea what happens if the number gets higher than 4000.
The excess connections wait for another one to complete before being
accepted. However, as long as the kernel's queue is not saturated, the
client does not even notice this, as the connection is accepted at the
TCP level but is not processed. So the client only notices some delay
to process the request.
But in practice, the listen block's maxconn is much more important,
since by default it's smaller than the global one. The listen's maxconn
limits the number of connections per listener. In general it's wise to
configure it for the number of connections you want for the service,
and to configure the global maxconn to the max number of connections
you let the haproxy process handle. When you have only one service,
both can be set to the same value. But when you have many services,
you can easily understand it makes a huge difference, as you don't
want a single service to take all the connections and prevent the
other ones from working.
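A minimal sketch of that split (the service names and numbers here are made up for illustration): the global maxconn caps the whole process, while each listen block gets its own share so no single service can starve the others:

```
global
    maxconn 4000          # ceiling for the whole haproxy process

listen web_service
    bind :80
    maxconn 3000          # main service's share of the process limit

listen admin_service
    bind :8080
    maxconn 500           # small reserved share for the admin interface
```

When there is only one service, as in the original config, setting both to the same value is fine.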
Then you have the server maxconn property set at 15. First off, I set that at
15 because php-fpm, which this forwards to on a separate server, only has
so many child processes it can use, so I make sure I am pooling the requests
here, instead of in php-fpm. Which I think is faster.
Yes, not only should it be faster, but it allows haproxy to find another
available server whenever possible, and also it allows it to kill the
request in the queue if the client hits "stop" before the connection is
forwarded to the server.
But back on the subject, my theory about this number is each server in this
block will only be sent 15 connections at a time. And then the connections
will wait for an open server. If I had cookies on, the connections would wait
for the CORRECT open server. But I don't.
That's exactly the principle. There is a per-proxy queue and a per-server
queue. Connections with a persistence cookie go to the server queue and
other connections go to the proxy queue. However since in your case no
cookie is configured, all connections go to the proxy queue. You can look
at the diagram doc/queuing.fig in haproxy sources if you want, it explains
how/where decisions are taken.
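For reference, a hedged sketch of what the cookie-based variant of the original listen block could look like (the cookie name and values are invented); with this, a connection carrying a persistence cookie waits in that specific server's queue rather than the shared proxy queue:

```
listen http_proxy localhost:81
    balance roundrobin
    option httpchk GET /empty.html
    cookie SERVERID insert indirect nocache
    server server1 myip:80 cookie s1 maxconn 15 check inter 10000
    server server2 myip:80 cookie s2 maxconn 15 check inter 10000
```

Without the cookie lines, the config behaves as described above: everything queues in the per-proxy queue.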
So questions are:
What happens if the global connections get above 4000? Do they die? Or
pool in Linux somehow?
They're queued in linux. Once you overwhelm the kernel's queue, then they're
dropped in the kernel.
Are the global connections related to the server connections, other than
the fact you can't have a total number of server connections greater than
global?
No, the global and server connection settings are independent.
When figuring out the global connections, shouldn't it be the amount of
connections added up in the server section, plus a certain percentage for
pooling? And obviously you have other constraints on the connections, but
really it is how many you want to send to the proxies?
You got it right. If your server's response time is short, there is nothing
wrong with queueing thousands of connections to serve only a few at a time,
because it substantially reduces the request processing time. Practically,
establishing a connection nowadays takes about 5 microseconds on a gigabit
LAN. So it makes a lot of sense to let haproxy distribute the connections
as fast as possible from its queue to a server with a very small maxconn.
I remember one gaming site queuing more than 30000 concurrent connections
and running with a queue of 30 per server! It was an apache server, and
apache is much faster with small numbers of connections than with large
numbers. But for this you really need a fast server, because you don't
want to have all your clients queued waiting for a connection slot because
the server is waiting for a database for instance.
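Rough arithmetic behind that anecdote: by Little's law, a deep queue drains quickly as long as each request is served fast. The 10 ms per-request service time below is an assumption, not a figure from the original exchange:

```python
# Back-of-the-envelope drain rate for the gaming-site anecdote.
# service_time is an assumed value; the other numbers are from the anecdote.
queued = 30000            # concurrent connections waiting in the proxy queue
slots_per_server = 30     # the server's maxconn
service_time = 0.010      # seconds per request (assumption)

throughput = slots_per_server / service_time   # requests/s one server sustains
drain_time = queued / throughput               # worst-case wait of the last request
print(f"{throughput:.0f} req/s, last queued request waits ~{drain_time:.0f}s")
```

The point of the example: a small server maxconn does not throttle throughput, it just keeps the backlog in haproxy's queue, where it is cheap, instead of on the backend, where each connection costs a worker.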
Also something which works very well is to dedicate servers. If your site
has many statics, you can direct the static requests to a pool of servers
(or caches) so that you don't queue static requests on them and that the
static requests don't eat expensive connection slots.
Hoping this helps,
Willy