How do I limit the number of connections Jetty will accept?
I'm running Jetty 7.2.2 and want to limit the number of connections it will handle, such that when it reaches a limit (e.g. 5000), it will start refusing connections.

Unfortunately, all the Connectors appear to just accept incoming connections as fast as they can and dispatch them to the configured thread pool.
My problem is that I'm running in a constrained environment, and I only have access to 8K file descriptors. If I get a bunch of connections coming in I can quickly run out of file descriptors and get into an inconsistent state.
One option I have is to return an HTTP 503 Service Unavailable, but that still requires me to accept and respond to the connection - and I'd have to keep track of the number of incoming connections somewhere, perhaps by writing a servlet filter.
Is there a better solution to this?
I ended up going with a solution which keeps track of the number of requests and sends a 503 when the load is too high. It's not ideal, and as you can see I had to add a way to always let continuation requests through so they didn't get starved. Works well for my needs:
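The answer's original code sample is not reproduced here. As a rough illustration of the bookkeeping such a filter needs (the class and method names below are my own, not from the answer), the core counting logic could look like this, with a servlet filter calling `tryEnter`/`exit` around `chain.doFilter` and sending a 503 when admission fails:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch: admission gate for a load-limiting servlet filter.
class RequestGate {
    private final int maxRequests;
    private final AtomicInteger active = new AtomicInteger();

    RequestGate(int maxRequests) {
        this.maxRequests = maxRequests;
    }

    /** Try to admit a request; continuation re-dispatches are always admitted. */
    boolean tryEnter(boolean isContinuation) {
        if (isContinuation) {
            // Resumed continuations bypass the limit so they don't get starved.
            active.incrementAndGet();
            return true;
        }
        while (true) {
            int n = active.get();
            if (n >= maxRequests) {
                return false; // caller should respond with 503
            }
            if (active.compareAndSet(n, n + 1)) {
                return true;
            }
        }
    }

    /** Must be called (e.g. in a finally block) for every admitted request. */
    void exit() {
        active.decrementAndGet();
    }
}
```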
The thread pool has a queue associated with it. By default, it is unbounded. However, when creating a thread pool you can provide a bounded queue to base it on. For example:
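The answer's original snippet is not shown; a minimal sketch, assuming the Jetty 7 `QueuedThreadPool(BlockingQueue)` constructor and `Server.setThreadPool` (the queue size and thread count are illustrative values):

```java
import java.util.concurrent.ArrayBlockingQueue;

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public class BoundedQueueServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);

        // Back the pool with a bounded queue: once 512 jobs are waiting,
        // further jobs are rejected instead of queueing without limit.
        QueuedThreadPool pool =
                new QueuedThreadPool(new ArrayBlockingQueue<Runnable>(512));
        pool.setMaxThreads(50);
        server.setThreadPool(pool);

        server.start();
        server.join();
    }
}
```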
This appears to have resolved the problem for me. Otherwise, with the unbounded queue the server ran out of file handles as it started up under heavy load.
I have not deployed Jetty for my own application, but I have used Jetty with some other open-source projects. Based on that experience, there are connector configuration options as below:
acceptors: the number of threads dedicated to accepting incoming connections.
acceptQueueSize: the number of connection requests that can be queued up before the operating system starts to send rejections.
http://wiki.eclipse.org/Jetty/Howto/Configure_Connectors
You need to add them to the connector block in your configuration.
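The configuration block itself did not survive in this copy of the answer; a sketch of what it might look like in `jetty.xml`, assuming the Jetty 7 `SelectChannelConnector` (the port and values are illustrative):

```xml
<!-- jetty.xml fragment (illustrative values) -->
<Call name="addConnector">
  <Arg>
    <New class="org.eclipse.jetty.server.nio.SelectChannelConnector">
      <Set name="port">8080</Set>
      <Set name="acceptors">2</Set>
      <Set name="acceptQueueSize">100</Set>
    </New>
  </Arg>
</Call>
```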
If I understand correctly, this is a lower-level TCP setting that controls the number of incoming connections that will be tracked when the server app calls accept() at a slower rate than the rate of incoming connections. See the second argument to http://download.oracle.com/javase/6/docs/api/java/net/ServerSocket.html#ServerSocket(int,%20int)
This is something entirely different from the number of requests queued in the Jetty QueuedThreadPool. The requests queued there are already fully connected, and are waiting for a thread to become available in the pool, after which their processing can start.
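For illustration, the backlog is the second argument of the `ServerSocket` constructor (the helper name below is my own):

```java
import java.io.IOException;
import java.net.ServerSocket;

class BacklogDemo {
    // Open a listening socket with an explicit accept backlog. The second
    // constructor argument caps how many completed TCP handshakes the OS
    // will queue while the application has not yet called accept().
    static int listenWithBacklog(int backlog) throws IOException {
        try (ServerSocket ss = new ServerSocket(0, backlog)) {
            return ss.getLocalPort(); // bound to an ephemeral port
        }
    }
}
```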
I have a similar problem. I have a CPU-bound servlet (almost no I/O or waiting, so async can't help). I can easily limit the maximum number of threads in the Jetty pool so that thread-switching overhead is kept at bay. However, I can't seem to limit the length of the request queue. This means that as the load grows, the response times grow accordingly, which is not what I want.
What I want is: if all threads are busy and the number of queued requests reaches N, return 503 or some other error code for all further requests instead of growing the queue forever.
I'm aware that I can limit the number of simultaneous requests to the jetty server by using a load balancer (e.g. haproxy), but can it be done with Jetty alone?
P.S.
After writing this, I discovered the Jetty DoS filter, and it seems it can be configured to reject incoming requests with 503 if a preconfigured concurrency level is exceeded :-)
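A sketch of such a DoSFilter setup in `web.xml`, assuming `jetty-servlets` is on the classpath; the parameter values are illustrative, and `delayMs=-1` tells the filter to reject over-limit requests outright rather than delay them:

```xml
<!-- web.xml fragment (illustrative values) -->
<filter>
  <filter-name>DoSFilter</filter-name>
  <filter-class>org.eclipse.jetty.servlets.DoSFilter</filter-class>
  <init-param>
    <param-name>maxRequestsPerSec</param-name>
    <param-value>25</param-value>
  </init-param>
  <init-param>
    <!-- -1 = reject over-limit requests immediately instead of delaying -->
    <param-name>delayMs</param-name>
    <param-value>-1</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>DoSFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```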
The more modern way to restrict incoming connections is the ConnectionLimit class. It is useful if you want to restrict connections before they reach the application/servlet level. Quote from the JavaDoc:
Code example:
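A minimal sketch, assuming Jetty 9.4+ where `org.eclipse.jetty.server.ConnectionLimit` is available:

```java
import org.eclipse.jetty.server.ConnectionLimit;
import org.eclipse.jetty.server.Server;

public class LimitedServer {
    public static void main(String[] args) throws Exception {
        Server server = new Server(8080);

        // Stop accepting new connections across all connectors once 5000
        // are open; accepting resumes when the count drops below the limit.
        server.addBean(new ConnectionLimit(5000, server));

        server.start();
        server.join();
    }
}
```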