How can I limit the number of connections Jetty will accept?

Posted 2024-11-02 07:52:53

I'm running Jetty 7.2.2 and want to limit the number of connections it will handle, such that when it reaches a limit (e.g. 5000), it will start refusing connections.

Unfortunately, all the Connectors appear to just go ahead and accept incoming connections as fast as they can and dispatch them to the configured thread pool.

My problem is that I'm running in a constrained environment, and I only have access to 8K file descriptors. If I get a bunch of connections coming in I can quickly run out of file descriptors and get into an inconsistent state.

One option I have is to return an HTTP 503 Service Unavailable, but that still requires me to accept and respond to the connection - and I'd have to keep track of the number of incoming connections somewhere, perhaps by writing a servlet filter.

Is there a better solution to this?

6 Answers

野味少女 2024-11-09 07:52:53

I ended up going with a solution which keeps track of the number of requests and sends a 503 when the load is too high. It's not ideal, and as you can see I had to add a way to always let continuation requests through so they didn't get starved. Works well for my needs:

import java.io.IOException;
import java.util.concurrent.Semaphore;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

import org.apache.log4j.Logger;
import org.eclipse.jetty.continuation.ContinuationSupport;

public class MaxRequestsFilter implements Filter {

    private static Logger cat = Logger.getLogger(MaxRequestsFilter.class.getName());

    private static final int DEFAULT_MAX_REQUESTS = 7000;
    private Semaphore requestPasses;

    @Override
    public void destroy() {
        cat.info("Destroying MaxRequestsFilter");
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException {

        long start = System.currentTimeMillis();
        cat.debug("Filtering with MaxRequestsFilter, current passes are: " + requestPasses.availablePermits());
        boolean gotPass = requestPasses.tryAcquire();
        // Resumed continuations are always let through, so they don't get starved
        boolean resumed = ContinuationSupport.getContinuation(request).isResumed();
        try {
            if (gotPass || resumed) {
                chain.doFilter(request, response);
            } else {
                ((HttpServletResponse) response).sendError(HttpServletResponse.SC_SERVICE_UNAVAILABLE);
            }
        } finally {
            if (gotPass) {
                requestPasses.release();
            }
        }
        cat.debug("Filter duration: " + (System.currentTimeMillis() - start) + " resumed is: " + resumed);
    }

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {

        cat.info("Creating MaxRequestsFilter");

        int maxRequests = DEFAULT_MAX_REQUESTS;
        requestPasses = new Semaphore(maxRequests, true);
    }

}
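For completeness, a filter like this still has to be registered in the webapp. A minimal web.xml entry might look like the following (the package name com.example is a placeholder for wherever the class actually lives):

```xml
<filter>
  <filter-name>maxRequestsFilter</filter-name>
  <filter-class>com.example.MaxRequestsFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>maxRequestsFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```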
难如初 2024-11-09 07:52:53

The thread pool has a queue associated with it. By default, it is unbounded. However, when creating a thread pool you can provide a bounded queue to base it on. For example:

Server server = new Server();
LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>(maxQueueSize);
ExecutorThreadPool pool = new ExecutorThreadPool(minThreads, maxThreads, maxIdleTime, TimeUnit.MILLISECONDS, queue);
server.setThreadPool(pool);

This appears to have resolved the problem for me. Otherwise, with the unbounded queue the server ran out of file handles as it started up under heavy load.
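The effect of bounding the queue can be sketched with the standard library alone: once a bounded LinkedBlockingQueue reaches capacity, offer() returns false instead of growing, which is what stops the server from queueing work (and holding file descriptors) without limit. A minimal, self-contained sketch (the capacity and task counts are arbitrary):

```java
import java.util.concurrent.LinkedBlockingQueue;

public class BoundedQueueDemo {
    // Offers `tasks` items to a queue bounded at `capacity`;
    // returns how many were accepted before offer() started failing.
    static int offerAll(int capacity, int tasks) {
        LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>(capacity);
        int accepted = 0;
        for (int i = 0; i < tasks; i++) {
            Runnable task = new Runnable() { public void run() { } };
            if (queue.offer(task)) {
                accepted++;
            }
        }
        return accepted;
    }

    public static void main(String[] args) {
        // 5 offers against a capacity of 3: only 3 are accepted
        System.out.println(offerAll(3, 5)); // prints 3
    }
}
```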

呆° 2024-11-09 07:52:53

I have not deployed Jetty for my own application; however, I have used Jetty with some other open-source projects. Based on that experience, the connector has the following configuration options:

acceptors: the number of threads dedicated to accepting incoming connections.

acceptQueueSize: the number of connection requests that can be queued up before the operating system starts sending rejections.

http://wiki.eclipse.org/Jetty/Howto/Configure_Connectors

You need to add them to the following block in your configuration:

<Call name="addConnector">
  <Arg>
      <New class="org.mortbay.jetty.nio.SelectChannelConnector">
        <Set name="port"><SystemProperty name="jetty.port" default="8080"/></Set>
        <Set name="maxIdleTime">30000</Set>
        <Set name="Acceptors">20</Set>
        <Set name="confidentialPort">8443</Set>
      </New>
  </Arg>
</Call>
向地狱狂奔 2024-11-09 07:52:53

acceptQueueSize

If I understand correctly, this is a lower-level TCP setting that controls the number of incoming connections that will be tracked when the server app accept()s at a slower rate than the rate of incoming connections. See the second argument to http://download.oracle.com/javase/6/docs/api/java/net/ServerSocket.html#ServerSocket(int,%20int)

This is something entirely different from the number of requests queued in the Jetty QueuedThreadPool. The requests queued there are already fully connected, and are waiting for a thread to become available in the pool, after which their processing can start.
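That constructor argument can be seen directly on plain java.net.ServerSocket; the acceptQueueSize of a Jetty connector ends up as the same OS-level backlog hint. A trivial sketch (port 0 means an ephemeral port; the backlog value is arbitrary):

```java
import java.io.IOException;
import java.net.ServerSocket;

public class BacklogDemo {
    // Opens a listener on an ephemeral port with a small accept backlog.
    // The second constructor argument is the backlog: a hint to the OS
    // for how many not-yet-accept()ed connections it should queue.
    static int openWithBacklog(int backlog) throws IOException {
        ServerSocket ss = new ServerSocket(0, backlog);
        int port = ss.getLocalPort();
        ss.close();
        return port;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(openWithBacklog(2) > 0);
    }
}
```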

I have a similar problem. I have a CPU-bound servlet (almost no I/O or waiting, so async can't help). I can easily limit the maximum number of threads in the Jetty pool so that thread-switching overhead is kept at bay. I cannot, however, seem to limit the length of the request queue. This means that as the load grows, the response times grow with it, which is not what I want.

If all threads are busy and the number of queued requests reaches N, I want to return 503 (or some other error code) for all further requests, instead of growing the queue forever.

I'm aware that I can limit the number of simultaneous requests to the jetty server by using a load balancer (e.g. haproxy), but can it be done with Jetty alone?

P.S.
After writing this, I discovered the Jetty DoS filter, and it seems it can be configured to reject incoming requests with 503 if a preconfigured concurrency level is exceeded :-)
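A minimal registration of that DoSFilter (shipped in the jetty-servlets jar as org.eclipse.jetty.servlets.DoSFilter) might look like the sketch below. The parameter value is illustrative only, and note the filter throttles by request rate per connection rather than by raw concurrency:

```xml
<filter>
  <filter-name>DoSFilter</filter-name>
  <filter-class>org.eclipse.jetty.servlets.DoSFilter</filter-class>
  <init-param>
    <param-name>maxRequestsPerSec</param-name>
    <param-value>25</param-value>
  </init-param>
</filter>
<filter-mapping>
  <filter-name>DoSFilter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```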

眼角的笑意。 2024-11-09 07:52:53
<Configure id="Server" class="org.eclipse.jetty.server.Server">
    <Set name="ThreadPool">
      <New class="org.eclipse.jetty.util.thread.QueuedThreadPool">
        <!-- specify a bounded queue -->
        <Arg>
          <New class="java.util.concurrent.ArrayBlockingQueue">
            <Arg type="int">6000</Arg>
          </New>
        </Arg>
        <Set name="minThreads">10</Set>
        <Set name="maxThreads">200</Set>
        <Set name="detailedDump">false</Set>
      </New>
    </Set>
</Configure>
半透明的墙 2024-11-09 07:52:53

The more modern way to restrict incoming connections is the ConnectionLimit class. It is useful if you want to restrict connections before they reach the application/servlet level.

Quote from the JavaDoc:

This listener applies a limit to the number of connections, which when exceeded results in a call to AbstractConnector.setAccepting(boolean) to prevent further connections being received.

Code example:

   Server server = new Server();
   server.addBean(new ConnectionLimit(5000, server));
   ...
   server.start();