How do I handle 2000+ requests per second on Tomcat?
I am developing an SMS application in Java. My clients send queries via SMS, which are forwarded to my server as HTTP requests through an SMS gateway. My application processes each request and sends the response back to the client, again through the SMS gateway. A response is at most 300 characters. I'm expecting very high traffic (2000 requests/sec). I want to host the application with a web hosting company (I'm considering mochahost). What factors should I consider before hosting (in terms of RAM, CPU, etc.), and what are the likely bottlenecks? Can a dedicated Tomcat server handle such high traffic if tuned properly? What are your suggestions?
There is no database interaction (I'm only using Java heap memory). I ran a test with JMeter (100 requests/sec): heap memory usage was 35 MB and the average response time was 532 ms. I'm also not using any session variables.
3 Answers
It's difficult to answer your question without knowing what you're doing in your servlet, but the short answer is that it really doesn't have much to do with Tomcat.
We currently use Dell R410s (dual quad-core, 32 GB RAM) for our Tomcat servers. For a REST service that talks to a membase cluster on the back end, we can easily process ~15k requests/second on a single server (using the Jersey JAX-RS implementation). We currently have 4 of these behind an F5 load balancer, and each request is serviced in about 10 ms on average.
What it really comes down to is concurrency: how long does your servlet take to do what it needs to do with a request? You have a thread tied up for every concurrent request, so if you're trying to handle 2000 req/sec and a single request takes 500 ms to process, you're going to need a fair bit of hardware. The issue isn't Tomcat; it's the resources available to your servlet.
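As a rough back-of-the-envelope check (Little's law, taking the 500 ms figure above as an assumed average per-request latency):

```
concurrent requests ≈ throughput × latency
                    ≈ 2000 req/sec × 0.5 sec
                    ≈ 1000 requests (and threads) in flight at any moment
```

That is roughly five times Tomcat's default pool of 200 threads, which is why the per-request processing time matters far more than the container itself.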
A single Tomcat server with default settings on modest hardware should easily handle 2k requests/second, assuming it doesn't have too much work to do per request. If processing one request takes 500+ ms, you'll probably need to bump up the number of threads in the thread pool, and you might start pushing the limits. Alternatively, if you can offload some of that work to other threads, it will speed up the response times, and you could keep the default 200 threads. Then it's just a question of whether your worker threads can keep up with incoming requests. That depends on whether your load is constant or bursty and on how much delay you can accept in processing. This doesn't even address HA, DR, and what your acceptable downtime is. It's all a big balancing act, and there are far too many variables to give a cut-and-dried answer.
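For concreteness, here is a minimal sketch of the "offload the work to another thread" idea using the Servlet 3.0 async API. The servlet class, the `process()` helper, and the pool size are illustrative assumptions, not anything from the question; the point is only that the Tomcat connector thread is released immediately while a separate worker pool does the slow part:

```java
import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/sms", asyncSupported = true)
public class SmsQueryServlet extends HttpServlet {

    // Size this pool to your measured throughput, not to Tomcat's connector threads.
    private final ExecutorService workers = Executors.newFixedThreadPool(200);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();   // frees the container thread right away
        ctx.setTimeout(5_000);                 // fail fast under overload
        workers.submit(() -> {
            try {
                String answer = process();     // the ~500 ms of real work
                ctx.getResponse().getWriter().write(answer);
            } catch (IOException e) {
                // log and fall through; the SMS gateway will see a failed response
            } finally {
                ctx.complete();                // always release the request
            }
        });
    }

    private String process() {
        // placeholder for the real query handling (max 300 characters per the question)
        return "OK";
    }
}
```

If you instead keep the work on the container thread, the knob to turn is the Connector's maxThreads attribute in Tomcat's server.xml (default 200).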
It looks like you may have to implement a cluster / load balancing approach. Take a look at this for an example.