JBoss CPU utilization problem
I am using JBoss AS 4.2.3 along with the Seam framework. My CPU usage increases as the number of users increases, and it hits 99% with just 80 users. We also use Hibernate, EJB3, and Apache with mod_jk for load balancing.
When I took a thread dump, all the runnable threads were doing the same activity, with the following trace:
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:129)
at org.apache.coyote.ajp.AjpProcessor.read(AjpProcessor.java:1012)
at org.apache.coyote.ajp.AjpProcessor.readMessage(AjpProcessor.java:1091)
at org.apache.coyote.ajp.AjpProcessor.process(AjpProcessor.java:384)
at org.apache.coyote.ajp.AjpProtocol$AjpConnectionHandler.process(AjpProtocol.java:366)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:446)
at java.lang.Thread.run(Thread.java:662)
I am not able to interpret this from the stack trace. I also find that even after the users have logged out, the CPU utilization stays the same, with the threads in the same state.
2 Answers
These threads are attempting to read from a socket connection. In this case they are waiting for the next request to be sent to the server from mod_jk in Apache. This is quite normal, and they are probably not the reason for your CPU usage. At this point you really need to go and run your application through a profiler.
If you are unable to run a profiler on the system (i.e. it's a production box), the next best thing is to take a number of stack dumps a couple of seconds apart and then go through them by hand, matching up the thread IDs. You need to look for the threads that are running your code and don't seem to have changed between dumps.
It is a very tedious task and doesn't always get clear results, but without a profiler or some sort of instrumentation you won't be able to find where all that CPU is going.
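If you want to script that comparison rather than diff the dumps entirely by hand, a minimal sketch of the same idea using the standard java.lang.management API is shown below. The class and method names are made up for illustration, and it has to run inside the JBoss JVM (for example from a throwaway JSP or MBean you deploy yourself) so that the ThreadMXBean sees the server's threads:

    import java.lang.management.ManagementFactory;
    import java.lang.management.ThreadInfo;
    import java.lang.management.ThreadMXBean;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical helper: samples per-thread CPU time twice, a few seconds
    // apart, and prints the threads that burned the most CPU in between.
    public final class ThreadCpuSampler {

        public static void printHotThreads(long intervalMs) throws InterruptedException {
            ThreadMXBean mx = ManagementFactory.getThreadMXBean();
            if (!mx.isThreadCpuTimeSupported()) {
                System.out.println("Per-thread CPU time is not supported on this JVM");
                return;
            }
            mx.setThreadCpuTimeEnabled(true);

            // First sample: CPU time in nanoseconds, keyed by thread id.
            Map<Long, Long> firstSample = new HashMap<Long, Long>();
            for (long id : mx.getAllThreadIds()) {
                firstSample.put(Long.valueOf(id), Long.valueOf(mx.getThreadCpuTime(id)));
            }

            // Wait roughly as long as you would between manual thread dumps.
            Thread.sleep(intervalMs);

            // Second sample: report only threads whose CPU time actually grew.
            for (long id : mx.getAllThreadIds()) {
                Long start = firstSample.get(Long.valueOf(id));
                long end = mx.getThreadCpuTime(id);
                if (start == null || start.longValue() < 0 || end < 0) {
                    continue; // thread started or died during the interval
                }
                long deltaMs = (end - start.longValue()) / 1000000L;
                if (deltaMs > 100) { // skip threads that were mostly idle
                    ThreadInfo info = mx.getThreadInfo(id);
                    String name = (info != null) ? info.getThreadName() : ("thread " + id);
                    System.out.println(deltaMs + " ms of CPU used by: " + name);
                }
            }
        }
    }

Threads that keep showing a large CPU delta across several intervals are the ones whose stack traces you want to line up in the dumps; AJP workers blocked in socketRead0 should show almost no delta.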
Review your AJP configuration between Apache and JBoss, as described in https://developer.jboss.org/wiki/OptimalModjk12Configuration
The problem
But such a high number of threads could also come from another source. As described there:
Whatever your problem is, the first thing to do is to review your timeout configuration!
What can you do?
You need to adjust the configuration on both the JBoss and the Apache side.
JBoss side
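The setting the wiki highlights on the JBoss side is a connectionTimeout on the AJP connector in deploy/jboss-web.deployer/server.xml, so that idle connections held open by mod_jk are eventually closed instead of tying up AJP worker threads. The snippet below is only an illustration of that idea; take the exact figures from the wiki and size maxThreads for your own load:

    <!-- deploy/jboss-web.deployer/server.xml: AJP 1.3 connector (example values) -->
    <Connector port="8009" address="${jboss.bind.address}"
               protocol="AJP/1.3" emptySessionPath="true"
               enableLookups="false" redirectPort="8443"
               maxThreads="350"
               connectionTimeout="600000" />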
Apache side
worker.properties file
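A matching worker.properties could look roughly like the sketch below (node names and hosts are placeholders for your own topology). Note that connection_pool_timeout is given in seconds and should pair with the connector's connectionTimeout, which is in milliseconds:

    # worker.properties - illustrative two-node load balancer
    worker.list=loadbalancer,status

    worker.node1.port=8009
    worker.node1.host=node1.example.com
    worker.node1.type=ajp13
    worker.node1.lbfactor=1
    # 600 s here pairs with connectionTimeout="600000" (ms) on the JBoss connector
    worker.node1.connection_pool_timeout=600

    worker.node2.port=8009
    worker.node2.host=node2.example.com
    worker.node2.type=ajp13
    worker.node2.lbfactor=1
    worker.node2.connection_pool_timeout=600

    worker.loadbalancer.type=lb
    worker.loadbalancer.balance_workers=node1,node2
    worker.loadbalancer.sticky_session=1

    worker.status.type=status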
Apache configuration
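On the Apache side the main point is to keep the number of httpd workers in line with what JBoss can actually serve: MaxClients should not be far above the maxThreads of the AJP connectors behind it, otherwise Apache opens more connections than JBoss has threads for. A prefork-MPM sketch with example numbers only:

    # httpd.conf - prefork MPM, example sizing only
    <IfModule prefork.c>
        StartServers          8
        MinSpareServers       5
        MaxSpareServers      20
        ServerLimit         350
        MaxClients          350
        MaxRequestsPerChild   0
    </IfModule>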
In the link you can find further configuration to optimize this scenario even more.