Apache and JBoss using AJP (mod_jk) giving spikes in thread count

Posted 2024-08-13 15:59:34


We use Apache with JBoss to host our application, but we have found some issues related to mod_jk's thread handling.

Our website is a low-traffic site, with at most 200-300 concurrent users during peak activity. As the traffic grows (not in terms of concurrent users, but in terms of the cumulative requests reaching our servers), the server stops serving requests for long stretches; it doesn't crash, but it cannot serve requests for up to 20 minutes. The JBoss server console shows 350 busy threads on both servers even though there is plenty of free memory, say more than 1-1.5 GB (we use 2 servers for JBoss, both 64-bit, with 4 GB of RAM allocated to JBoss).

To investigate the problem we used the JBoss and Apache web consoles, and we saw threads sitting in the S state for minutes at a time, even though our pages take only around 4-5 seconds to be served.

We took a thread dump and found that the threads were mostly in the WAITING state, which means they were waiting indefinitely. These threads did not belong to our application classes but to the AJP connector on port 8009.

Could somebody help me with this? Others might have hit this issue and solved it somehow. Let me know if any more information is required.

Also, is mod_proxy better than mod_jk, or does mod_proxy have other problems that could be fatal for me if I switch to it?

The versions I used are as follows:

Apache: 2.0.52
JBoss: 4.2.2
mod_jk: 1.2.20
JDK: 1.6
Operating system: RHEL 4

Thanks for the help.

Guys, we finally found the fix for the configuration above. It is the use of APR, mentioned here: http://community.jboss.org/thread/153737. As many people correctly point out in the answers below, it is a connector issue. Earlier we had a temporary workaround of tuning Hibernate and improving response times; the full fix is APR.
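
For anyone hitting the same problem, a rough sketch of what the APR-backed AJP connector setup can look like in JBoss Web's server.xml is shown below (the file path, maxThreads and connectionTimeout values are illustrative assumptions, not a verbatim copy of the configuration used here):

    <!-- server/<profile>/deploy/jboss-web.deployer/server.xml (illustrative fragment) -->
    <!-- Loads the Tomcat native (APR) library when it is found on java.library.path -->
    <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />

    <!-- AJP connector; with the native library loaded, JBoss Web uses the APR implementation.
         maxThreads and connectionTimeout (milliseconds) are placeholder values. -->
    <Connector protocol="AJP/1.3" port="8009" address="${jboss.bind.address}"
               emptySessionPath="true" enableLookups="false" redirectPort="8443"
               maxThreads="350" connectionTimeout="600000" />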


Comments (7)

把时间冻结 2024-08-20 15:59:34


We are experiencing similar issues. We are still working on solutions, but it looks like a lot of answers can be found here:

http://www.jboss.org/community/wiki/OptimalModjk12Configuration

Good luck!
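
In case that wiki page moves, the mod_jk side of its advice boils down to a workers.properties along these lines; the worker name, host and timeout values below are assumptions for illustration, and the key point is pairing a connection_pool_timeout here with a matching connectionTimeout on the JBoss AJP connector:

    # workers.properties (illustrative sketch)
    worker.list=node1

    worker.node1.type=ajp13
    worker.node1.host=localhost
    worker.node1.port=8009
    # Drop idle backend connections after 600 seconds; pair this with
    # connectionTimeout="600000" (milliseconds) on the JBoss AJP connector.
    worker.node1.connection_pool_timeout=600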

伪心 2024-08-20 15:59:34


Deploy the Apache native (APR) libraries under jboss/bin/native.

Edit your JBoss run.sh to make sure it looks for the native libs in the right folder.

This will force JBoss to use native AJP connector threads rather than the default pure-Java ones.
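
A sketch of what that change can look like, assuming the native libraries were unpacked into $JBOSS_HOME/bin/native (the exact file and variable names below are assumptions; adapt them to your install):

    # In $JBOSS_HOME/bin/run.conf (sourced by run.sh), point the JVM at the native libs:
    JAVA_OPTS="$JAVA_OPTS -Djava.library.path=$JBOSS_HOME/bin/native"

    # Or let the dynamic linker find them instead:
    LD_LIBRARY_PATH=$JBOSS_HOME/bin/native:$LD_LIBRARY_PATH
    export LD_LIBRARY_PATH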

丿*梦醉红颜 2024-08-20 15:59:34


You should also take a look at the JBoss Jira issue, titled "AJP Connector Threads Hung in CLOSE_WAIT Status":

https://jira.jboss.org/jira/browse/JBPAPP-366

为人所爱 2024-08-20 15:59:34


What we did to sort this issue out is as follows:

 <property name="hibernate.cache.use_second_level_cache">false</property>


 <property name="hibernate.search.default.directory_provider">org.hibernate.search.store.FSDirectoryProvider</property>
    <property name="hibernate.search.Rules.directory_provider">
        org.hibernate.search.store.RAMDirectoryProvider 
    </property>

    <property name="hibernate.search.default.indexBase">/usr/local/lucene/indexes</property>

    <property name="hibernate.search.default.indexwriter.batch.max_merge_docs">1000</property>
    <property name="hibernate.search.default.indexwriter.transaction.max_merge_docs">10</property>

    <property name="hibernate.search.default.indexwriter.batch.merge_factor">20</property>
    <property name="hibernate.search.default.indexwriter.transaction.merge_factor">10</property>

 <property name ="hibernate.search.reader.strategy">not-shared</property>   
 <property name ="hibernate.search.worker.execution">async</property>   
 <property name ="hibernate.search.worker.thread_pool.size">100</property>  
 <property name ="hibernate.search.worker.buffer_queue.max">300</property>  

 <property name ="hibernate.search.default.optimizer.operation_limit.max">1000</property>   
 <property name ="hibernate.search.default.optimizer.transaction_limit.max">100</property>  

 <property name ="hibernate.search.indexing_strategy">manual</property> 

The above parameters ensured that the worker threads were not blocked by Lucene and Hibernate Search. Hibernate's default optimizer made our life easy, so I consider this setting very important.

We also removed C3P0 connection pooling and used the built-in JDBC connection pool, so we commented out the section below.

    <!-- For JDBC connection pool (use the built-in) -->

    <property name="connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
    <!-- DEPRECATED very expensive property name="c3p0.validate" -->
    <!-- seconds -->

After doing all this, we were able to considerably reduce the time an AJP thread took to serve a request, and threads started returning to the R state after serving a request instead of sitting in the S state.
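
For context, with the C3P0 provider commented out Hibernate falls back to its built-in (DriverManager-based) connection pool, which needs little more than a pool size; a minimal sketch, with an assumed value:

    <!-- The built-in Hibernate pool kicks in once connection.provider_class is removed;
         the pool size below is an illustrative value, not the poster's setting. -->
    <property name="hibernate.connection.pool_size">20</property>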

栖迟 2024-08-20 15:59:34


There is a bug in Tomcat 6 that was filed recently. It concerns the HTTP connector, but the symptoms sound the same:

https://issues.apache.org/bugzilla/show_bug.cgi?id=48843#c1

趁年轻赶紧闹 2024-08-20 15:59:34


We were having this issue in a JBoss 5 environment. The cause was a web service that took longer to respond than JBoss/Tomcat allowed, which would cause the AJP thread pool to eventually exhaust its available threads; it would then stop responding. Our solution was to switch the web service to a Request/Acknowledge pattern rather than Request/Respond, which let it respond within the timeout period every time. Granted, this doesn't solve the underlying JBoss configuration issue, but it was easier for us in our context than tuning JBoss.

那些过往 2024-08-20 15:59:34


There is a bug related to the AJP connector executor leaking threads, and the solution is explained here: "Jboss AJP thread pool not released idle threads". In summary, AJP thread-pool connections have no timeout by default and will persist permanently once established. Hope this helps.
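
If your JBoss Web build (Tomcat 6 based) supports the shared Executor element, one way to put a bound on idle AJP threads and connections is roughly the following server.xml sketch; the names and numbers are illustrative assumptions, and whether it helps depends on the bug described in that link:

    <!-- Shared thread pool whose idle threads are reclaimed after 60 seconds -->
    <Executor name="ajpExecutor" namePrefix="ajp-" maxThreads="350"
              minSpareThreads="10" maxIdleTime="60000" />

    <!-- AJP connector using that executor; connectionTimeout (milliseconds)
         bounds otherwise-idle AJP connections -->
    <Connector protocol="AJP/1.3" port="8009" executor="ajpExecutor"
               connectionTimeout="600000" redirectPort="8443" />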
