Key considerations/gotchas for a Java web application that needs to work in a clustered environment?
We have a Java web application that uses Spring and Hibernate and has a fairly standard architecture. Currently the application supports SOAP-based clients in addition to a Flex GUI client that communicates via AMF/HTTP to a BlazeDS backend. Today the application runs only in Tomcat, but support for JBoss and WebSphere is forthcoming.
We are now working to ensure that the application can run in a clustered environment for the purposes of scalability and failover. This question is primarily about the Java application server tier (rather than the database tier). Other than login session information that is managed by Spring Security, the application is stateless.
What do we need to consider when supporting a clustered environment?
Looking for any tips around logging, JNDI, configuration, file I/O, login sessions, etc. -- anything!
That depends on a lot of things and the most useful advice given would almost always be the most specific one, but I suppose you are looking for general experiences/suggestions here. :-)
Though a bit outdated, this article explains the basics of clustering a Java EE application fairly well.
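As a concrete starting point for container-managed clustering: the Servlet specification defines a `<distributable/>` element that tells a clustered container to replicate HTTP sessions. A minimal web.xml fragment (the container, e.g. Tomcat, still needs its own cluster configuration, and every session attribute must be Serializable):

```xml
<!-- web.xml: mark the webapp as distributable so a clustered container
     will replicate HTTP sessions across nodes. -->
<distributable/>
```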
Regarding personal experiences, in one of my previous projects, rather than going with the in-built session replication/fail-over offered by containers, we implemented our own session capability for our application. The benefit of this was the ability to access user specific data in a server/application agnostic manner. Our session store was basically a distributed hash map backed by the in-memory data grid library Hazelcast and it worked out well. I did write a bit about it here.
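The session store described above can be sketched roughly as follows. This is a hypothetical illustration, not the project's actual code: in the real setup the backing map would be a Hazelcast `IMap` obtained via `hazelcastInstance.getMap("sessions")`; a `ConcurrentHashMap` stands in here so the sketch is self-contained.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical server/application-agnostic session store: a map of
// session IDs to attribute maps. With Hazelcast, the outer map would be
// a distributed IMap, so every node in the cluster sees the same data.
public class DistributedSessionStore {
    private final Map<String, Map<String, Object>> sessions;

    public DistributedSessionStore(Map<String, Map<String, Object>> backingMap) {
        this.sessions = backingMap;
    }

    // Store a user-specific attribute under the given session ID.
    public void put(String sessionId, String key, Object value) {
        sessions.computeIfAbsent(sessionId, id -> new ConcurrentHashMap<>())
                .put(key, value);
    }

    // Read an attribute; returns null if the session or key is absent.
    public Object get(String sessionId, String key) {
        Map<String, Object> attrs = sessions.get(sessionId);
        return attrs == null ? null : attrs.get(key);
    }

    // Drop the whole session, e.g. on logout or timeout.
    public void invalidate(String sessionId) {
        sessions.remove(sessionId);
    }
}
```

Because any node can read and write the shared map, a request can land on any server in the cluster without losing the user's session data.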
The most important part of going live with clustering is "testing" whether the clustering actually works. I know this sounds obvious, but it is often overlooked. Make sure you do thorough performance and regression tests to verify that clustering behaves as expected.
Since each server will now have its own connection pool, make sure you revisit the connection pool configuration in light of the fact that your load will now be distributed across 'n' server instances.
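For example, with Hibernate's c3p0 integration the per-node pool might be capped like this. The property names are the real c3p0 settings; the values are purely illustrative, assuming a database that allows ~100 connections shared by a 4-node cluster:

```properties
# Hypothetical per-node sizing: ~100 total DB connections / 4 nodes.
hibernate.c3p0.min_size=5
hibernate.c3p0.max_size=25
hibernate.c3p0.timeout=300
hibernate.c3p0.max_statements=50
```

The key point is that the pool limits that were tuned for a single server must be divided across the cluster, or 'n' nodes will collectively exhaust the database's connection limit.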
Oh and BTW, good luck with your "Websphere" integration... ;-)