Can newer web containers with Servlet 3 extend the maximum number of concurrent users of BlazeDS?
BlazeDS is implemented as a servlet and thus limited to roughly hundreds of simultaneous users.
I wonder whether the more recent web containers that support Servlet 3 (Tomcat 7, GlassFish/Grizzly, Jetty, etc.) could be used to create NIO endpoints and push the number of simultaneous users into the thousands.
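To make the idea concrete, here is a rough sketch of the kind of endpoint I have in mind, using only the standard Servlet 3 async API (the servlet name and URL are made up, and this is not how BlazeDS is actually wired):

```java
import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical long-poll endpoint: asyncSupported lets the container release
// the request thread, and an NIO connector keeps the socket open without a
// blocked thread per connected client.
@WebServlet(urlPatterns = "/amfpoll", asyncSupported = true)
public class AsyncPollServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        AsyncContext ctx = req.startAsync();   // detach from the container thread
        ctx.setTimeout(30000);                 // long-poll timeout (30 s)
        // ...register ctx somewhere; a messaging layer would later write the
        // pushed data to ctx.getResponse() and call ctx.complete()
    }
}
```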
Is this a valid and practical solution? Anyone do this in production?
Something like a mature version of this: http://flex.sys-con.com/node/720304
If this was of great importance back then, why has there been no effort to implement NIO endpoints now that Servlet 3 is widely available? (Note: I'm a newbie here, so feel free to state the obvious if I'm missing something.)
Benefit of NIO: http://www.javalobby.org/java/forums/t92965.html
If not, is a load balancer and multiple application servers, each having an instance of BlazeDS, the recommended solution (outside of going to LCDS, etc.)?
GraniteDS & Asynchronous Servlets
GraniteDS is, as far as I know, the only solution that implements asynchronous servlets for real-time messaging, i.e. data push. This feature is available not only on Servlet 3 containers (Tomcat 7, JBoss 7, Jetty 8, GlassFish 3, etc.) but also on older or other containers with specific asynchronous support (e.g. Tomcat 6/CometProcessor, WebLogic 9+/AbstractAsyncServlet, etc.).
Other solutions either don't have this feature (BlazeDS) or use RTMP (LCDS, WebORB and the latest version of Clear Toolkit). I can't say much about the RTMP implementations, but BlazeDS clearly lacks a scalable real-time messaging implementation, as it uses only the synchronous servlet model.
If you need to handle many thousands of concurrent users, you can even create a cluster of GraniteDS servers to further improve scalability and robustness (see this video for example).
Asynchronous Servlets Performance
The scalability of asynchronous servlets vs. classical servlets has been benchmarked several times, with impressive results. See, for example, this post on the Jetty blog:
Classical synchronous model vs. Comet asynchronous model (the comparison charts are in the linked post).
This kind of ratio can be roughly expected from asynchronous implementations other than Jetty as well, and using Flex/AMF3 instead of plain-text HTTP requests shouldn't change the results much.
Why Asynchronous Servlets?
The classical (synchronous) servlet model is acceptable when each request is processed immediately.
The problem with data push is that there is no true "data push" in the HTTP protocol: the server cannot initiate a call to the client to send data; it can only respond to requests. That's why Comet implementations rely on a different model.
With synchronous servlet processing, each request is handled by one dedicated server thread. In the context of data push, however, that thread spends most of its time just waiting for data to become available, doing nothing while still consuming significant server resources.
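As a minimal sketch of that synchronous pattern (the queue and class names are illustrative, not BlazeDS code), note how the container thread is parked inside doGet until a message arrives or the poll times out, so every waiting client costs one idle thread:

```java
import java.io.IOException;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Synchronous long-poll: one dedicated container thread per waiting client,
// blocked for up to 30 seconds while doing no useful work.
public class BlockingPollServlet extends HttpServlet {
    private final BlockingQueue<String> messages = new LinkedBlockingQueue<String>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        try {
            String msg = messages.poll(30, TimeUnit.SECONDS); // thread blocked here
            resp.getWriter().write(msg != null ? msg : "");   // empty body on timeout
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```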
The whole point of asynchronous processing is to let the servlet container reuse these (mostly) idle threads to process other incoming requests, which is why you can expect dramatic improvements in scalability when your application needs real-time messaging features.
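By contrast, here is a hedged sketch of the asynchronous version (again with made-up names): doGet only parks the AsyncContext and returns, so the container thread goes straight back to the pool, and whichever thread later produces a message writes the response and completes the exchange:

```java
import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import javax.servlet.AsyncContext;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebServlet(urlPatterns = "/push", asyncSupported = true)
public class AsyncPushServlet extends HttpServlet {
    // Clients currently parked and waiting for the next message (illustrative registry).
    private final Queue<AsyncContext> waiting = new ConcurrentLinkedQueue<AsyncContext>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        AsyncContext ctx = req.startAsync();
        ctx.setTimeout(30000);
        waiting.add(ctx);        // return immediately; no thread is held per client
    }

    // Called by the messaging layer whenever there is data to push.
    public void publish(String message) {
        AsyncContext ctx;
        while ((ctx = waiting.poll()) != null) {
            try {
                ctx.getResponse().getWriter().write(message);
            } catch (IOException e) {
                // client probably disconnected; fall through and complete anyway
            }
            ctx.complete();      // finish the parked request
        }
    }
}
```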
You can find many other resources on the Web explaining this mechanism; just google Comet.