Best way to configure a thread pool for a Java RIA client application
I've a Java client which accesses our server side over HTTP, making several small requests to load each new page of data. We maintain a thread pool to handle all non-UI processing, so any background client-side tasks and any tasks which want to make a connection to the server. I've been looking into some performance issues and I'm not certain we've got our thread pool set up as well as possible. Currently we use a ThreadPoolExecutor with a core pool size of 8, and we use a LinkedBlockingQueue for the work queue, so the max pool size is ignored. No doubt there's no simple "do this one thing in all situations" answer, but are there any best practices? My thinking at the moment is:
1) I'll switch to using a SynchronousQueue instead of a LinkedBlockingQueue so the pool can grow to the max pool size figure.
2) I'll set the max pool size to be unlimited.
Basically my current fear is that occasional performance issues on the server side are causing unrelated client-side processing to halt due to the upper limit on the thread pool size. My fear with unbounding it is the additional hit of managing those threads on the client, though that may just be the lesser of two evils.
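For reference, here's roughly what our current setup looks like and the change I'm considering (class and method names below are just illustrative, not our actual code):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.SynchronousQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class ClientExecutors {

        // Current setup: core pool of 8. Because the LinkedBlockingQueue is
        // unbounded, the pool never grows past the core size and the max is ignored.
        static ExecutorService current() {
            return new ThreadPoolExecutor(8, 8, 60L, TimeUnit.SECONDS,
                    new LinkedBlockingQueue<Runnable>());
        }

        // Change under consideration: a SynchronousQueue hands tasks directly to a
        // thread, so the pool grows towards maximumPoolSize whenever all threads are busy.
        static ExecutorService proposed() {
            return new ThreadPoolExecutor(8, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS,
                    new SynchronousQueue<Runnable>());
        }
    }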
Any suggestions, best practices or useful references?
Cheers,
Robin
Comments (4)
It sounds like you'd probably be better off limiting the queue size: does your application still behave properly when there are many requests queued (is it acceptable for all tasks to be queued for a long time, or are some more important than others)? What happens if there are still queued tasks left and the user quits the application? If the queue grows very large, is there a chance that the server will catch up (soon enough) to hide the problem completely from the user?
I'd say create one queue for requests whose response is needed to update the user interface, and keep that queue very small. If it gets too big, notify the user.
For real background tasks, keep a separate pool with a longer queue, but not an infinite one. Define graceful behavior for this pool: when it grows, or when the user wants to quit but there are tasks left, what should happen?
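A rough sketch of what I mean (pool sizes, queue capacities and the notification hook are placeholders to tune for your app):

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class ClientPools {

        // Requests whose responses drive the UI: small bounded queue, and a
        // rejection handler that tells the user instead of queueing silently.
        static final ThreadPoolExecutor UI_REQUESTS = new ThreadPoolExecutor(
                4, 4, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(10),
                (task, pool) -> notifyUserServerIsBusy());

        // True background work: a longer queue, but still bounded.
        static final ThreadPoolExecutor BACKGROUND = new ThreadPoolExecutor(
                4, 4, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(200));

        // Placeholder for however the UI reports "server busy" to the user.
        static void notifyUserServerIsBusy() {
            System.out.println("Server is busy; some features are temporarily disabled.");
        }
    }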
In general, network latencies are easily orders of magnitude higher than anything that can be happening in regards to memory allocation or thread management on the client side. So, as a general rule, if you are running into a performance bottleneck, look first and foremost at the network link.
If the issue is that your server simply cannot keep up with the requests from the clients, bumping up the threads on the client side is not going to help matters: you'll simply go from having 8 threads waiting for a response to having more threads waiting (and you may even aggravate the server-side issues by increasing the load due to the higher number of connections it is managing).
Both of the concurrent queues in the JDK are high performers; the choice really boils down to usage semantics. If you have non-blocking plumbing, then it is more natural to use the non-blocking queue. If you don't, then using the blocking queues makes more sense. (You can always specify Integer.MAX_VALUE as the limit.) If FIFO processing is not a requirement, make sure you do not specify fair ordering, as that will entail a substantial performance hit.
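To make the last two points concrete (both constructors shown are standard JDK overloads):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.SynchronousQueue;

    public class QueueChoices {
        // A blocking queue with an explicit capacity; Integer.MAX_VALUE makes it
        // effectively unbounded while still using the bounded constructor.
        BlockingQueue<Runnable> blocking = new LinkedBlockingQueue<Runnable>(Integer.MAX_VALUE);

        // SynchronousQueue defaults to non-fair ordering; passing true requests
        // fair (FIFO) ordering of waiting threads at a noticeable throughput cost.
        SynchronousQueue<Runnable> nonFair = new SynchronousQueue<Runnable>();
        SynchronousQueue<Runnable> fair = new SynchronousQueue<Runnable>(true);
    }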
As alphazero said, if you've got a bottleneck, your number of client side waiting jobs will continue to grow regardless of what approach you use.
The real question is how you want to deal with the bottleneck. Or more correctly, how you want your users to deal with the bottleneck.
If you use an unbounded queue, then you don't get feedback that the bottleneck has occurred. And in some applications, this is fine: if the user is kicking off asynchronous tasks, then there's no need to report a backlog (assuming it eventually clears). However, if the user needs to wait for a response before doing the next client-side task, this is very bad.
If you use LinkedBlockingQueue.offer() on a bounded queue, then you'll immediately get a response that says the queue is full, and can take action such as disabling certain application features, popping a dialog, whatever. This will, however, require more work on your part, particularly if requests can be submitted from multiple places. I'd suggest, if you don't have it already, you create a GUI-aware layer over the server queue to provide common behavior.
And, of course, never ever call LinkedBlockingQueue.put() from the event thread (unless you don't mind a hung client, that is).
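For example, something like this at the point where requests are submitted (the queue capacity and the dialog call are just stand-ins for whatever your GUI layer does):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import javax.swing.JOptionPane;

    public class RequestSubmitter {

        // Bounded work queue feeding the server-request pool (capacity is illustrative).
        private final BlockingQueue<Runnable> workQueue = new LinkedBlockingQueue<Runnable>(50);

        // Safe to call from the event thread: offer() returns false immediately when
        // the queue is full, where put() would block and hang the event dispatch thread.
        public boolean submit(Runnable request) {
            if (workQueue.offer(request)) {
                return true;
            }
            // Queue full: surface it to the user (or disable features, etc.)
            // instead of letting work pile up silently.
            JOptionPane.showMessageDialog(null, "The server is busy; please try again shortly.");
            return false;
        }
    }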
Why not create an unbounded queue, but reject tasks (and maybe even inform the user that the server is busy (app dependent!)) when the queue reaches a certain size? You can then log this event and find out what happened on the server side for the backup to occur. Additionally, unless you are connecting to multiple remote servers, there is probably not much point having more than a couple of threads in the pool, although this does depend on your app, what it does, and who it talks to.
Having an unbounded pool is usually dangerous as it generally doesn't degrade gracefully. Better to log the problem, raise an alert, prevent further actions from being queued, and figure out how to scale the server side (if the problem is there) to prevent this from happening again.
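A rough sketch of that idea (the threshold, logging, and pool size are app-dependent placeholders):

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;
    import java.util.logging.Logger;

    public class ThrottledClient {

        private static final Logger LOG = Logger.getLogger(ThrottledClient.class.getName());
        private static final int QUEUE_WARN_SIZE = 100; // illustrative threshold

        // Unbounded queue, and just a couple of threads for a single remote server.
        private final LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>();
        private final ThreadPoolExecutor pool =
                new ThreadPoolExecutor(2, 2, 60L, TimeUnit.SECONDS, queue);

        public boolean submit(Runnable serverRequest) {
            if (queue.size() >= QUEUE_WARN_SIZE) {
                // The server is falling behind: refuse new work, log it, and alert the user.
                LOG.warning("Rejected request; " + queue.size() + " tasks already queued");
                return false;
            }
            pool.execute(serverRequest);
            return true;
        }
    }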