When is it a good idea to specify separate core and maximum pool sizes in a ThreadPoolExecutor?
I'm trying to understand the point in specifying separate core and maximum pool sizes for Java 5's ThreadPoolExecutor. My understanding is that the number of threads is only increased once the queue is full, which seems a bit late (at least with larger queues).
Isn't it that I'm either happy to allocate a larger number of threads to the tasks, in which case I might just increase the core pool size; or I am not really willing to do so, in which case I should rather have a larger queue? What is a scenario where separate core and maximum pool sizes are useful?
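For reference, this is the constructor being discussed. Below is a minimal sketch of a pool with separate core and maximum sizes; the class name `WebPool`, the `core * 4` maximum, and the queue capacity of 100 are illustrative assumptions, not recommendations from the answer:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class WebPool {
    static ThreadPoolExecutor create() {
        // Illustrative sizing: core threads matched to the machine,
        // with headroom (core * 4) for bursts of load.
        int core = Runtime.getRuntime().availableProcessors();
        return new ThreadPoolExecutor(
                core,          // corePoolSize: threads kept around even when idle
                core * 4,      // maximumPoolSize: upper bound under burst load
                60, TimeUnit.SECONDS,  // idle non-core threads are reclaimed after 60 s
                new ArrayBlockingQueue<Runnable>(100)); // bounded queue: the pool only
                                                        // grows past core when this fills
    }
}
```

Note that with an unbounded queue (e.g. a plain `LinkedBlockingQueue`), the queue never reports full, so the pool never grows past the core size and the maximum is effectively ignored.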
There is a discussion of this here.
The difference is that while you are below the core pool size, each new task creates a new thread regardless of any idle threads in the pool. Once you have reached the core pool size, the number of threads only increases further when the queue is full, up to the maximum.
The classic example is a system whose concurrent load you can't predict exactly (e.g. a web server). This feature lets you specify a core set of threads, perhaps based on the number of cores your machine has, while still allowing for more load than you anticipated.
This is specifically useful if you have more I/O load than you expected, and the threads in your pool spend a lot of time blocking. Your queue can easily fill up without having a large concurrent load in this scenario, and it is easily fixed by adding a couple of new threads to service some more concurrent requests.
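The growth rule described above can be observed directly. In this sketch (the class name and the tiny sizes core=1, max=3, queue capacity 2 are chosen purely for demonstration), blocking tasks pin the core thread, and the pool size stays at the core size until the queue fills:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolGrowthDemo {
    // Returns {pool size while the queue still has room, pool size after it fills}.
    static int[] demo() throws InterruptedException {
        final CountDownLatch release = new CountDownLatch(1);
        // Each task blocks until released, simulating long I/O waits.
        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };

        // core=1, max=3, bounded work queue of capacity 2
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 3, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(2));

        pool.execute(blocker);   // below core size: a core thread is created and blocks
        pool.execute(blocker);   // core size reached: task is queued instead
        pool.execute(blocker);   // queued; the queue is now full
        int before = pool.getPoolSize();  // still 1: no growth while the queue accepts tasks

        pool.execute(blocker);   // queue full, below max: a non-core thread is created
        int after = pool.getPoolSize();   // now 2

        release.countDown();     // unblock all tasks and clean up
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return new int[]{before, after};
    }

    public static void main(String[] args) throws Exception {
        int[] sizes = demo();
        System.out.println("pool size while queue has room: " + sizes[0]);
        System.out.println("pool size after queue is full:  " + sizes[1]);
    }
}
```

This is exactly the "queue fills up under I/O-bound load" scenario: the fourth task would otherwise sit in a full queue, and the extra thread is what lets it run concurrently.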