Can bucketization be applied in Spring Boot (Tomcat)?
I expose two APIs: /endpoint/A and /endpoint/B.
@GetMapping("/endpoint/A")
public ResponseEntity<ResponseA> controllerA() throws InterruptedException {
    ResponseA responseA = serviceA.responseClient();
    return ResponseEntity.ok().body(responseA);
}

@GetMapping("/endpoint/B")
public ResponseEntity<ResponseB> controllerB() throws InterruptedException {
    ResponseB responseB = serviceB.responseClient();
    return ResponseEntity.ok().body(responseB);
}
The service behind endpoint A internally calls /endpoint/C, and the service behind endpoint B internally calls /endpoint/D.
As the external service /endpoint/D is taking more time, i.e. getting a response from /endpoint/A takes more time, all the threads get stuck, and that affects /endpoint/B.
I tried to solve this using an executor service with the following implementation:
@Bean(name = "serviceAExecutor")
public ThreadPoolTaskExecutor serviceAExecutor() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setCorePoolSize(100);
    taskExecutor.setMaxPoolSize(120);
    taskExecutor.setQueueCapacity(50);
    taskExecutor.setKeepAliveSeconds(120);
    taskExecutor.setThreadNamePrefix("serviceAExecutor");
    return taskExecutor;
}
Even after implementing this, if I receive more than 200 simultaneous requests on /endpoint/A (greater than the default maximum number of threads in the Tomcat server), then I get no responses from /endpoint/B, because all the threads are busy getting responses for endpoint A or are waiting in the queue.
Can someone please suggest whether there is any way to apply bucketization at the level of each exposed endpoint, allowing only a limited number of requests to be processed at a time and putting the rest into a bucket/queue, so that requests on the other endpoints can work properly?
Edit: the following is the solution approach.
@GetMapping("/endpoint/A")
public CompletableFuture<ResponseEntity<ResponseA>> controllerA() throws InterruptedException {
    return CompletableFuture.supplyAsync(() -> controllerHelperA());
}

@GetMapping("/endpoint/B")
public CompletableFuture<ResponseEntity<ResponseB>> controllerB() throws InterruptedException {
    return CompletableFuture.supplyAsync(() -> controllerHelperB());
}

private ResponseEntity<ResponseA> controllerHelperA() {
    ResponseA responseA = serviceA.responseClient();
    return ResponseEntity.ok().body(responseA);
}

private ResponseEntity<ResponseB> controllerHelperB() {
    ResponseB responseB = serviceB.responseClient();
    return ResponseEntity.ok().body(responseB);
}
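One caveat with the edit above: without an explicit executor argument, CompletableFuture.supplyAsync runs on the shared ForkJoinPool.commonPool(), so both endpoints still compete for the same threads. Passing a dedicated executor per endpoint (the serviceAExecutor bean, plus a hypothetical serviceBExecutor) keeps the pools isolated. A minimal plain-Java sketch of that isolation, outside Spring:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BulkheadSketch {
    public static void main(String[] args) throws Exception {
        // One dedicated pool per endpoint, analogous to serviceAExecutor / serviceBExecutor beans.
        ExecutorService poolA = Executors.newFixedThreadPool(2);
        ExecutorService poolB = Executors.newFixedThreadPool(2);

        // Saturate pool A with slow tasks (stands in for the slow external call).
        for (int i = 0; i < 4; i++) {
            CompletableFuture.supplyAsync(() -> {
                try {
                    Thread.sleep(2000);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                return "slow";
            }, poolA);
        }

        // Pool B is unaffected by pool A's backlog and answers immediately.
        String fast = CompletableFuture.supplyAsync(() -> "fast", poolB)
                .get(500, TimeUnit.MILLISECONDS);
        System.out.println(fast);

        poolA.shutdownNow();
        poolB.shutdown();
    }
}
```

In the controllers this would mean `CompletableFuture.supplyAsync(() -> controllerHelperA(), serviceAExecutor)`, with a second, separately sized executor bean for endpoint B.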
1 Answer
Spring MVC supports the async servlet API introduced in Servlet API 3.0. To make it easier, when your controller returns a Callable, CompletableFuture, or DeferredResult, it will run in a background thread and free the request-handling thread for further processing.

Now this will be executed in a background thread. Depending on your version of Spring Boot, and on whether you have configured your own TaskExecutor, it will use either a SimpleAsyncTaskExecutor (which will issue a warning in your logs), a ThreadPoolTaskExecutor which is configurable through the spring.task.execution namespace, or your own TaskExecutor, but that requires additional configuration.

If you don't have a custom TaskExecutor defined and are on a relatively recent version of Spring Boot, 2.1 or up (IIRC), you can use the following properties to configure the TaskExecutor.
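The referenced properties were lost in extraction; a sketch of what they might look like (values are illustrative, mirroring the bean from the question):

```properties
spring.task.execution.pool.core-size=100
spring.task.execution.pool.max-size=120
spring.task.execution.pool.queue-capacity=50
spring.task.execution.pool.keep-alive=120s
spring.task.execution.thread-name-prefix=mvc-task-
```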
Generally this will be used to execute Spring MVC tasks in the background as well as regular @Async tasks.

If you want to explicitly configure which TaskExecutor to use for your web processing, you can create a WebMvcConfigurer and implement the configureAsyncSupport method. You can use an @Qualifier on the constructor argument to specify which TaskExecutor you want to use.
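Put together, such a configuration might look like this (a sketch, assuming the serviceAExecutor bean from the question and Spring MVC on the classpath):

```java
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import org.springframework.web.servlet.config.annotation.AsyncSupportConfigurer;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class AsyncWebConfig implements WebMvcConfigurer {

    private final ThreadPoolTaskExecutor taskExecutor;

    // @Qualifier selects which executor bean Spring MVC should use
    // for async request processing (Callable/CompletableFuture returns).
    public AsyncWebConfig(@Qualifier("serviceAExecutor") ThreadPoolTaskExecutor taskExecutor) {
        this.taskExecutor = taskExecutor;
    }

    @Override
    public void configureAsyncSupport(AsyncSupportConfigurer configurer) {
        configurer.setTaskExecutor(taskExecutor);
    }
}
```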