Coordinating multiple outgoing requests in a reactive way

Posted 2025-01-31 10:55:04


This is more of a best-practice question.
In my current system (a monolith), a single incoming HTTP API request might need to gather similarly structured data from several backend sources, aggregate it, and only then return the data to the client in the API response.

In the current implementation I simply use a thread pool to send all requests to the backend sources in parallel, and a countdown latch of sorts to know when all the requests have returned.
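
For context, a minimal sketch of that blocking baseline (BackendSource, Data, fetchFromSource() and the final aggregation are hypothetical placeholders):

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Fan out with a thread pool, block until every request has completed.
List<Data> fetchAll(List<BackendSource> sources) throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(sources.size());
    CountDownLatch latch = new CountDownLatch(sources.size());
    List<Data> results = Collections.synchronizedList(new ArrayList<>());

    for (BackendSource source : sources) {
        pool.submit(() -> {
            try {
                results.add(fetchFromSource(source)); // blocking HTTP call
            } finally {
                latch.countDown(); // count down on success or failure
            }
        });
    }

    latch.await();   // block the calling thread until all requests return
    pool.shutdown();
    return results;  // aggregated afterwards
}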

I am trying to figure out the best practice for transforming what I described above onto reactive stacks like Vert.x/Quarkus. I want to keep the reactiveness of the service that accepts this API call, calls multiple (similar) backend sources via HTTP, and aggregates the data.

I can roughly guess that I can use something like RESTEasy Reactive for the incoming request, and maybe the MP REST Client for the backend requests (not sure whether it's reactive), but I am not sure what can replace my thread pool for parallel execution, and what the best way is to aggregate the data that comes back.

I assume that with a reactive HTTP client I can invoke all the backend sources in a loop, and because it's reactive it will 'feel' like parallel work. And maybe the returned data should be aggregated via the Stream API (to join streams of data)? To be honest, I am not sure.
I know it's a long question, but some pointers would be great.

Thanks!


Comments (1)

猫瑾少女 2025-02-07 10:55:04


You can drop the thread pool; you don't need it to invoke your backend services in parallel.

Yes, the MP RestClient is reactive. Let's say you have this service which invokes a backend to get a comic villain:

// Imports assume Quarkus 3.x (jakarta.*); Quarkus 2.x uses javax.ws.rs.* instead.
import io.smallrye.common.annotation.NonBlocking;
import io.smallrye.mutiny.Uni;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import org.eclipse.microprofile.faulttolerance.CircuitBreaker;
import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

// Reactive REST client: each call returns a lazy Uni instead of blocking.
@RegisterRestClient(configKey = "villain-service")
public interface VillainService {

    @GET
    @Path("/")
    @NonBlocking
    @CircuitBreaker
    Uni<Villain> getVillain();

}

And a similar one for heroes, HeroService; by analogy it might look like this (the config key and path are assumptions):
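
@RegisterRestClient(configKey = "hero-service") // hypothetical config key
public interface HeroService {

    @GET
    @Path("/")
    @NonBlocking
    @CircuitBreaker
    Uni<Hero> getRandomHero();

}

You can inject both clients in your endpoint class, retrieve a villain and a hero, and then compute the fight: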

@Path("/api")
public class Api {

    @RestClient
    VillainService villains;

    @RestClient
    HeroService heroes;

    @Inject
    FightService fights;

    @GET
    public Uni<Fight> fight() {
        Uni<Villain> villain = villains.getVillain();
        Uni<Hero> hero = heroes.getRandomHero();

        return Uni.combine().all().unis(hero, villain).asTuple()
                .chain(tuple -> {
                    Hero h = tuple.getItem1();
                    Villain v = tuple.getItem2();

                    return fights.computeResult(h, v);
                });
    }
}
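
For the case in the question where the number of similar backend sources is dynamic, you can build the calls in a loop and join them. A minimal sketch under assumptions (the clients list, Summary, and the aggregation logic are hypothetical; Uni.join() needs Mutiny 1.2 or later, which recent Quarkus versions ship):

import java.util.List;
import java.util.stream.Collectors;

import io.smallrye.mutiny.Uni;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;

// Hypothetical sketch: fan out to N similar backends and aggregate.
@Path("/aggregate")
public class AggregateApi {

    List<VillainService> clients; // one reactive client per backend source

    @GET
    public Uni<Summary> aggregateAll() {
        // Build one lazy Uni per backend; nothing is sent yet.
        List<Uni<Villain>> calls = clients.stream()
                .map(VillainService::getVillain)
                .collect(Collectors.toList());

        // join().all() subscribes to every Uni at once, so all HTTP requests
        // run concurrently. andFailFast() fails on the first error; use
        // andCollectFailures() to wait for every outcome instead.
        return Uni.join().all(calls).andFailFast()
                .map(this::aggregate);
    }

    private Summary aggregate(List<Villain> villains) {
        return new Summary(villains); // placeholder aggregation logic
    }
}

This replaces both the thread pool (the event loop multiplexes the concurrent requests) and the countdown latch (the join emits once all Unis have completed).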