Can I throttle the requests made by a distributed application?

Posted 2024-08-23 05:20:04


My application makes Web Service requests; there is a max rate of requests the provider will handle, so I need to throttle them down.

When the app ran on a single server, I used to do it at the application level: an object that keeps track of how many requests have been made so far, and waits if the current request would exceed the maximum allowed load.

Now, we're migrating from a single server to a cluster, so there are two copies of the application running.

  • I can't keep checking for the max load at the application code, because the two nodes combined might exceed the allowed load.
  • I can't simply reduce the load on each server, because if the other node is idle, the first node can send out more requests.

This is a Java EE 5 environment. What is the best way to throttle the requests the application sends out?
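For reference, the single-server throttle the question describes can be sketched like this. This is a minimal illustration, not the asker's actual code; the class name and parameters are invented:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sliding-window throttle: records the timestamp of each request and blocks
// when the next request would exceed maxPerWindow within windowMillis.
public class WindowThrottle {
    private final int maxPerWindow;
    private final long windowMillis;
    private final Deque<Long> timestamps = new ArrayDeque<Long>();

    public WindowThrottle(int maxPerWindow, long windowMillis) {
        this.maxPerWindow = maxPerWindow;
        this.windowMillis = windowMillis;
    }

    // Blocks until the request fits under the limit, then records it.
    public synchronized void acquire() {
        while (true) {
            long now = System.currentTimeMillis();
            // Drop timestamps that have slid out of the window.
            while (!timestamps.isEmpty() && now - timestamps.peekFirst() >= windowMillis) {
                timestamps.pollFirst();
            }
            if (timestamps.size() < maxPerWindow) {
                timestamps.addLast(now);
                return;
            }
            // Wait until the oldest recorded request leaves the window.
            try {
                wait(windowMillis - (now - timestamps.peekFirst()));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
        }
    }
}
```

As the question notes, an in-process object like this stops working once two copies of the application run on separate JVMs, which is what the answers below address.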


6 comments

简单 2024-08-30 05:20:04


Since you are already in a Java EE environment, you can create an MDB that handles all requests to the web service based on a JMS queue. The instances of the application simply post their requests to the queue, and the MDB receives them and calls the web service.

The queue can be configured with the appropriate number of sessions, which limits concurrent access to your web service, so your throttling is handled via the queue configuration.

The results can be returned via another queue (or even a queue per application instance).
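The effect of capping the MDB's session count can be illustrated in plain Java. This is a hypothetical stand-in, not container code: a shared queue plus a fixed pool of consumer threads playing the role of the MDB sessions:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// A shared queue plus a fixed pool of consumer threads: no matter how many
// producers post requests, at most `sessions` web-service calls run at once.
public class QueueThrottle {
    private final BlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>();
    private final AtomicInteger active = new AtomicInteger();
    private final AtomicInteger peak = new AtomicInteger();

    public QueueThrottle(int sessions) {
        for (int i = 0; i < sessions; i++) {
            Thread consumer = new Thread(() -> {
                try {
                    while (true) {
                        Runnable job = queue.take();   // wait for the next request
                        int now = active.incrementAndGet();
                        peak.accumulateAndGet(now, Math::max);
                        try {
                            job.run();                 // the actual web-service call
                        } finally {
                            active.decrementAndGet();
                        }
                    }
                } catch (InterruptedException e) {
                    // consumer shut down
                }
            });
            consumer.setDaemon(true);
            consumer.start();
        }
    }

    // Any application instance can post a request; a free consumer picks it up.
    public void submit(Runnable webServiceCall) {
        queue.add(webServiceCall);
    }

    // Highest number of calls observed running concurrently.
    public int peakConcurrency() {
        return peak.get();
    }
}
```

In the real Java EE setup, the queue would be a container-managed JMS destination and the consumer pool would be the MDB's configured session/instance pool, so both cluster nodes share one bound.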

青巷忧颜 2024-08-30 05:20:04


The N nodes need to communicate. There are various strategies:

  • broadcast: each node broadcasts to everybody else that it's making a call, and all other nodes take that into account. Nodes are equal and each maintains its own copy of the global count (each node knows about every other node's calls).
  • master node: one node is special: it's the master, and all other nodes ask the master for permission before making a call. The master is the only one that knows the global count.
  • dedicated master: same as master, but the 'master' doesn't make calls itself; it is just a service that keeps track of calls.

Depending on how far you anticipate scaling later, one strategy or the other may be best. For 2 nodes the simplest one is broadcast, but as the number of nodes increases the problems start to mount (you'll be spending more time broadcasting and responding to broadcasts than actually doing WS requests).

How the nodes communicate is up to you. You can open a TCP pipe, you can broadcast UDP, you can build a fully-fledged WS for this purpose alone, you can use a file-share protocol. Whatever you do, you are now no longer inside a single process, so all the fallacies of distributed computing apply.
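The master-node strategy can be sketched with an in-process object standing in for what would really be a remote service reached over one of the transports above (names invented):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// The master keeps the only global sliding-window count; nodes may call the
// web service only when requestPermission(...) returns true.
public class MasterThrottle {
    private final int maxPerWindow;
    private final long windowMillis;
    private final Deque<Long> calls = new ArrayDeque<Long>();

    public MasterThrottle(int maxPerWindow, long windowMillis) {
        this.maxPerWindow = maxPerWindow;
        this.windowMillis = windowMillis;
    }

    // True if the caller may make one call now; false means "back off and retry".
    public synchronized boolean requestPermission(String nodeId) {
        long now = System.currentTimeMillis();
        // Slide the window: forget calls older than windowMillis.
        while (!calls.isEmpty() && now - calls.peekFirst() >= windowMillis) {
            calls.pollFirst();
        }
        if (calls.size() < maxPerWindow) {
            calls.addLast(now);
            return true;
        }
        return false;
    }
}
```

Because only the master holds the count, an idle node's unused capacity is automatically available to the busy one, which is exactly the property the question asks for.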

浪菊怪哟 2024-08-30 05:20:04


There are many ways of doing this: you might have a "Coordination Agent" which is responsible for handing "tokens" to the servers. Each "token" represents permission to perform a task, etc. Each application needs to request "tokens" in order to place calls.

Once an application depletes its tokens, it must ask for some more before proceeding to hit the Web Service again.

Of course, this all gets complicated when there are requirements regarding the timing of the calls each application makes, because of concurrency towards the Web Service.

You could rely on RabbitMQ as Messaging framework: Java bindings are available.
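The batch token handout could look roughly like this. An illustrative sketch only: the real agent would sit behind RabbitMQ or another transport, and the class and method names are invented:

```java
// The Coordination Agent hands out batches of tokens from a global budget;
// each application spends its batch locally and comes back when depleted.
public class TokenAgent {
    private int tokensLeft;

    public TokenAgent(int globalBudget) {
        this.tokensLeft = globalBudget;
    }

    // Hand at most batchSize tokens to a requesting node; 0 means "wait and retry".
    public synchronized int requestTokens(int batchSize) {
        int granted = Math.min(batchSize, tokensLeft);
        tokensLeft -= granted;
        return granted;
    }

    // Called by a timer at the start of each rate-limit period.
    public synchronized void refill(int globalBudget) {
        tokensLeft = globalBudget;
    }
}
```

Handing tokens out in batches keeps the agent off the hot path: a node only talks to it once per batch rather than once per call, at the cost of some slack when a node holds unused tokens.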

羅雙樹 2024-08-30 05:20:04


I recommend using beanstalkd to periodically pump a collection of requests (jobs) into a tube (queue), each with an appropriate delay. Any number of "worker" threads or processes will wait for the next request to be available, and if a worker finishes early it can pick up the next request. The downside is that there isn't any explicit load balancing between workers, but I have found that the distribution of requests out of the queue is well balanced.
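The pump-with-delays idea can be sketched with the JDK's `DelayQueue` standing in for a beanstalkd tube (illustrative only, not the beanstalkd client API):

```java
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

// A job that becomes "ready" only after its delay elapses; DelayQueue.take()
// hands workers the next ready job, spacing out the web-service calls.
public class DelayedJob implements Delayed {
    public final String payload;
    private final long readyAt;

    public DelayedJob(String payload, long delayMillis) {
        this.payload = payload;
        this.readyAt = System.currentTimeMillis() + delayMillis;
    }

    public long getDelay(TimeUnit unit) {
        return unit.convert(readyAt - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
    }

    public int compareTo(Delayed other) {
        return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                            other.getDelay(TimeUnit.MILLISECONDS));
    }
}
```

A producer would periodically pump a batch in with staggered delays, e.g. `tube.add(new DelayedJob("req-" + i, i * 100L))`, and any number of workers would loop on `tube.take()`, which blocks until the next job is ready.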

今天小雨转甜 2024-08-30 05:20:04


This is an interesting problem, and the difficulty of the solution depends to a degree on how strict you want to be on the throttling.

My usual solution to this is JBossCache, partly because it comes packaged with JBoss AppServer, but also because it handles the task rather well. You can use it as a kind of distributed hashmap, recording the usage statistics at various degrees of granularity. Updates to it can be done asynchronously, so it doesn't slow things down.

JBossCache is usually used for heavy-duty distributed caching, but I rather like it for these lighter-weight jobs too. It's pure java, and requires no mucking about with the JVM (unlike Terracotta).
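The usage-statistics idea can be sketched with a plain `ConcurrentMap` where JBossCache would supply a replicated one. This is a local stand-in with invented names, not JBossCache's API:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

// Each node bumps a counter keyed by time bucket and checks the shared total
// before calling out; with a replicated cache, both nodes see the same counts.
public class UsageStats {
    private final ConcurrentMap<Long, AtomicInteger> counts = new ConcurrentHashMap<Long, AtomicInteger>();
    private final long bucketMillis;

    public UsageStats(long bucketMillis) {
        this.bucketMillis = bucketMillis;
    }

    private long bucket() {
        return System.currentTimeMillis() / bucketMillis;
    }

    // Record one call in the current time bucket and return the new total.
    public int record() {
        AtomicInteger c = counts.computeIfAbsent(bucket(), k -> new AtomicInteger());
        return c.incrementAndGet();
    }

    // True if another call would stay within the per-bucket limit.
    public boolean underLimit(int max) {
        AtomicInteger c = counts.get(bucket());
        return c == null || c.get() < max;
    }
}
```

Note the caveat the answer hints at: if updates replicate asynchronously, both nodes can briefly undercount each other, so the throttle is approximate unless you tighten the consistency settings.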

请叫√我孤独 2024-08-30 05:20:04


Hystrix was designed for pretty much the exact scenario you're describing. You can define a thread pool size for each service so you have a set maximum number of concurrent requests, and it queues up requests when the pool is full. You can also define a timeout for each service, and when a service starts exceeding its timeout, Hystrix will reject further requests to that service for a short period of time in order to give the service a chance to get back on its feet. There's also real-time monitoring of the entire cluster through Turbine.
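The bounded-pool-plus-rejection pattern Hystrix implements can be sketched with the JDK alone (this is not Hystrix's actual API, just the core idea):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// A fixed number of permits caps concurrent calls to one service; callers
// that cannot get a permit within a short wait are rejected instead of piling up.
public class ServiceGate {
    private final Semaphore permits;
    private final long waitMillis;

    public ServiceGate(int maxConcurrent, long waitMillis) {
        this.permits = new Semaphore(maxConcurrent);
        this.waitMillis = waitMillis;
    }

    // Runs the call if a permit frees up in time; returns false on rejection.
    public boolean call(Runnable webServiceCall) throws InterruptedException {
        if (!permits.tryAcquire(waitMillis, TimeUnit.MILLISECONDS)) {
            return false; // fast-fail, like Hystrix rejecting when saturated
        }
        try {
            webServiceCall.run();
            return true;
        } finally {
            permits.release();
        }
    }
}
```

Note that, like the per-server approach the question rules out, a per-node gate only bounds concurrency on that node; Hystrix itself works per JVM too, so cluster-wide limits still need the gate sized per node or a shared coordinator.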
