Prioritizing Erlang nodes

Posted on 2024-07-21 06:44:03

Assuming I have a cluster of n Erlang nodes, some of which may be on my LAN while others are connected over a WAN (that is, via the Internet), what are suitable mechanisms to cater for a) different bandwidth availability/behavior (for example, induced latency) and b) nodes with differing computational power (or even memory constraints, for that matter)?

In other words, how do I prioritize local nodes that have lots of computational power over those that have high latency and may be less powerful? And how would I ideally prioritize high-performance remote nodes with high transmission latencies so that they specifically run those processes with a relatively large computation/transmission ratio (that is, completed work per message, per time unit)?

I am mostly thinking in terms of benchmarking each node in the cluster by sending it a benchmark process to run during initialization, so that the latencies involved in messaging can be calculated, as well as the overall computation speed (that is, using a node-specific timer to determine how fast a node completes any given task).
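
As a rough illustration of that idea, here is a minimal sketch, not a complete solution. The module name, the naive Fibonacci workload, and its argument are placeholder assumptions, and the module would have to be loaded on every node in the cluster:

```erlang
-module(node_probe).
-export([probe/1, fib/1]).

%% Time one message round trip to Node, and time a fixed CPU-bound task
%% there. Returns both figures in microseconds.
probe(Node) ->
    %% A small one-shot echo process on the remote node.
    Pid = spawn(Node, fun() ->
                    receive {ping, From} -> From ! pong end
                end),
    T0 = erlang:monotonic_time(microsecond),
    Pid ! {ping, self()},
    receive pong -> ok end,
    RttUs = erlang:monotonic_time(microsecond) - T0,
    %% timer:tc/3 runs on the remote node and times the benchmark task there.
    {CpuUs, _Result} = rpc:call(Node, timer, tc, [?MODULE, fib, [27]]),
    #{node => Node, rtt_us => RttUs, cpu_us => CpuUs}.

%% Deliberately slow naive Fibonacci, used as a CPU-bound benchmark workload.
fib(N) when N < 2 -> N;
fib(N) -> fib(N - 1) + fib(N - 2).
```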

Something like that would probably have to be done repeatedly: on the one hand to obtain representative (averaged) data, and on the other hand it might even be useful at runtime, to be able to adjust dynamically to changing conditions.
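
A hedged sketch of that repeated measurement, building on the probe above: a loop that re-probes each node on a fixed interval and keeps an exponentially weighted moving average of its latency. The interval and smoothing factor are arbitrary guesses:

```erlang
-module(probe_loop).
-export([start/1]).

-define(INTERVAL_MS, 30000).  %% re-probe every 30 seconds
-define(ALPHA, 0.2).          %% EMA smoothing factor

start(Nodes) ->
    spawn(fun() -> loop(Nodes, #{}) end).

loop(Nodes, Stats0) ->
    Stats = lists:foldl(
              fun(Node, Acc) ->
                      #{rtt_us := Rtt} = node_probe:probe(Node),
                      Prev = maps:get(Node, Acc, Rtt),
                      Acc#{Node => ?ALPHA * Rtt + (1 - ?ALPHA) * Prev}
              end, Stats0, Nodes),
    io:format("latency estimates (us): ~p~n", [Stats]),
    timer:sleep(?INTERVAL_MS),
    loop(Nodes, Stats).
```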

(In the same sense, one would probably want to prioritize locally running nodes over those running on other machines)

This would hopefully optimize internal job dispatch so that specific nodes handle specific jobs.
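
One way such measurements could feed into dispatch, as a sketch only: rank candidate nodes by a crude cost estimate that favours the local node, low latency, and fast measured compute. The cost formula, the payload scaling, and the "local bonus" are invented for illustration; Stats is assumed to hold the benchmark results gathered above:

```erlang
-module(simple_dispatch).
-export([pick_node/2]).

%% Stats :: #{node() => #{rtt_us := number(), cpu_us := number()}}
pick_node(Stats, PayloadBytes) ->
    Scored = [{cost(Node, S, PayloadBytes), Node}
              || {Node, S} <- maps:to_list(Stats)],
    {_Cost, Best} = lists:min(Scored),
    Best.

cost(Node, #{rtt_us := Rtt, cpu_us := Cpu}, PayloadBytes) ->
    %% Crude model: transfer cost grows with latency and payload size,
    %% compute cost is the node's measured benchmark time.
    Transfer = Rtt * (1 + PayloadBytes / 65536),
    LocalBonus = case Node =:= node() of
                     true  -> 0.5;   %% favour the local node
                     false -> 1.0
                 end,
    LocalBonus * (Transfer + Cpu).
```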

Comments (2)

浅沫记忆 2024-07-28 06:44:03

We've done something similar to this, on our internal LAN/WAN only (WAN being for instance San Francisco to London). The problem boiled down to a combination of these factors:

  1. The overhead in simply making a remote call over a local (internal) call
  2. The network latency to the node (as a function of the request/result payload)
  3. The performance of the remote node
  4. The compute power needed to execute the function
  5. Whether batching of calls provides any performance improvement when there is a shared "static" data set.

For 1. we assumed no overhead (it was negligible compared to the others)

For 2. we actively measured round-trip time using probe messages, and we collated information from actual calls made

For 3. we measured it on the node and had the nodes broadcast that information (this changed depending on the load currently active on the node)
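
A minimal sketch of that broadcasting idea, not the original setup: each node periodically sends its run-queue length (a rough proxy for current load) to every connected node. The registered process name and the 5-second interval are assumptions:

```erlang
-module(load_reporter).
-export([start/0]).

start() ->
    register(load_collector, spawn(fun collect/0)),
    spawn(fun report/0).

report() ->
    Load = erlang:statistics(run_queue),
    [{load_collector, Node} ! {load, node(), Load} || Node <- nodes()],
    timer:sleep(5000),
    report().

collect() ->
    receive
        {load, Node, Load} ->
            io:format("run queue on ~p: ~p~n", [Node, Load])
    end,
    collect().
```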

For 4 and 5. we worked it out empirically for the given batch

Then the caller solved to get the minimum solution for a batch of calls (in our case pricing a whole bunch of derivatives) and fired them off to the nodes in batches.
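
The solver itself isn't shown; purely as an illustration of "solving for the minimum for a batch of calls", a naive greedy assignment might look like the following. Job cost estimates are assumed to come from the latency/compute measurements discussed above, and the real system was certainly more sophisticated:

```erlang
-module(greedy_batch).
-export([assign/2]).

%% Jobs  :: [{JobId, EstimatedCost}]
%% Nodes :: [node()]
%% Returns [{JobId, node()}]: each job goes to the node with the lowest
%% accumulated estimated cost so far.
assign(Jobs, Nodes) ->
    Loads0 = [{Node, 0} || Node <- Nodes],
    {Assignment, _Loads} =
        lists:foldl(
          fun({JobId, Cost}, {Acc, Loads}) ->
                  %% Sort by accumulated cost and take the least-loaded node.
                  [{Node, Load} | Rest] = lists:keysort(2, Loads),
                  {[{JobId, Node} | Acc], [{Node, Load + Cost} | Rest]}
          end, {[], Loads0}, Jobs),
    lists:reverse(Assignment).
```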

We got much better utilization of our calculation "grid" using this technique but it was quite a bit of effort. We had the added advantage that the grid was only used by this environment so we had a lot more control. Adding in an internet mix (variable latency) and other users of the grid (variable performance) would only increase the complexity with possible diminishing returns...

绿萝 2024-07-28 06:44:03

The problem you are talking about has been tackled in many different ways in the context of Grid computing (e.g., see Condor). To discuss this more thoroughly, I think some additional information is required (homogeneity of the problems to be solved, degree of control over the nodes [i.e. is there unexpected external load, etc.?]).

Implementing an adaptive job dispatcher will usually also require adjusting the frequency with which you probe the available resources (otherwise the overhead due to probing could exceed the performance gains).

Ideally, you might be able to use benchmark tests to come up with an empirical (statistical) model that allows you to predict the computational hardness of a given problem (this requires good domain knowledge and problem features that have a high impact on execution speed and are simple to extract), and another one to predict communication overhead. Using both in combination should make it possible to implement a simple dispatcher that bases its decisions on the predictive models and improves them by taking into account actual execution times as feedback/reward (e.g., via reinforcement learning).
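
A very small sketch of that feedback idea, not a full reinforcement-learning setup: keep a multiplicative correction factor per node and nudge it toward the observed ratio of actual to predicted cost. The learning rate and the structure are placeholder assumptions:

```erlang
-module(feedback_model).
-export([predict/3, update/4]).

-define(LEARNING_RATE, 0.1).

%% Factors :: #{node() => float()}, defaulting to 1.0 for unseen nodes.
predict(Node, BaseEstimate, Factors) ->
    BaseEstimate * maps:get(Node, Factors, 1.0).

%% After the job has run, move the node's factor toward Actual/BaseEstimate.
update(Node, BaseEstimate, Actual, Factors) ->
    Old = maps:get(Node, Factors, 1.0),
    Target = Actual / BaseEstimate,
    Factors#{Node => Old + ?LEARNING_RATE * (Target - Old)}.
```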
