Erlang docs/SMP: single node vs. multiple nodes per machine or per application, and the ensuing confusion

Posted 2024-08-14 06:29:51

I'm studying Erlang's process model at the moment. I have hit a snag in a tech report (section 3, paragraph 2) on Erlang:

This explains why it in some cases can be more efficient to run several SMP VM's with one scheduler each instead on one SMP VM with several schedulers. Of course the running of several VM's require that the application can run in many parallel tasks which has no or very little communication with each other.

Now this paragraph is confusing me; I can see the uni-process, multiple-scheduler scenario, but I am failing to see multiple processes each with a single scheduler. Presumably each process would have a different node name, and this would mean a certain application, without modification, cannot be used with this model; yet the virtue of not requiring modification has been mentioned as a key feature of SMP in the report. If the multiple processes have the same node name, then performance would be disastrous due to inter-Erlang-process messaging storms -- this assumes the use of in-memory Mnesia. Is there some process model that is not introduced in the article and that I am missing here?

What is the author trying to say here? Is he trying to suggest that an application would have to be rewritten (to take multiple unique node names into account) for the multi-process, single-scheduler case?

-- edit 1: Clarification of Source of Problem --

The question has been answered through discussion; the following is an outline of the trouble I had.

The issue for this question has been that the documentation, as I recall, does not touch on a scenario of running multiple Erlang emulators per physical machine -- it has always been shown that the emulator represents your physical machine (in industrial usage); also, the scenario of having to explicitly partition a program for computational efficiency has never been considered. This sudden introduction has been the source of my woe.

The convention is still biased towards creating LOTS of processes, and the future holds many improvements for the SMP emulator for Erlang; this means that a single node per machine is still a very viable option, assuming favourable application design.

Comments (4)

土豪我们做朋友吧 2024-08-21 06:29:51

Rewrite after reading article:

This explains why it in some cases can be more efficient to run several SMP VM's with one scheduler each instead on one SMP VM with several schedulers.

  • A non-SMP VM has no locks, so it runs fast.
  • A single-scheduler SMP VM is about 10% slower, due to the cost of checking locks.
  • A multiple-scheduler SMP VM is slower again, due to taking and waiting on locks.

Of course the running of several VM's require that the application can run in many parallel tasks which has no or very little communication with each other.

  • I think: Nodes on the same server have to have different names.
  • Inter-node messaging is slower than intra-node messaging within a single VM, because it crosses OS-process boundaries.

莫多说 2024-08-21 06:29:51

If you have multiple schedulers in a single VM, they will inevitably contend for various resources (e.g. the ets meta table, the atom table, scheduler run-queues during migration, etc.) because of the inner architecture. If you have a single scheduler, contention will obviously not occur. Lock checking and acquiring will still be done, though, so running a non-SMP VM instead will yield even better performance (but requires rebuilding the VM from source).

Take a four-core machine for example. Option one means that you run four instances of the Erlang VM, each with a single scheduler, affinity set to different processor cores. Option two means running a single Erlang VM with four schedulers, each scheduler's affinity set to different processor cores.
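The two options could be launched roughly like this (a sketch, assuming Linux `taskset` for core affinity and an Erlang/OTP release of that era; the node names are illustrative, and `+sbt db` for scheduler-to-core binding only exists in later releases):

```shell
# Option one: four single-scheduler VMs, each pinned to its own core.
taskset -c 0 erl +S 1 -sname n0 -detached
taskset -c 1 erl +S 1 -sname n1 -detached
taskset -c 2 erl +S 1 -sname n2 -detached
taskset -c 3 erl +S 1 -sname n3 -detached

# Option two: one VM with four schedulers, bound to the four cores.
erl +S 4 +sbt db -sname big -detached
```

Here `+S N` sets the number of schedulers and `-sname` gives each emulator a distinct node name, which is what forces the application to be partitioned in option one.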

If you have a whole lot of independent processes to run, option one will result in better performance, because the four cores will be fully utilized (theoretically) with no shared-resource contention. In option two, by contrast, lock contention will make execution on the cores wait for each other every now and then.

On the other hand, if your processes need to chatter a lot, option two is the way to go, because intra-VM message passing is way cheaper than communication between different VMs. You gain more with this than you lose with lock contention.
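The messaging-cost difference can be sketched in Erlang shell terms (the registered name `worker` and node name `n1@localhost` are illustrative, not from the thread):

```
%% Intra-VM send: the message is copied between process heaps inside
%% one emulator -- cheap.
Pid = spawn(fun() -> receive M -> M end end),
Pid ! hello,

%% Inter-VM send: the term is serialized into the external term format
%% and shipped over the distribution channel to the other node --
%% far more expensive.
{worker, 'n1@localhost'} ! hello.
```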

埋葬我深情 2024-08-21 06:29:51

I believe the answer is in the preceding paragraph:

The SMP VM with only one scheduler is slightly slower (10%) than the non SMP VM. This is because the SMP VM need to use locks for all shared datastructures. But as long as there are no lock-conflicts the overhead caused by locking is not that high (it is the lock conflicts that takes time).

The schedulers' reliance on locks for shared data structures can impose an overhead on a given system. It seems to follow that having multiple schedulers on one SMP VM imposes a collectively greater overhead.

静若繁花 2024-08-21 06:29:51

There are some advantages to having several nodes on one physical machine.

1) Resource-locking overhead, as mentioned.

2) Fail-over. In telecom products you really don't want to have the beam come crashing down on you. If you have NIFs or linked-in drivers in your system, this might occur.

3) Memory locality. A few nodes give you a poor man's way to force processes onto a few cores. This can typically be a big boost for NUMA architectures, but also for SMP. The scheduler doesn't take NUMA into account (yet). You can spawn a process on a specific scheduler and lock it to it, so it won't migrate, but that is an undocumented feature ... or it was removed altogether. I forget.

With several nodes you will of course need a load balancer between the nodes, but that is the usual way to do it anyway: some logic that supervises the nodes.

However, the numbers from the EUC papers are over a year old [@] and I wouldn't recommend a multi-node approach if you don't really need it. The runtime system is much better at handling these types of problems today. A lot of lock overhead has been removed and the mrq-scheduler has been improved.

@ 2009's numbers look like this.

Edit:

Regarding 3), the spawn feature I mentioned is:

spawn_opt(fun() -> ... end, [{scheduler, Id}]) -> pid()
    where Id is an integer and refers to a specific scheduler.

I wouldn't recommend using it, since it's undocumented.
