Is the PVM (Parallel Virtual Machine) library widely used in HPC?

Posted on 2024-08-12 03:42:17

Has everyone migrated to MPI (message passing interface) or is PVM still widely used in supercomputers and HPC?


Comments (2)

时光病人 2024-08-19 03:42:17

My experience is that PVM is not widely utilized in high-performance computing. MPI seems widely used and something like co-array Fortran might be the path forward for massively parallel systems of the future.

I use a library called InterComm to couple physics models together as separate executables. InterComm currently utilizes PVM for communication between these coupled models. PVM and InterComm boast that they work on homogeneous and heterogeneous network environments (I've been told MPI does not support heterogeneous compute/network environments). However, this is a feature that we've never used (and I highly doubt we ever will).
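The coupling pattern described here, separately running model components exchanging data by message passing, can be sketched in plain Python using `multiprocessing` as a stand-in for the PVM transport. This is purely illustrative: the component names and the toy "response" are hypothetical, and none of this is the InterComm or PVM API.

```python
# Illustrative sketch of coupling two model components as separate
# processes via message passing. multiprocessing.Pipe stands in for
# the PVM/InterComm transport; "ocean"/"atmosphere" are toy names.
from multiprocessing import Process, Pipe

def ocean(conn):
    # Coupled component: receive a boundary field, send back a toy response.
    field = conn.recv()
    conn.send([x * 0.5 for x in field])
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    proc = Process(target=ocean, args=(child_conn,))
    proc.start()
    parent_conn.send([1.0, 2.0, 3.0])  # "atmosphere" sends its field
    response = parent_conn.recv()      # and blocks for the reply
    proc.join()
    print(response)  # [0.5, 1.0, 1.5]
```

With real PVM or MPI the two components would be independent executables communicating over task IDs or an intercommunicator rather than a pipe, but the send/receive choreography is the same.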

I have had a difficult time running PVM on academic compute environments. Some sys-admin/support-type people at reputable national computing centers have even suggested that we "simply" re-code our 20-year-old, O(10^4)-line code to use MPI, because of issues we ran into while porting it to a particular supercomputer whose router/queueing environment didn't like launching multiple parallel executables alongside PVM.

If you're at the architecture/design stage of a project, I'd recommend staying away from PVM unless you need to work on heterogeneous compute/network environments!

久光 2024-08-19 03:42:17


It may be highly site-dependent, but in my experience MPI completely dominates PVM in the (academic, at least) HPC space. You can't realistically launch a new HPC interconnect without MPI support, but PVM seems to be decidedly optional. Is there a PVM implementation for InfiniBand, for instance?
