Designing shared memory for MPI with Boost

Published 2024-10-26 11:11:06

I would like to ask if someone knows of a boost::mpi guide for beginners? (I have already read the Boost.MPI documentation on the official site.)

A bit about my system: I have a cluster of about 90 nodes; each node has two CPUs with two cores each (4 cores in total) and 4 GB of RAM.

I would like to define shared memory on each node and store there a vector, std::vector<bool> occupation;. Each process then needs to compute something based on its rank() number.

Next, all the processes need to wait until every core has finished computing, and then each sends a vector, std::vector<uint32_t> remove;, to the main process (rank() == 0), which updates the occupation vector and then sends the new occupation vector to all the nodes.

It could be that it would be better to simply use mpi.h instead of boost::mpi.

I would like to hear your opinion, since I don't have experience in this area of MPI.


Answers (2)

对你再特殊 2024-11-02 11:11:06

Consider using OpenMP for the shared-memory part if your compiler supports it, then setting up one process per node that does what four processes do in your current setup. MPI wasn't really designed for shared memory.

妖妓 2024-11-02 11:11:06


As mentioned by larsmans, you can't really do shared memory with MPI. But it sounds like you don't really need distributed shared memory; it sounds like all of the tasks need to get a copy of occupation at the start, do their calculations, send their results back to the master in the form of remove, and the master then broadcasts an updated copy of occupation. MPI can do that just fine.

A way to start would be to have the master process use broadcast to do the initial sending of data, have the worker processes use send to send the updates back to the master, and have the master recv the data from each task; when that's done, the cycle repeats itself.
