Designing shared memory for MPI using Boost
I would like to ask if anyone knows of a boost::mpi tutorial for beginners? (I have already read the Boost.MPI documentation on the official site.)

A bit about my system: I have a cluster of about 90 nodes; each node has two CPUs with two cores each (4 cores total) and 4 GB of RAM.

I would like to define shared memory on each node and store a vector `std::vector<bool> occupation;` there. Each process then needs to compute something based on its `rank()`.

Next, all the processes need to wait until every core is done computing, and then send a vector, `std::vector<uint32_t> remove;`, to the main process (`rank() == 0`), which will update the `occupation` vector and then send the new `occupation` vector to all the nodes.

It could be that it would be better to simply use `mpi.h` instead of `boost::mpi`.

I would like to hear your opinion, since I don't have experience in this area of MPI.
2 Answers
Consider using OpenMP for the shared-memory part if your compiler supports it, then set up one process per node that does what four processes do in your current setup. MPI wasn't really designed for shared memory.

As mentioned by larsmans, you can't really do shared memory with MPI. But it sounds like you don't actually need distributed shared memory: all of the tasks need to get a copy of `occupation` at the start, do their calculations, and send their results back to the master in the form of `remove`, and the master then broadcasts an updated copy of `occupation`. MPI can do that just fine.

One way to start would be to have the master process use `broadcast` to do the initial sending of the data, have the worker processes use `send` to send their updates back to the master, and have the master `recv` the data from each task; when that's done, the cycle repeats.
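The broadcast/send/recv cycle described above could be sketched with Boost.MPI roughly as follows. This is a minimal sketch, not a definitive implementation: the vector size, the iteration count, and the per-rank "computation" are placeholder assumptions, and the real work would go where the comments indicate. It needs an MPI environment (compile with `mpic++`, link against `boost_mpi` and `boost_serialization`, run with `mpirun`).

```cpp
// Sketch of one master / many workers exchanging occupation and remove
// vectors with Boost.MPI. Sizes and loop counts are example values.
#include <boost/mpi.hpp>
#include <boost/serialization/vector.hpp>
#include <cstdint>
#include <vector>

namespace mpi = boost::mpi;

int main(int argc, char* argv[]) {
    mpi::environment env(argc, argv);
    mpi::communicator world;

    const std::size_t N = 1000;          // size of occupation (assumed value)
    std::vector<bool> occupation;
    if (world.rank() == 0)
        occupation.assign(N, true);      // master initializes the data

    for (int step = 0; step < 10; ++step) {   // example: 10 iterations
        // Everyone receives the master's current copy of occupation.
        mpi::broadcast(world, occupation, 0);

        if (world.rank() != 0) {
            // Placeholder for the real per-rank computation: each worker
            // decides which entries to clear and sends them to the master.
            std::vector<std::uint32_t> remove;
            remove.push_back(static_cast<std::uint32_t>(world.rank()));
            world.send(0, /*tag=*/0, remove);
        } else {
            // Master collects one remove-list per worker and applies it.
            for (int src = 1; src < world.size(); ++src) {
                std::vector<std::uint32_t> remove;
                world.recv(src, /*tag=*/0, remove);
                for (std::uint32_t idx : remove)
                    if (idx < occupation.size())
                        occupation[idx] = false;
            }
        }
    }
    return 0;
}
```

Instead of the explicit `recv` loop, `mpi::gather` could collect all the `remove` vectors at the master in one call; the loop form is shown because it matches the answer's description most directly.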