Sum of ranks using one-sided communication, but nothing loads into the target

Posted 2025-01-20 03:50:22


I am aiming to create a sum of ranks in a ring, similar to an allreduce program, but using one-sided communication.
For example, if there are four processes in this system, the output would be:

PE0:    Sum = 6
PE2:    Sum = 6
PE3:    Sum = 6
PE1:    Sum = 6

However, with my current one-sided-communication solution, all the sums are 0.
My code so far:

#!/usr/bin/env python3
from mpi4py import MPI
import numpy as np

rcv_buf = np.empty((), dtype=np.intc) # uninitialized 0 dimensional integer array
status = MPI.Status()

comm_world = MPI.COMM_WORLD
my_rank = comm_world.Get_rank()
size = comm_world.Get_size()

right = (my_rank+1)      % size;
left  = (my_rank-1+size) % size;

snd_buf = np.array(my_rank, dtype=np.intc) # 0 dimensional integer array with 1 element initialized with the value of my_rank
sum = 0
copy = 0

# create a window
win = MPI.Win.Create(snd_buf, 1, MPI.INFO_NULL, comm=comm_world)

# we need a master process 

# sync remote get call
for i in range(size):
    win.Fence(0)
    win.Get(snd_buf, left, copy)
    win.Fence(0)
    
    sum += copy

win.Free()


print(f"PE{my_rank}:\tSum = {copy}")

I'm not sure how to check that the Get call is working properly, and if it is, whether there is any other way to load and store.


Answer by 叫思念不要吵 (2025-01-27 03:50:22):


I was using the win.Get call incorrectly. In the documentation, the first parameter of the Get call is specified as origin (BufSpec); I mistook this for the value originally in the window (snd_buf), but it is actually the buffer where you want the result to be stored. I also had to include a Put call to send the value of the rank to the next process. This makes the final code:

#!/usr/bin/env python3
from mpi4py import MPI
import numpy as np

rcv_buf = np.empty((), dtype=np.intc)  # uninitialized 0-dimensional integer array

comm_world = MPI.COMM_WORLD
my_rank = comm_world.Get_rank()
size = comm_world.Get_size()

right = (my_rank+1)      % size
left  = (my_rank-1+size) % size

# 0-dimensional integer array initialized with the value of my_rank
snd_buf = np.array(my_rank, dtype=np.intc)
sum = 0

# create a window over snd_buf
win = MPI.Win.Create(snd_buf, 1, MPI.INFO_NULL, comm=comm_world)

for i in range(size):
    win.Fence(0)
    win.Put(snd_buf, left)   # push my current value into the left neighbor's window
    win.Fence(0)
    win.Get(rcv_buf, right)  # fetch the right neighbor's freshly updated value
    win.Fence(0)
    sum += rcv_buf

print(f"PE{my_rank}:\tSum = {sum}")

win.Free()


