Using MPI_Type_contiguous and MPI_Type_create_subarray with a Fortran 90 structure
Hi,
I am trying to use a Fortran derived type like this:
type some
u ! actual code will have 17 such scalars
end type some
TYPE(some),ALLOCATABLE,DIMENSION(:) :: metvars,newmetvars
Now the aim of my test program is to send 10 numbers from one processor to another, but the starting point of these 10 numbers is my choice (for example, if I have a vector of, say, 20 numbers, I will not necessarily send the first 10 to the next processor; my choice might be elements 5 to 15). So first I use MPI_TYPE_CONTIGUOUS like this:
CALL MPI_TYPE_CONTIGUOUS(10,MPI_REAL,MPI_METVARS,ierr) ! declaring a derived datatype of the object to make it into contiguous memory
CALL MPI_TYPE_COMMIT(MPI_METVARS,ierr)
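As a sanity check (a minimal sketch, assuming `u` is a default 4-byte REAL), MPI_TYPE_SIZE can confirm what the committed type covers, which becomes relevant once the subarray type is built on top of it:

```fortran
! Sketch: confirm the extent of a committed contiguous type.
! Assumes u is a default REAL (4 bytes), so MPI_METVARS should be 40 bytes.
program check_type_size
  use mpi
  implicit none
  integer :: ierr, MPI_METVARS, type_size

  call MPI_INIT(ierr)

  call MPI_TYPE_CONTIGUOUS(10, MPI_REAL, MPI_METVARS, ierr)
  call MPI_TYPE_COMMIT(MPI_METVARS, ierr)

  ! Report how many bytes of data one element of this type describes.
  call MPI_TYPE_SIZE(MPI_METVARS, type_size, ierr)
  print *, 'MPI_METVARS size in bytes:', type_size   ! expect 40

  call MPI_TYPE_FREE(MPI_METVARS, ierr)
  call MPI_FINALIZE(ierr)
end program check_type_size
```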
I do the send/receive and was able to get the first 10 numbers to the other processor (I am testing with 2 processors):
if (rank .EQ. 0) then
   do k = 2, nz-1
      metvars(k)%u = k
      un(k) = k
   enddo
endif
After sending this, for the second part I used MPI_TYPE_CREATE_SUBARRAY:
array_size = (/20/)
array_subsize =(/10/)
array_start = (/5/)
CALL MPI_TYPE_CREATE_SUBARRAY(1,array_size,array_subsize,array_start,MPI_ORDER_FORTRAN,MPI_METVARS,newtype,ierr)
CALL MPI_TYPE_COMMIT(newtype,ierr)
array_size = (/20/)
array_subsize =(/10/)
array_start = (/0/)
CALL MPI_TYPE_CREATE_SUBARRAY(1,array_size,array_subsize,array_start,MPI_ORDER_FORTRAN,MPI_METVARS,newtype2,ierr)
CALL MPI_TYPE_COMMIT(newtype2,ierr)
if (rank .EQ. 0) then
   CALL MPI_SEND(metvars,1,newtype,1,19,MPI_COMM_WORLD,ierr)
endif
if (rank .EQ. 1) then
   CALL MPI_RECV(newmetvars,1,newtype2,0,19,MPI_COMM_WORLD,MPI_STATUS_IGNORE,ierr)
endif
I don't understand how to make this work. I get an error saying:
[flatm1001:14066] *** An error occurred in MPI_Recv
[flatm1001:14066] *** on communicator MPI_COMM_WORLD
[flatm1001:14066] *** MPI_ERR_TRUNCATE: message truncated
[flatm1001:14066] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
I use Open MPI on my local machine. I was able to make use of the subarray command without the MPI_TYPE_CONTIGUOUS part. However, I need to combine both, because in the real code I have a Fortran structure. I don't know if there is a better way to do it either. Any sort of help and suggestions are appreciated.
Thanks in advance
1 Answer
I assume your custom type contains one real, as it's not specified. You first construct a contiguous type of 10 of these variables, i.e. MPI_METVARS represents 10 contiguous reals. Now, I don't know if this is really the problem, as the code you posted might be incomplete, but the way it looks now, you construct a subarray of 10 MPI_METVARS types, meaning you effectively have 100 contiguous reals in newtype and newtype2.
The 'correct' way to handle the structure is to create a type for it with MPI_TYPE_CREATE_STRUCT; that struct type should take the place of your MPI_METVARS type.
So, please provide the correct code for your custom type and check the size of the newtype type.
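A minimal sketch of that approach (assuming the derived type holds a single default REAL; the names `sometype` and `newtype` are illustrative): build an MPI type describing one TYPE(some) element with MPI_TYPE_CREATE_STRUCT, then let the subarray count individual elements of that type rather than groups of 10.

```fortran
! Sketch: an MPI type matching TYPE(some), then a subarray over an array
! of 20 such elements selecting elements 6..15 (zero-based start 5).
program struct_subarray
  use mpi
  implicit none
  type some
     real :: u     ! assumed: one default REAL per element
  end type some
  integer :: ierr, sometype, newtype
  integer :: blocklens(1), types(1)
  integer(kind=MPI_ADDRESS_KIND) :: displs(1)
  integer :: array_size(1), array_subsize(1), array_start(1)

  call MPI_INIT(ierr)

  ! One block of one MPI_REAL at displacement 0 describes TYPE(some).
  blocklens = (/1/)
  displs    = (/0_MPI_ADDRESS_KIND/)
  types     = (/MPI_REAL/)
  call MPI_TYPE_CREATE_STRUCT(1, blocklens, displs, types, sometype, ierr)
  call MPI_TYPE_COMMIT(sometype, ierr)

  ! The subarray now counts single elements of sometype, so the
  ! subsize of 10 really means 10 structure elements, not 10 x 10 reals.
  array_size    = (/20/)
  array_subsize = (/10/)
  array_start   = (/5/)          ! zero-based offset into the array
  call MPI_TYPE_CREATE_SUBARRAY(1, array_size, array_subsize, array_start, &
                                MPI_ORDER_FORTRAN, sometype, newtype, ierr)
  call MPI_TYPE_COMMIT(newtype, ierr)

  ! ... use newtype in MPI_SEND/MPI_RECV as before ...

  call MPI_TYPE_FREE(newtype, ierr)
  call MPI_TYPE_FREE(sometype, ierr)
  call MPI_FINALIZE(ierr)
end program struct_subarray
```

With 17 scalars in the real code, the struct description would instead list each field's displacement (obtained with MPI_GET_ADDRESS) so that any compiler-inserted padding is accounted for.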