MPI struct datatype with an array
I would like to easily send a someObject in one MPI_SEND/RECV call in MPI.
type someObject
integer :: foo
real :: bar,baz
double precision :: a,b,c
double precision, dimension(someParam) :: x, y
end type someObject
I started using MPI_TYPE_STRUCT, but then realized that the sizes of the arrays x and y depend on someParam. I initially thought of nesting an MPI_TYPE_CONTIGUOUS type in the struct to represent the arrays, but I cannot seem to get this to work, if it is even possible.
! Setup description of the 1 MPI_INTEGER field
offsets(0) = 0
oldtypes(0) = MPI_INTEGER
blockcounts(0) = 1
! Setup description of the 2 MPI_REAL fields
call MPI_TYPE_EXTENT(MPI_INTEGER, extent, ierr)
offsets(1) = blockcounts(0) * extent
oldtypes(1) = MPI_REAL
blockcounts(1) = 2
! Setup description of the 3 MPI_DOUBLE_PRECISION fields
call MPI_TYPE_EXTENT(MPI_DOUBLE_PRECISION, extent, ierr)
offsets(2) = offsets(1) + blockcounts(1) * extent
oldtypes(2) = MPI_DOUBLE_PRECISION
blockcounts(2) = 3
! Setup x and y MPI_DOUBLE_PRECISION array fields
call MPI_TYPE_CONTIGUOUS(someParam, MPI_DOUBLE_PRECISION, sOarraytype, ierr)
call MPI_TYPE_COMMIT(sOarraytype, ierr)
call MPI_TYPE_EXTENT(sOarraytype, extent, ierr)
offsets(3) = offsets(2) + blockcounts(2) * extent
oldtypes(3) = sOarraytype
blockcounts(3) = 2 ! x and y
! Now Define structured type and commit it
call MPI_TYPE_STRUCT(4, blockcounts, offsets, oldtypes, sOtype, ierr)
call MPI_TYPE_COMMIT(sOtype, ierr)
What I would like to do:
...
type(someObject) :: newObject, rcvObject
double precision, dimension(someParam) :: x, y
do i=1,someParam
x(i) = i
y(i) = i
end do
newObject = someObject(1,0.0,1.0,2.0,3.0,4.0,x,y)
call MPI_SEND(newObject, 1, sOtype, 1, 1, MPI_COMM_WORLD, ierr) ! master
...
! slave would:
call MPI_RECV(rcvObject, 1, sOtype, master, MPI_ANY_TAG, MPI_COMM_WORLD, status, ierr)
WRITE(*,*) rcvObject%foo
do i=1,someParam
WRITE(*,*) rcvObject%x(i), rcvObject%y(i)
end do
...
So far I am just getting segmentation faults, without much indication of what I'm doing wrong or if this is even possible. The documentation never said I couldn't use a contiguous datatype inside a struct datatype.
2 Answers
From what I can tell, you can't nest those kinds of datatypes, and this was a completely wrong solution.
Thanks to: http://static.msi.umn.edu/tutorial/scicomp/general/MPI/mpi_data.html and http://www.osc.edu/supercomputing/training/mpi/Feb_05_2008/mpi_0802_mod_datatypes.pdf for guidance.
The right way to define the MPI_TYPE_STRUCT is as follows:
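(A minimal sketch of that definition, not the answer's verbatim listing: the x and y arrays become one block of 2*someParam plain MPI_DOUBLE_PRECISION elements instead of a nested MPI_TYPE_CONTIGUOUS; the declarations and the no-padding assumption are added here for completeness.)
integer :: sOtype, extent, ierr
integer, dimension(0:3) :: offsets, oldtypes, blockcounts
! 1 MPI_INTEGER field (foo)
offsets(0) = 0
oldtypes(0) = MPI_INTEGER
blockcounts(0) = 1
! 2 MPI_REAL fields (bar, baz)
call MPI_TYPE_EXTENT(MPI_INTEGER, extent, ierr)
offsets(1) = offsets(0) + blockcounts(0) * extent
oldtypes(1) = MPI_REAL
blockcounts(1) = 2
! 3 scalar MPI_DOUBLE_PRECISION fields (a, b, c)
call MPI_TYPE_EXTENT(MPI_REAL, extent, ierr)
offsets(2) = offsets(1) + blockcounts(1) * extent
oldtypes(2) = MPI_DOUBLE_PRECISION
blockcounts(2) = 3
! x and y together: one block of 2*someParam doubles (no nested contiguous type)
call MPI_TYPE_EXTENT(MPI_DOUBLE_PRECISION, extent, ierr)
offsets(3) = offsets(2) + blockcounts(2) * extent
oldtypes(3) = MPI_DOUBLE_PRECISION
blockcounts(3) = 2 * someParam
! Assumes the type components are laid out back-to-back with no padding
! (e.g. a SEQUENCE type); computing offsets with MPI_ADDRESS is more robust.
call MPI_TYPE_STRUCT(4, blockcounts, offsets, oldtypes, sOtype, ierr)
call MPI_TYPE_COMMIT(sOtype, ierr)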
That allows me to send and receive the object the way I originally wanted!
The MPI struct type is a big headache. If this code is not in a performance-critical part of your code, look into the MPI_PACKED type. The packing call is relatively slow (basically one function call per element you're sending!), so don't use it for very large messages, but it is fairly easy to use and very flexible in what you can send.
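For illustration, here is a hypothetical sketch of that approach for someObject (the buffer name, its crude size estimate, and the field-by-field packing order are assumptions added here; newObject, rcvObject, master, status, and someParam are from the question's code):
integer, parameter :: bufsize = 4 + 2*4 + 3*8 + 2*someParam*8 + 1024  ! crude upper bound; MPI_PACK_SIZE is the proper way to size this
character, dimension(bufsize) :: buf
integer :: pos, ierr
! Sender: pack every component into one contiguous buffer, then send the buffer.
pos = 0
call MPI_PACK(newObject%foo, 1, MPI_INTEGER, buf, bufsize, pos, MPI_COMM_WORLD, ierr)
call MPI_PACK(newObject%bar, 1, MPI_REAL, buf, bufsize, pos, MPI_COMM_WORLD, ierr)
call MPI_PACK(newObject%baz, 1, MPI_REAL, buf, bufsize, pos, MPI_COMM_WORLD, ierr)
call MPI_PACK(newObject%a, 1, MPI_DOUBLE_PRECISION, buf, bufsize, pos, MPI_COMM_WORLD, ierr)
call MPI_PACK(newObject%b, 1, MPI_DOUBLE_PRECISION, buf, bufsize, pos, MPI_COMM_WORLD, ierr)
call MPI_PACK(newObject%c, 1, MPI_DOUBLE_PRECISION, buf, bufsize, pos, MPI_COMM_WORLD, ierr)
call MPI_PACK(newObject%x, someParam, MPI_DOUBLE_PRECISION, buf, bufsize, pos, MPI_COMM_WORLD, ierr)
call MPI_PACK(newObject%y, someParam, MPI_DOUBLE_PRECISION, buf, bufsize, pos, MPI_COMM_WORLD, ierr)
call MPI_SEND(buf, pos, MPI_PACKED, 1, 1, MPI_COMM_WORLD, ierr)
! Receiver: receive the packed buffer and unpack the fields in the same order.
call MPI_RECV(buf, bufsize, MPI_PACKED, master, MPI_ANY_TAG, MPI_COMM_WORLD, status, ierr)
pos = 0
call MPI_UNPACK(buf, bufsize, pos, rcvObject%foo, 1, MPI_INTEGER, MPI_COMM_WORLD, ierr)
call MPI_UNPACK(buf, bufsize, pos, rcvObject%bar, 1, MPI_REAL, MPI_COMM_WORLD, ierr)
! ... baz, a, b, c unpacked the same way ...
call MPI_UNPACK(buf, bufsize, pos, rcvObject%x, someParam, MPI_DOUBLE_PRECISION, MPI_COMM_WORLD, ierr)
call MPI_UNPACK(buf, bufsize, pos, rcvObject%y, someParam, MPI_DOUBLE_PRECISION, MPI_COMM_WORLD, ierr)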