MPI struct datatype with an array

Published 2024-12-18 00:29:10


I would like to send a someObject in a single MPI_SEND/RECV call in MPI.

   type someObject
     integer :: foo
     real :: bar,baz
     double precision :: a,b,c
     double precision, dimension(someParam) :: x, y
   end type someObject

I started using an MPI_TYPE_STRUCT, but then realized that the sizes of the arrays x and y depend on someParam. I initially thought of nesting an MPI_TYPE_CONTIGUOUS in the struct to represent the arrays, but cannot seem to get this to work. Is this even possible?

  ! Setup description of the 1 MPI_INTEGER field
  offsets(0) = 0
  oldtypes(0) = MPI_INTEGER
  blockcounts(0) = 1
  ! Setup description of the 2 MPI_REAL fields
  call MPI_TYPE_EXTENT(MPI_INTEGER, extent, ierr)
  offsets(1) = blockcounts(0) * extent
  oldtypes(1) = MPI_REAL
  blockcounts(1) = 2
  ! Setup description of the 3 MPI_DOUBLE_PRECISION fields
  call MPI_TYPE_EXTENT(MPI_DOUBLE_PRECISION, extent, ierr)
  offsets(2) = offsets(1) + blockcounts(1) * extent
  oldtypes(2) = MPI_DOUBLE_PRECISION
  blockcounts(2) = 3
  ! Setup x and y MPI_DOUBLE_PRECISION array fields
  call MPI_TYPE_CONTIGUOUS(someParam, MPI_DOUBLE_PRECISION, sOarraytype, ierr)
  call MPI_TYPE_COMMIT(sOarraytype, ierr)
  call MPI_TYPE_EXTENT(sOarraytype, extent, ierr)
  offsets(3) = offsets(2) + blockcounts(2) * extent
  oldtypes(3) = sOarraytype
  blockcounts(3) = 2 ! x and y

  ! Now Define structured type and commit it
  call MPI_TYPE_STRUCT(4, blockcounts, offsets, oldtypes, sOtype, ierr)
  call MPI_TYPE_COMMIT(sOtype, ierr)

What I would like to do:

...
type(someObject) :: newObject, rcvObject
double precision, dimension(someParam) :: x, y
do i=1,someParam
  x(i) = i
  y(i) = i
end do
newObject = someObject(1,0.0,1.0,2.0,3.0,4.0,x,y)
call MPI_SEND(newObject, 1, sOtype, 1, 1, MPI_COMM_WORLD, ierr) ! master
...
! slave would:
call MPI_RECV(rcvObject, 1, sOtype, master, MPI_ANY_TAG, MPI_COMM_WORLD, status, ierr)
WRITE(*,*) rcvObject%foo
do i=1,someParam
  WRITE(*,*) rcvObject%x(i), rcvObject%y(i)
end do
...

So far I am just getting segmentation faults, without much indication of what I'm doing wrong or if this is even possible. The documentation never said I couldn't use a contiguous datatype inside a struct datatype.


Comments (2)

不必了 2024-12-25 00:29:10


From what I can tell, you can't nest those kinds of datatypes, and my original approach was completely wrong.

Thanks to: http://static.msi.umn.edu/tutorial/scicomp/general/MPI/mpi_data.html and http://www.osc.edu/supercomputing/training/mpi/Feb_05_2008/mpi_0802_mod_datatypes.pdf for guidance.

The right way to define the MPI_TYPE_STRUCT is as follows:

type(someObject) :: newObject, rcvObject
double precision, dimension(someParam) :: x, y
integer :: sOtype, oldtypes(0:7), blocklengths(0:7), offsets(0:7)
integer :: iextent, rextent, dpextent, ierr
data x/someParam * 0.d0/, y/someParam * 0.d0/
! setup blocklengths for /foo,bar,baz,a,b,c,x,y/
data blocklengths/1,1,1,1,1,1,someParam,someParam/
! Define MPI datatype for someObject objects
! set up extents
call MPI_TYPE_EXTENT(MPI_INTEGER, iextent, ierr)
call MPI_TYPE_EXTENT(MPI_REAL, rextent, ierr)
call MPI_TYPE_EXTENT(MPI_DOUBLE_PRECISION, dpextent, ierr)
! setup oldtypes
oldtypes(0) = MPI_INTEGER
oldtypes(1) = MPI_REAL
oldtypes(2) = MPI_REAL
oldtypes(3) = MPI_DOUBLE_PRECISION
oldtypes(4) = MPI_DOUBLE_PRECISION
oldtypes(5) = MPI_DOUBLE_PRECISION
oldtypes(6) = MPI_DOUBLE_PRECISION
oldtypes(7) = MPI_DOUBLE_PRECISION
! setup offsets
offsets(0) = 0
offsets(1) = iextent * blocklengths(0)
offsets(2) = offsets(1) + rextent*blocklengths(1)
offsets(3) = offsets(2) + rextent*blocklengths(2)
offsets(4) = offsets(3) + dpextent*blocklengths(3)
offsets(5) = offsets(4) + dpextent*blocklengths(4)
offsets(6) = offsets(5) + dpextent*blocklengths(5)
offsets(7) = offsets(6) + dpextent*blocklengths(6)
! Now define the structured type and commit it
call MPI_TYPE_STRUCT(8, blocklengths, offsets, oldtypes, sOtype, ierr)
call MPI_TYPE_COMMIT(sOtype, ierr)

That allows me to send and receive the object the way I originally wanted!
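Note that the offset arithmetic above assumes the compiler lays the derived type out with no padding between fields. As a sanity check, the same byte offsets can be recomputed outside MPI; the sketch below (plain Python, assuming extents of 4 bytes for MPI_INTEGER and MPI_REAL and 8 bytes for MPI_DOUBLE_PRECISION, and a hypothetical someParam of 10) mirrors the chain of additions in the Fortran code.

```python
# Recompute the byte offsets from the answer, under assumed extents:
# MPI_INTEGER = 4 bytes, MPI_REAL = 4 bytes, MPI_DOUBLE_PRECISION = 8 bytes.
iextent, rextent, dpextent = 4, 4, 8
someParam = 10  # hypothetical example size

# One entry per field: foo, bar, baz, a, b, c, x, y
blocklengths = [1, 1, 1, 1, 1, 1, someParam, someParam]
extents = [iextent, rextent, rextent,
           dpextent, dpextent, dpextent, dpextent, dpextent]

# Each field starts where the previous one ends (packed layout, no padding).
offsets = [0]
for length, extent in zip(blocklengths[:-1], extents[:-1]):
    offsets.append(offsets[-1] + length * extent)

print(offsets)  # [0, 4, 8, 12, 20, 28, 36, 116]
```

If the compiler inserts alignment padding before the double precision fields, these packed offsets will not match the actual memory layout, which is a classic cause of segmentation faults with extent-based offsets. Giving the type the SEQUENCE attribute, or measuring the real offsets of an actual instance with MPI_GET_ADDRESS, avoids that mismatch.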

被你宠の有点坏 2024-12-25 00:29:10


The MPI struct type is a big headache. If this code is not in a performance-critical part of your program, look into the MPI_PACKED type. The packing call is relatively slow (basically one function call per element you're sending!), so don't use it for very large messages, but it is fairly easy to use and very flexible in what you can send.
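The pack/unpack bookkeeping is language-agnostic, so it can be illustrated without MPI at all. The toy sketch below uses Python's struct module (not MPI) to show the pattern: each field of someObject is appended to a byte buffer in order, and the receiver must unpack the fields in exactly the same order. someParam = 4 and all field values are arbitrary examples.

```python
import struct

someParam = 4  # arbitrary example size
x = [float(i) for i in range(1, someParam + 1)]
y = [float(i) for i in range(1, someParam + 1)]

# "Pack": append each field to the buffer, like successive MPI_PACK calls.
buf = struct.pack("i", 1)                   # foo (integer)
buf += struct.pack("2f", 0.0, 1.0)          # bar, baz (real)
buf += struct.pack("3d", 2.0, 3.0, 4.0)     # a, b, c (double precision)
buf += struct.pack("%dd" % someParam, *x)   # x array
buf += struct.pack("%dd" % someParam, *y)   # y array

# "Unpack": read the fields back in the same order, like MPI_UNPACK,
# advancing a position counter just as MPI advances its position argument.
pos = 0
(foo,) = struct.unpack_from("i", buf, pos); pos += struct.calcsize("i")
bar, baz = struct.unpack_from("2f", buf, pos); pos += struct.calcsize("2f")
a, b, c = struct.unpack_from("3d", buf, pos); pos += struct.calcsize("3d")
rx = struct.unpack_from("%dd" % someParam, buf, pos)
pos += struct.calcsize("%dd" % someParam)
ry = struct.unpack_from("%dd" % someParam, buf, pos)

print(foo, bar, baz, a, b, c)  # 1 0.0 1.0 2.0 3.0 4.0
```

The cost model the answer describes is visible here: one pack call and one unpack call per field, which is why MPI_PACK is convenient for small, irregular messages but slow for very large ones.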
