Can an array be transposed while sending it with MPI_Type_create_subarray?

Posted 2024-10-31 12:31:02


I'm trying to transpose a matrix using MPI in C. Each process has a square submatrix, and I want to send it to the correct process (the 'opposite' one on the grid), transposing it as part of the communication.

I'm using MPI_Type_create_subarray which has an argument for the order, either MPI_ORDER_C or MPI_ORDER_FORTRAN for row-major and column-major respectively. I thought that if I sent as one of these, and received as the other, then my matrix would be transposed as part of the communication. However, this doesn't seem to happen - it just stays non-transposed.

The important part of the code is below, and the whole code file is available at this gist. Does anyone have any ideas why this isn't working? Should this approach to doing the transpose work? I'd have thought it would, having read the descriptions of MPI_ORDER_C and MPI_ORDER_FORTRAN, but maybe not.

/* ----------- DO TRANSPOSE ----------- */
/* Find the opposite co-ordinates (as we know it's a square) */
coords2[0] = coords[1];
coords2[1] = coords[0];

/* Get the rank for this process */
MPI_Cart_rank(cart_comm, coords2, &rank2);

/* Send to these new coordinates */

tag = (coords[0] + 1) * (coords[1] + 1);

/* Create new derived type to receive as */
/* MPI_Type_vector(rows_in_core, cols_in_core, cols_in_core, MPI_DOUBLE, &vector_type); */
sizes[0] = rows_in_core;
sizes[1] = cols_in_core;

subsizes[0] = rows_in_core;
subsizes[1] = cols_in_core;

starts[0] = 0;
starts[1] = 0;

MPI_Type_create_subarray(2, sizes, subsizes, starts, MPI_ORDER_FORTRAN, MPI_DOUBLE, &send_type);
MPI_Type_commit(&send_type);

MPI_Type_create_subarray(2, sizes, subsizes, starts, MPI_ORDER_C, MPI_DOUBLE, &recv_type);
MPI_Type_commit(&recv_type);


/* Send the whole tile as a single instance of the derived send_type */
MPI_Send(&array[0][0], 1, send_type, rank2, tag, cart_comm);

/* Receive from these new coordinates */
MPI_Recv(&new_array[0][0], 1, recv_type, rank2, tag, cart_comm, &status);


Answered by 寄风 on 2024-11-07 12:31:02


I would have thought this would work, too, but apparently not.

If you slog through the relevant bit of the MPI standard, where it actually defines the resulting typemap, the reason becomes clear: MPI_Type_create_subarray maps out the region that the subarray occupies within the full array, but marches through memory in linear order, so the data layout doesn't change. In other words, when the sizes equal the subsizes, the subarray is just a contiguous block of memory; and for a subarray strictly smaller than the whole array, you're only changing which subregion is sent or received, not the ordering of the data. You can see the effect when selecting just a subregion:

int sizes[]={rows,cols};
int subsizes[]={2,4};
int starts[]={1,1};

MPI_Type_create_subarray(2, sizes, subsizes, starts, MPI_ORDER_FORTRAN, MPI_INT, &ftype);
MPI_Type_commit(&ftype);

MPI_Type_create_subarray(2, sizes, subsizes, starts, MPI_ORDER_C, MPI_INT, &ctype);
MPI_Type_commit(&ctype);

MPI_Isend(&(send[0][0]), 1, ctype, 0, 1, MPI_COMM_WORLD, &reqc);
MPI_Recv(&(recvc[0][0]), 1, ctype, 0, 1, MPI_COMM_WORLD, &statusc);

MPI_Isend(&(send[0][0]), 1, ctype, 0, 1, MPI_COMM_WORLD, &reqf);
MPI_Recv(&(recvf[0][0]), 1, ftype, 0, 1, MPI_COMM_WORLD, &statusf);

/*...*/

printf("Original:\n");
printarr(send,rows,cols);
printf("\nReceived -- C order:\n");
printarr(recvc,rows,cols);
printf("\nReceived -- Fortran order:\n");
printarr(recvf,rows,cols);

gives you this:

Original:
  0   1   2   3   4   5   6 
 10  11  12  13  14  15  16 
 20  21  22  23  24  25  26 
 30  31  32  33  34  35  36 
 40  41  42  43  44  45  46 
 50  51  52  53  54  55  56 
 60  61  62  63  64  65  66 

Received -- C order:
  0   0   0   0   0   0   0 
  0  11  12  13  14   0   0 
  0  21  22  23  24   0   0 
  0   0   0   0   0   0   0 
  0   0   0   0   0   0   0 
  0   0   0   0   0   0   0 
  0   0   0   0   0   0   0 

Received -- Fortran order:
  0   0   0   0   0   0   0 
  0  11  12   0   0   0   0 
  0  13  14   0   0   0   0 
  0  21  22   0   0   0   0 
  0  23  24   0   0   0   0 
  0   0   0   0   0   0   0 
  0   0   0   0   0   0   0 

So the same data is getting sent and received; all that's really happening is that the sizes, subsizes, and starts arrays are effectively reversed.

You can transpose with MPI datatypes -- the standard even gives a couple of examples, one of which I've transliterated into C here -- but you have to create the types yourself. The good news is that it's really no longer than the subarray stuff:

MPI_Type_vector(rows, 1, cols, MPI_INT, &col);
MPI_Type_create_hvector(cols, 1, sizeof(int), col, &transpose);
MPI_Type_commit(&transpose);

MPI_Isend(&(send[0][0]), rows*cols, MPI_INT, 0, 1, MPI_COMM_WORLD, &req);
MPI_Recv(&(recv[0][0]), 1, transpose, 0, 1, MPI_COMM_WORLD, &status);
MPI_Wait(&req, MPI_STATUS_IGNORE);

MPI_Type_free(&col);
MPI_Type_free(&transpose);

printf("Original:\n");
printarr(send,rows,cols);
printf("Received\n");
printarr(recv,rows,cols);



$ mpirun -np 1 ./transpose2 
Original:
  0   1   2   3   4   5   6 
 10  11  12  13  14  15  16 
 20  21  22  23  24  25  26 
 30  31  32  33  34  35  36 
 40  41  42  43  44  45  46 
 50  51  52  53  54  55  56 
 60  61  62  63  64  65  66 
Received
  0  10  20  30  40  50  60 
  1  11  21  31  41  51  61 
  2  12  22  32  42  52  62 
  3  13  23  33  43  53  63 
  4  14  24  34  44  54  64 
  5  15  25  35  45  55  65 
  6  16  26  36  46  56  66 