How can I write a multi-dimensional array of structs to disk with MPI I/O?

Posted 2025-01-13 08:07:45


I am trying to write an array of complex numbers to disk using MPI I/O. In particular, I am trying to achieve this using the function MPI_File_set_view so that I can generalise my code for higher dimensions. Please see my attempt below.

#include <mpi.h>
#include <cstdlib>

struct complex
{
    float real=0;
    float imag=0;
};

int main(int argc, char* argv[])
{
    /* Initialise MPI parallel branch */
    int id, nprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm comm = MPI_COMM_WORLD;
    MPI_Comm_rank(comm, &id);
    MPI_Comm_size(comm, &nprocs);

    /* Create a datatype to represent structs of complex numbers */
    MPI_Datatype MPI_Complex;
    const int    lengths[2] = { 1, 1 };
    MPI_Datatype types[2] = { MPI_FLOAT, MPI_FLOAT };
    MPI_Aint     displacements[2], base_address;
    complex      dummy_complex;
    MPI_Get_address(&dummy_complex, &base_address);
    MPI_Get_address(&dummy_complex.real, &displacements[0]);
    MPI_Get_address(&dummy_complex.imag, &displacements[1]);
    displacements[0] = MPI_Aint_diff(displacements[0], base_address);
    displacements[1] = MPI_Aint_diff(displacements[1], base_address);
    MPI_Type_create_struct(2, lengths, displacements, types, &MPI_Complex);
    MPI_Type_commit(&MPI_Complex);

    /* Create a datatype to represent local arrays as subarrays of a global array */
    MPI_Datatype MPI_Complex_array;
    const int global_size[1] = { 100 };
    const int local_size[1] = { (id < nprocs-1 ? 100/nprocs : 100/nprocs + 100%nprocs) };
    const int glo_coord_start[1] = { id * (100/nprocs) };
    MPI_Type_create_subarray(1, global_size, local_size, glo_coord_start, MPI_ORDER_C,
                             MPI_Complex, &MPI_Complex_array);
    MPI_Type_commit(&MPI_Complex_array);

    /* Define and populate an array of complex numbers */
    complex *z = (complex*) malloc( sizeof(complex) * local_size[0] );
    /* ...other stuff here... */

    /* Write local data out to disk concurrently */
    MPI_Offset offset = 0;
    MPI_File file;
    MPI_File_open(comm, "complex_nums.dat", MPI_MODE_CREATE|MPI_MODE_WRONLY, MPI_INFO_NULL, &file);
    MPI_File_set_view(file, offset, MPI_Complex, MPI_Complex_array, "external", MPI_INFO_NULL);
    MPI_File_write_all(file, z, local_size[0], MPI_Complex, MPI_STATUS_IGNORE);
    MPI_File_close(&file);

    /* ...more stuff here... */
}

However, with the above code, only the local data from the process labelled id = 0 is being saved to disk. What am I doing wrong and how can I fix this? Thank you.

N.B. Please note that for this one dimensional example, I can avoid the problem by giving up on MPI_File_set_view and using something simpler like MPI_File_write_at_all. Nevertheless, I have not solved the underlying problem because I still don't understand why the above code does not work, and I would like a solution that can be generalised for multi-dimensional arrays. Your help is much appreciated in this regard. Thank you.
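
For reference, here is a minimal sketch of the simpler MPI_File_write_at_all route mentioned above (not from the original post). It assumes the same MPI_Complex datatype, partitioning, and variables as the code above, and that no file view has been set, so the offset argument is counted in bytes:

/* Sketch: explicit per-rank byte offsets instead of a file view (default view, etype = MPI_BYTE) */
MPI_Offset byte_offset = (MPI_Offset)(id * (100 / nprocs)) * sizeof(complex);
MPI_File file;
MPI_File_open(comm, "complex_nums.dat", MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &file);
MPI_File_write_at_all(file, byte_offset, z, local_size[0], MPI_Complex, MPI_STATUS_IGNORE);
MPI_File_close(&file);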


Comments (1)

淤浪 2025-01-20 08:07:45


The default behavior is not to abort when an MPI-IO subroutine fails.
So unless you change this default behavior, you should always test the error code returned by MPI-IO subroutines.
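
As an illustration (a sketch that assumes the question's file, offset, and datatype variables, plus <cstdio> for fprintf), such a check could look like this; alternatively, attaching MPI_ERRORS_ARE_FATAL to the file handle makes any I/O failure abort immediately:

/* Sketch: test the return code of an MPI-IO call */
int rc = MPI_File_set_view(file, offset, MPI_Complex, MPI_Complex_array,
                           "external", MPI_INFO_NULL);
if (rc != MPI_SUCCESS)
{
    char msg[MPI_MAX_ERROR_STRING];
    int len = 0;
    MPI_Error_string(rc, msg, &len);
    fprintf(stderr, "MPI_File_set_view failed: %s\n", msg);
    MPI_Abort(comm, rc);
}

/* Or make every I/O error on this file handle fatal: */
/* MPI_File_set_errhandler(file, MPI_ERRORS_ARE_FATAL); */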

In your case, MPI_File_set_view() fails because "external" is not a valid data representation.
I guess this is a typo and you meant "external32".
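
Concretely, the corrected call might look like the sketch below (using the question's variables). "native", "internal", and "external32" are the data representations defined by the MPI standard; some implementations have incomplete "external32" support, in which case "native" is a safe fallback when the file is read back on the same platform:

/* Use a valid data representation string, e.g. the portable "external32" */
MPI_File_set_view(file, offset, MPI_Complex, MPI_Complex_array,
                  "external32", MPI_INFO_NULL);
MPI_File_write_all(file, z, local_size[0], MPI_Complex, MPI_STATUS_IGNORE);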
