Adding MPI support to a C++ program

Posted on 2024-07-26 06:21:26


I have a program implemented in C++ to which I now want to add MPI support. There is an MPI binding for C++, with an MPI namespace and everything.

In my case, I have a specific object that is well suited to being run as a parallel process on the cluster.

My questions are:

  • Has anyone done something like this before? Can I get some advice on how best to implement this?
  • How do I initialize MPI inside the constructor? After initializing MPI inside the constructor of the class, will all the intermediate calls be parallelized too?

For example:

MyClass obj;

x = x; // will this be parallelized?
obj.calc();

y = x++; // will this be parallelized?

z = obj.result();
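
For concreteness, initializing MPI from a constructor is usually done RAII-style on top of the plain C API; the sketch below uses a hypothetical MpiEnvironment wrapper (not part of MPI), and note that initializing MPI does not by itself parallelize statements such as x = x above:

// Hypothetical RAII wrapper: MPI_Init in the constructor, MPI_Finalize in the destructor.
// Every process still runs the same sequential code; MPI only tells each process its rank.
#include <mpi.h>
#include <iostream>

class MpiEnvironment {
public:
    MpiEnvironment(int& argc, char**& argv) { MPI_Init(&argc, &argv); }
    ~MpiEnvironment() { MPI_Finalize(); }
    int rank() const { int r; MPI_Comm_rank(MPI_COMM_WORLD, &r); return r; }
    int size() const { int s; MPI_Comm_size(MPI_COMM_WORLD, &s); return s; }
};

int main(int argc, char** argv) {
    MpiEnvironment mpi(argc, argv);
    // Statements like x = x; and y = x++; still execute sequentially on every process.
    std::cout << "process " << mpi.rank() << " of " << mpi.size() << std::endl;
    return 0;
}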


Comments (4)

乜一 2024-08-02 06:21:26


I would really recommend picking up the Gropp MPI book; it really helps for basic MPI!

二手情话 2024-08-02 06:21:26


MPI doesn't parallelize anything automatically; it only gives you an interface for sending data between nodes. Your code is written as ordinary sequential code that runs independently on each node, and every once in a while you send data to some other node or try to receive data from another node.
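
A minimal sketch of that explicit message passing with the MPI C API (the integer payload here is made up for illustration):

// Every process executes this same program; data moves only through explicit
// MPI_Send / MPI_Recv calls.
#include <mpi.h>
#include <iostream>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        // Rank 0 explicitly sends one integer to every other rank.
        for (int dest = 1; dest < size; ++dest) {
            int work = dest * 10;
            MPI_Send(&work, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
        }
    } else {
        int work = 0;
        MPI_Recv(&work, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        std::cout << "rank " << rank << " received " << work << std::endl;
    }

    MPI_Finalize();
    return 0;
}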

殤城〤 2024-08-02 06:21:26

Chiming in on an old thread: I found OpenMPI and Boost::MPI nice to work with. The object-oriented design of the library may be a bit bolted on, but I found it much nicer to work with than pure MPI, especially the automatic serialization of many types, the rather extensible interface for gather/reduce functions, and the serialization of user-defined types.
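
A minimal sketch of that Boost.MPI style (assumes the Boost.MPI and Boost.Serialization libraries are installed and linked, and that the job is run with at least two ranks):

// boost::mpi::environment wraps MPI_Init/MPI_Finalize; send/recv serialize
// many standard types such as std::string automatically.
#include <boost/mpi.hpp>
#include <iostream>
#include <string>

namespace mpi = boost::mpi;

int main(int argc, char** argv) {
    mpi::environment env(argc, argv);
    mpi::communicator world;            // defaults to MPI_COMM_WORLD

    if (world.rank() == 0) {
        std::string msg = "hello from rank 0";
        world.send(1, 0, msg);          // (destination, tag, value)
    } else if (world.rank() == 1) {
        std::string msg;
        world.recv(0, 0, msg);          // (source, tag, value)
        std::cout << "rank 1 got: " << msg << std::endl;
    }
    return 0;
}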

¢好甜 2024-08-02 06:21:26


As background information:

Most applications that use MPI are written in Fortran or C. Every major implementation of MPI is written in C.

Support for the MPI C++ bindings is sketchy at best: some of the MPI datatypes are not available (e.g. MPI_DOUBLE), and there are issues with I/O and with the order in which headers are included in the source files. There are also name-mangling issues if the MPI library was built with C and the application is built in Fortran or C++. The mpich2 FAQ has entries that help work through these issues. I am less familiar with Open MPI and its particular behavior with Fortran and C++.

For your specific questions:

I think that you have a fundamental misunderstanding about what MPI is and is not, and about how application code should interact with the MPI libraries.

Parallel Programming with MPI is an excellent reference for learning to program with MPI. The code examples are in C, and most of the MPI APIs are shown in an example. I highly recommend that you work through this book to learn what parallel programming is and is not.
