Is parallel programming == multithreaded programming?

Published 2024-08-22 04:44:20 · 10 views · 0 comments

Is parallel programming == multithreaded programming?


Comments (3)

小矜持 2024-08-29 04:44:20

Multithreaded programming is parallel, but parallel programming is not necessarily multithreaded.

Unless the multithreading occurs on a single core, in which case it is only concurrent.

廻憶裏菂餘溫 2024-08-29 04:44:20

Not necessarily. You can distribute jobs between multiple processes and even multiple machines - I wouldn't class that as "multi-threaded" programming as each process may only use a single thread, but it's certainly parallel programming. Admittedly you could then argue that with multiple processes there are multiple threads within the system as a whole...

Ultimately, definitions like this are only useful within a context. In your particular case, what difference is it going to make? Or is this just out of interest?

说好的呢 2024-08-29 04:44:20

No. Multithreaded programming means that you have a single process, and this process spawns a bunch of threads. All the threads run at the same time, but they all live in the same process space: they can access the same memory, share the same open file descriptors, and so on.
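A minimal sketch of that shared-memory point, using Python's `threading` module (the answer itself is language-agnostic): every thread mutates the same variable in the one process's memory, which is exactly why a lock is needed.

```python
import threading

counter = 0
lock = threading.Lock()

def work(n):
    """Increment the shared counter n times."""
    global counter
    for _ in range(n):
        # All threads see the same `counter`: one process, one address space.
        with lock:
            counter += 1

threads = [threading.Thread(target=work, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000: four threads all updated the same variable
```

Without the lock, the `counter += 1` read-modify-write could interleave across threads and lose updates, which is the classic hazard of the shared-memory model described above.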

Parallel programming is a bit more "general" as a definition. In MPI, you perform parallel programming by running the same program as multiple processes, with the difference that every process gets a different "identifier", so you can differentiate the processes if you want to, but it is not required. Also, these processes are independent from each other, and they have to communicate via pipes or network/Unix sockets. MPI libraries provide specific functions to move data to and from the nodes, in synchronous or asynchronous style.

In contrast, OpenMP achieves parallelization via multithreading and shared-memory. You specify special directives to the compiler, and it automagically performs parallel execution for you.

The advantage of OpenMP is that it is very transparent. Have a loop to parallelize? Just add a couple of directives and the compiler chunks it into pieces, assigning each piece of the loop to a different processor. Unfortunately, you need a shared-memory architecture for this. Clusters with a node-based architecture cannot use OpenMP at the cluster level. MPI lets you work on a node-based architecture, but at the price of more complex and less transparent usage.
