Getting Started with Parallel Programming

So it looks like multicore and all its associated complications are here to stay. I am planning a software project that will definitely benefit from parallelism. The problem is that I have very little experience writing concurrent software. I studied it at university and understand the concepts and theory very well, but I have had no useful hands-on experience building software to run on multiple processors since school.

So my question is: what is the best way to get started with multiprocessor programming?
I am mostly familiar with Linux development in C/C++ and with Obj-C on Mac OS X, and I have almost zero Windows experience. Also, my planned software project will require FFTs and probably floating-point comparisons over a lot of data.

There are OpenCL, OpenMP, MPI, POSIX threads, and so on... Which technologies should I start with?

Here are a couple of stack options I am considering, but I am not sure whether they will let me experiment while working towards my goal:

  • Should I get Snow Leopard and try to get OpenCL Obj-C programs running on the ATI X1600 GPU in my laptop? or
  • Should I get a PlayStation and try writing C code to throw across its six available Cell SPE cores? or
  • Should I build out a Linux box with an Nvidia card and try working with CUDA?

Thanks in advance for your help.

Comments (7)

迎风吟唱 2024-08-14 02:39:48

I'd suggest going with OpenMP and MPI initially; I'm not sure it matters which you choose first, but you definitely ought to (in my opinion :-) ) learn both the shared-memory and distributed-memory approaches to parallel computing.
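
As a taste of the shared-memory side, here is a minimal OpenMP sketch in C; the array size and contents are arbitrary placeholders, and it assumes a compiler with OpenMP support (e.g. gcc -fopenmp):

    /* omp_sum.c - sum a large array with an OpenMP parallel for.
       Build with an OpenMP-capable compiler, e.g. gcc -fopenmp omp_sum.c -o omp_sum */
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const long n = 10000000;
        double *data = malloc(n * sizeof(double));
        for (long i = 0; i < n; i++)
            data[i] = (double)i / n;

        double total = 0.0;
        /* Each thread gets a private partial sum; the reduction clause
           combines them at the end, avoiding a data race on `total`. */
        #pragma omp parallel for reduction(+:total)
        for (long i = 0; i < n; i++)
            total += data[i];

        printf("sum = %f using up to %d threads\n", total, omp_get_max_threads());
        free(data);
        return 0;
    }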

I suggest avoiding OpenCL, CUDA, and POSIX threads at first: get a good grounding in the basics of parallel applications before you start to wrestle with the substructure. For example, it's much easier to learn to use broadcast communication in MPI than it is to program the equivalent with threads.
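
For instance, that broadcast is a single collective call in MPI. A minimal sketch in C, with arbitrary placeholder data, assuming an MPI implementation such as Open MPI or MPICH is installed:

    /* bcast_demo.c - rank 0 broadcasts a small parameter array to every rank. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank, size;
        double params[3] = {0.0, 0.0, 0.0};

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {                       /* only the root fills in the data */
            params[0] = 1.5; params[1] = 2.5; params[2] = 3.5;
        }

        /* One collective call sends the array from rank 0 to all other ranks. */
        MPI_Bcast(params, 3, MPI_DOUBLE, 0, MPI_COMM_WORLD);

        printf("rank %d of %d sees params[0] = %.1f\n", rank, size, params[0]);
        MPI_Finalize();
        return 0;
    }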

I'd stick with C/C++ on your Mac since you are already familiar with them, and there are good open-source OpenMP and MPI libraries for that platform and those languages.

And (for some of us this is a big plus) whatever you learn about C/C++ and MPI, and to a lesser extent OpenMP, will serve you well when you graduate to real supercomputers.

All subjective and argumentative, so ignore this if you wish.

姜生凉生 2024-08-14 02:39:48

If you're interested in parallelism in OS X, make sure to check out Grand Central Dispatch, especially since the tech has been open-sourced and may soon see much wider adoption.
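
As a rough sketch of what that looks like, assuming Snow Leopard or later (libdispatch plus the blocks extension), dispatch_apply farms independent per-element work out across your cores; the per-element computation below is just a placeholder:

    /* gcd_apply.c - run independent per-element work across cores with GCD.
       On Mac OS X, build with: cc gcd_apply.c -o gcd_apply */
    #include <dispatch/dispatch.h>
    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const size_t n = 1000000;
        double *out = malloc(n * sizeof(double));
        dispatch_queue_t q = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

        /* GCD schedules one block invocation per index across the available cores. */
        dispatch_apply(n, q, ^(size_t i) {
            out[i] = sin((double)i) * cos((double)i);   /* placeholder per-element work */
        });

        printf("out[42] = %f\n", out[42]);
        free(out);
        return 0;
    }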

疯了 2024-08-14 02:39:48

The traditional, imperative 'shared state with locks' isn't your only choice. Rich Hickey, the creator of Clojure, a Lisp-1 for the JVM, makes a very compelling argument against shared state. He basically argues that it's almost impossible to get right. You may want to read up on message passing à la Erlang actors, or on STM libraries.
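
For contrast, the 'shared state with locks' style he argues against looks roughly like the sketch below (names and counts are arbitrary): every access to the shared counter must be guarded, and omitting the lock anywhere silently introduces a data race.

    /* locked_counter.c - the classic shared-state-with-locks style, shown for contrast.
       Build with: cc -pthread locked_counter.c -o locked_counter */
    #include <pthread.h>
    #include <stdio.h>

    #define THREADS    4
    #define INCREMENTS 1000000

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < INCREMENTS; i++) {
            pthread_mutex_lock(&lock);     /* omit this anywhere and you have a data race */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t tids[THREADS];
        for (int t = 0; t < THREADS; t++)
            pthread_create(&tids[t], NULL, worker, NULL);
        for (int t = 0; t < THREADS; t++)
            pthread_join(tids[t], NULL);
        printf("counter = %ld (expected %d)\n", counter, THREADS * INCREMENTS);
        return 0;
    }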

尾戒 2024-08-14 02:39:48

You should Learn You Some Erlang. For great good.

夜唯美灬不弃 2024-08-14 02:39:48

You don't need special hardware like graphics cards and Cell processors to do parallel programming. Your plain multi-core CPU will also profit from parallel programming. If you have experience with C/C++ and Objective-C, start with one of those and learn to use threads. Start with simple examples like matrix multiplication or maze solving and you'll learn about those pesky problems (parallel software is non-deterministic and full of Heisenbugs).
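
As a concrete version of that starter exercise, here is a minimal POSIX-threads matrix multiply in C that just splits the rows of the result matrix across a few threads; the matrix size and thread count are arbitrary:

    /* pthread_matmul.c - naive matrix multiply with rows split across POSIX threads.
       Build with: cc -pthread pthread_matmul.c -o pthread_matmul */
    #include <pthread.h>
    #include <stdio.h>

    #define N       512
    #define THREADS 4

    static double A[N][N], B[N][N], C[N][N];

    typedef struct { int row_begin, row_end; } slice_t;

    static void *multiply_slice(void *arg) {
        slice_t *s = arg;
        for (int i = s->row_begin; i < s->row_end; i++)
            for (int j = 0; j < N; j++) {
                double sum = 0.0;
                for (int k = 0; k < N; k++)
                    sum += A[i][k] * B[k][j];
                C[i][j] = sum;
            }
        return NULL;
    }

    int main(void) {
        /* fill A and B with something deterministic */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                A[i][j] = (double)(i + j);
                B[i][j] = (double)(i - j);
            }

        pthread_t tids[THREADS];
        slice_t slices[THREADS];
        int rows_per_thread = N / THREADS;

        for (int t = 0; t < THREADS; t++) {
            slices[t].row_begin = t * rows_per_thread;
            slices[t].row_end = (t == THREADS - 1) ? N : (t + 1) * rows_per_thread;
            pthread_create(&tids[t], NULL, multiply_slice, &slices[t]);
        }
        for (int t = 0; t < THREADS; t++)
            pthread_join(tids[t], NULL);

        printf("C[0][0] = %f, C[N-1][N-1] = %f\n", C[0][0], C[N - 1][N - 1]);
        return 0;
    }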

If you want to go into massive parallelism, I'd choose OpenCL as it's the most portable option. CUDA still has a larger community, more documentation and examples, and is a bit easier, but you'd need an Nvidia card.

梦情居士 2024-08-14 02:39:48

Maybe your problem is suitable for the MapReduce paradigm. It automatically takes care of load balancing and concurrency issues, and the research paper from Google is already a classic. There is a single-machine implementation called Mars that runs on GPUs, which may work fine for you. There is also Phoenix, which runs MapReduce on multi-core and symmetric multiprocessor machines.

轻拂→两袖风尘 2024-08-14 02:39:48

I would start with MPI, as you learn how to deal with distributed memory. Pacheco's book is an oldie but a goodie, and MPI runs fine out of the box on OS X these days, giving pretty good multicore performance.
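
For what it's worth, with Open MPI or MPICH installed the out-of-the-box workflow is just a compiler wrapper plus a launcher, for example with the broadcast sketch from earlier in this thread:

    mpicc bcast_demo.c -o bcast_demo
    mpirun -np 4 ./bcast_demo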
