First off, I'll point out that concurrent programming is not necessarily synonymous with parallel programming. Concurrent programming is about constructing applications from loosely-coupled tasks. For instance, a dialog window could implement the interaction with each control as a separate task. Parallel programming, on the other hand, is explicitly about spreading the solution of some computational task across more than a single piece of execution hardware, essentially always for performance reasons of some sort (note: even too little RAM is a performance reason when the alternative is swapping).
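To make that distinction concrete, here is a minimal C++ sketch; the handler names are made up for illustration, not taken from any framework. The first two threads are concurrency (independent, loosely-coupled tasks), the last two are parallelism (one computation split across workers only for speed):

```cpp
// Hypothetical illustration; the handler names are invented.
#include <numeric>
#include <thread>
#include <vector>

void handle_button()  { /* react to one control */ }
void handle_textbox() { /* react to another control */ }

int main() {
    // Concurrency: each control's interaction is its own task.
    // The point is program structure, not speed.
    std::thread ui_a(handle_button);
    std::thread ui_b(handle_textbox);

    // Parallelism: the same reduction, split across two workers
    // purely to finish faster.
    std::vector<double> data(1000000, 1.0);
    double lo = 0.0, hi = 0.0;
    std::thread w1([&] { lo = std::accumulate(data.begin(), data.begin() + data.size() / 2, 0.0); });
    std::thread w2([&] { hi = std::accumulate(data.begin() + data.size() / 2, data.end(), 0.0); });
    w1.join();
    w2.join();
    double total = lo + hi;  // same answer as the serial sum

    ui_a.join();
    ui_b.join();
    return total > 0 ? 0 : 1;
}
```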
So, I have to ask in return: What books are you referring to? Are they about concurrent programming (I have a few of these, there's a lot of interesting theory there), or about parallel programming?
If they really are about parallel programming, I'll make a few observations:
CUDA is a rapidly moving target, and has been since its release. A book written about it today would be half-obsolete by the time it made it into print.
OpenCL's standard was released just under a year ago. Stable implementations came out over the last 8 months or so. There's simply not been enough time to get a book written yet, let alone revised and published.
OpenMP is covered in at least a few of the parallel programming textbooks that I've used. Up to version 2 (v3 was just released), it was essentially all about data-parallel programming; a quick sketch of that style follows.
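Concretely, assuming a SAXPY-style loop (my example, not from any particular text), OpenMP-2-style data parallelism looks like this: one operation applied uniformly across an array, with the iteration space divided among threads by the runtime.

```cpp
// Hypothetical SAXPY loop; compile with OpenMP enabled, e.g. g++ -fopenmp saxpy.cpp
#include <vector>

int main() {
    const int n = 1000000;
    std::vector<float> x(n, 1.0f), y(n, 2.0f);
    const float a = 0.5f;

    // Classic OpenMP 2 data parallelism: the same operation over a data set,
    // with loop iterations split among threads by the runtime.
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];

    return 0;
}
```

Note there is no explicit task structure here; the work-sharing is entirely the runtime's. Task-oriented constructs only arrived with the `task` directive in OpenMP 3.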
I think those working with parallel computing academically today are usually coming from the cluster computing field. OpenCL and CUDA use graphics processors, which have more or less inadvertently evolved into general-purpose processors along with the development of more advanced graphics rendering algorithms.
However, the graphics people and the high performance computing people have been "discovering" each other for some time now, and a lot of research is being done on using GPUs for general-purpose computing.
Comments:
"always" is a bit strong; there are resources out there (example) that include data parallelism topics.
The classic book "The Connection Machine" by Hillis was all data parallelism. It's one of my favorites.