In my experience, the major jump from general-purpose processor programming to GPGPU programming is conceptual. The key here is data-parallel code.

Even in a multi-threaded environment on a CPU, each thread does its own thing at a low level, and synchronization between threads is a relatively rare occurrence. To use the power of a GPGPU, you need to be running thousands of threads which logically execute the same instructions on different data, almost completely in sync.

Learning the CUDA syntax is relatively quick compared to getting your head around the data-parallel paradigm, so if you intend to tool yourself up for GPGPU programming, starting with CUDA now would be a very worthwhile move.
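To make that concrete, here is a minimal CUDA sketch of data-parallel code (the kernel name and sizes are mine, purely for illustration): every one of the thousands of threads runs the same kernel body on a different element, distinguished only by its index.

    #include <cuda_runtime.h>

    // Every thread executes this same function; only its index differs.
    __global__ void addVectors(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)                      // guard the last, partially filled block
            c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;          // one million elements
        float *a, *b, *c;
        cudaMalloc(&a, n * sizeof(float));
        cudaMalloc(&b, n * sizeof(float));
        cudaMalloc(&c, n * sizeof(float));
        // ...fill a and b with real data in a real program...

        int threadsPerBlock = 256;
        int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
        addVectors<<<blocks, threadsPerBlock>>>(a, b, c, n);  // ~4096 blocks x 256 threads
        cudaDeviceSynchronize();

        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }

Contrast that with CPU threading, where each thread typically owns a whole chunk of work and follows its own control flow.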
From a learning point of view, I think you would benefit from starting with CUDA now, since it will help you a lot with thinking in data parallelism, which is what GPUs are good at. Then, when/if you turn to DirectX 11, you will have a good foundation for working with it; but it depends on the kind of time you have available (i.e. whether you have time to experiment with things just for the learning experience).

Alternatively, Apple is pushing for OpenCL (Open Computing Language) as the general solution, though not much is known about it at this point. This is another technology you can wait for and check out.

The Microsoft PDC conference is held later this month; maybe they will announce some useful info on DX11 to help you make up your mind.

My general advice would be that there is a lot to learn now which you will be able to use later (with DX11 or OpenCL), but you have to ask yourself whether you are willing to learn a technology which might not make it in the long run. Anyway, these are just my thoughts; I don't have a huge amount of experience with CUDA yet.

On a highly speculative note, my gut feeling is that APIs such as CUDA won't survive for long and that DirectX and/or OpenCL are the only solutions with a future (unless they really botch their implementations, which I doubt).
If you want the learning experience, go for it!
Another alternative is AMD/ATI's Stream SDK, which you can download here: http://ati.amd.com/technology/streamcomputing/sdkdwnld.html

nVidia's CUDA and ATI's CAL are roughly equivalent in features. CUDA only works on nVidia GPUs, and CAL only works on ATI GPUs.

Eventually, there will be good cross-platform development tools, but that's a huge void right now. DirectX 11 compute shaders and OpenCL will be fighting it out to be the tool of choice, but neither one is available yet.

If you want to build a "real" app, not just a throw-away learning experience, and you want it to work cross-platform, there are some alternatives: Brook, for example. Also, people have been doing GPGPU work with both DirectX and OpenGL (not OpenCL) for several years, without waiting for explicit GPGPU features. Go to gpgpu.org for pointers.
Both DirectX 11 Compute Shaders and OpenCL are mainly based on CUDA, so it is definitely worth starting to work with CUDA now. Basically, they all use the same memory model and have a similar syntax, which is closer to CUDA than to Brook+ (which you would use with the Stream SDK).

However, if you want DX11, there is no need to wait: just grab the November 2008 SDK from Microsoft, which comes with a DX11 preview that you can already use to write (at least) simple compute shader applications.
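For what it is worth, the parts of CUDA that carry over most directly are the thread-group hierarchy and the on-chip shared memory: CUDA's __shared__ and __syncthreads() correspond to groupshared / GroupMemoryBarrierWithGroupSync() in DX11 compute shaders and to __local / barrier(CLK_LOCAL_MEM_FENCE) in OpenCL. A rough sketch in CUDA (kernel name and block size are my own choices, for illustration):

    #include <cuda_runtime.h>

    #define BLOCK_SIZE 256   // a DX11 compute shader would declare [numthreads(256,1,1)]

    // Per-block sum reduction using on-chip shared memory.
    __global__ void blockSum(const float *in, float *blockSums, int n) {
        __shared__ float tile[BLOCK_SIZE];          // "groupshared" / "__local" elsewhere

        int i = blockIdx.x * blockDim.x + threadIdx.x;
        tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
        __syncthreads();                            // whole block waits here

        // Tree reduction within the block.
        for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
            if (threadIdx.x < stride)
                tile[threadIdx.x] += tile[threadIdx.x + stride];
            __syncthreads();
        }

        if (threadIdx.x == 0)
            blockSums[blockIdx.x] = tile[0];        // one partial sum per block
    }

Once you have written a few kernels like this, porting the same structure to an HLSL compute shader or an OpenCL kernel is mostly a matter of renaming the qualifiers and the barrier calls.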