What is CUDA like? What is it for? What are the benefits? And how do I get started?

Posted on 2024-10-20 19:37:23


I am interested in developing under some new technology and I was thinking of trying out CUDA. Now... their documentation is too technical and doesn't provide the answers I'm looking for. Also, I'd like to hear those answers from people who already have some experience with CUDA.

Basically my questions are those in the title:

What exactly IS CUDA? (is it a framework? Or an API? What?)

What is it for? (is there something more than just programming to the GPU?)

What is it like?

What are the benefits of programming against CUDA instead of programming for the CPU?

What is a good place to start programming with CUDA?


2 Answers

西瑶 2024-10-27 19:37:23


CUDA brings together several things:

  • Massively parallel hardware designed to run generic (non-graphic) code, with appropriate drivers for doing so.
  • A programming language based on C for programming said hardware, and an assembly language that other programming languages can use as a target.
  • A software development kit that includes libraries, various debugging, profiling and compiling tools, and bindings that let CPU-side programming languages invoke GPU-side code.

The point of CUDA is to write code that can run on compatible massively parallel SIMD architectures: this includes several GPU types as well as dedicated compute hardware such as the nVidia Tesla line (essentially GPUs without display output). Massively parallel hardware can run far more operations per second than a CPU, at a fairly similar financial cost, yielding performance improvements of 50× or more in workloads that allow it.

One of the benefits of CUDA over the earlier methods is that a general-purpose language is available, instead of having to use pixel and vertex shaders to emulate general-purpose computers. That language is based on C with a few additional keywords and concepts, which makes it fairly easy for non-GPU programmers to pick up.
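To make the "C with a few extra keywords" point concrete, here is a minimal sketch (mine, not part of the answer) of the classic vector-add example: __global__ marks GPU-side code, and the <<<blocks, threads>>> launch syntax is the main non-C addition. Names such as add_vectors and the buffer sizes are made up purely for illustration.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // GPU-side code: __global__ marks a "kernel" that runs once per thread.
    __global__ void add_vectors(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // this thread's global index
        if (i < n) {
            c[i] = a[i] + b[i];                         // each thread handles one element
        }
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);

        // Host (CPU) buffers.
        float *h_a = (float *)malloc(bytes);
        float *h_b = (float *)malloc(bytes);
        float *h_c = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

        // Device (GPU) buffers, plus copies of the inputs.
        float *d_a, *d_b, *d_c;
        cudaMalloc(&d_a, bytes);
        cudaMalloc(&d_b, bytes);
        cudaMalloc(&d_c, bytes);
        cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

        // Kernel launch: <<<blocks, threads-per-block>>> is the CUDA-specific syntax.
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        add_vectors<<<blocks, threads>>>(d_a, d_b, d_c, n);

        // Copy the result back and spot-check it.
        cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %f, c[n-1] = %f\n", h_c[0], h_c[n - 1]);

        cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
        free(h_a); free(h_b); free(h_c);
        return 0;
    }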

It's also a sign that nVidia is willing to support general-purpose parallelization on their hardware: it now sounds less like "hacking around with the GPU" and more like "using a vendor-supported technology", which makes adoption easier in the presence of non-technical stakeholders.

To start using CUDA, download the SDK, read the manual (seriously, it's not that complicated if you already know C), and buy CUDA-compatible hardware. You can use the emulator at first, but since performance is the ultimate point, it's better if you can actually try your code on real hardware.
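For what it's worth, a single-file program like the vector-add sketch above typically builds with the toolkit's nvcc compiler driver, e.g. nvcc vector_add.cu -o vector_add (the filename is just for illustration), and then runs like any other executable, assuming the toolkit and a compatible driver are installed.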

凉风有信 2024-10-27 19:37:23


(Disclaimer: I have only used CUDA for a semester project in 2008, so things might have changed since then.) CUDA is a development toolchain for creating programs that can run on nVidia GPUs, as well as an API for controlling such programs from the CPU.

The benefit of GPU programming over CPU programming is that for some highly parallelizable problems you can gain massive speedups (roughly two orders of magnitude). However, many problems are difficult or impossible to formulate in a way that makes them suitable for parallelization.

In one sense, CUDA is fairly straightforward, because you can use regular C to create the programs. However, in order to achieve good performance, a lot of things must be taken into account, including many low-level details of the Tesla GPU architecture.
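To give a flavour of the kind of low-level detail meant here, below is a sketch of my own (names like block_sum are made up) of a per-block sum reduction. It uses __shared__ memory, a small on-chip scratchpad private to each thread block, and __syncthreads() barriers, two of the architecture-specific concepts that tend to matter for performance.

    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    // Each block reduces 256 input elements to one partial sum using shared
    // memory (fast, on-chip, visible only to the threads of that block).
    __global__ void block_sum(const float *in, float *out, int n) {
        __shared__ float partial[256];          // assumes blockDim.x == 256

        const int tid = threadIdx.x;
        const int i = blockIdx.x * blockDim.x + tid;
        partial[tid] = (i < n) ? in[i] : 0.0f;  // stage one element per thread
        __syncthreads();                        // wait until every thread has written

        // Tree reduction: halve the number of active threads each step.
        for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
            if (tid < stride) {
                partial[tid] += partial[tid + stride];
            }
            __syncthreads();
        }

        if (tid == 0) {
            out[blockIdx.x] = partial[0];       // one partial sum per block
        }
    }

    int main() {
        const int n = 1 << 20, threads = 256;
        const int blocks = (n + threads - 1) / threads;

        // Input: n ones, so the total should come out to n.
        float *h_in = (float *)malloc(n * sizeof(float));
        for (int i = 0; i < n; ++i) h_in[i] = 1.0f;

        float *d_in, *d_out;
        cudaMalloc(&d_in, n * sizeof(float));
        cudaMalloc(&d_out, blocks * sizeof(float));
        cudaMemcpy(d_in, h_in, n * sizeof(float), cudaMemcpyHostToDevice);

        block_sum<<<blocks, threads>>>(d_in, d_out, n);

        // Finish on the CPU: add up the per-block partial sums.
        float *h_out = (float *)malloc(blocks * sizeof(float));
        cudaMemcpy(h_out, d_out, blocks * sizeof(float), cudaMemcpyDeviceToHost);
        float total = 0.0f;
        for (int b = 0; b < blocks; ++b) total += h_out[b];
        printf("sum = %.0f (expected %d)\n", total, n);

        cudaFree(d_in); cudaFree(d_out); free(h_in); free(h_out);
        return 0;
    }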
