Feasibility of a GPU as a CPU?

Published 2024-07-04 13:38:35

Comments (8)

灼痛 2024-07-11 13:38:35

Commit the time if you are interested in scientific and parallel computing. Don't think of CUDA as a way of making a GPU appear to be a CPU; it only allows a more direct method of programming GPUs than older GPGPU programming techniques.
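
For a sense of what that "more direct method" looks like, here is a minimal CUDA sketch (the array size, scalar and names are just for illustration): an ordinary C-like kernel launched over a grid of threads, rather than a computation disguised as a rendering pass over textures, as the older GPGPU techniques required.

    #include <cstdio>
    #include <cuda_runtime.h>

    // One thread per element: y[i] = a * x[i] + y[i].
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));  // unified memory keeps the host side short
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);  // ~n threads, 256 per block
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);  // expect 5.0
        cudaFree(x);
        cudaFree(y);
        return 0;
    }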

General-purpose CPUs derive their ability to work well on a wide variety of tasks from all the work that has gone into branch prediction, pipelining, superscalar execution, etc. This makes it possible for them to achieve good performance on a wide variety of workloads, while making them suck at high-throughput, memory-intensive floating-point operations.

GPUs were originally designed to do one thing, and do it very, very well. Graphics operations are inherently parallel. You can calculate the colour of all pixels on the screen at the same time, because there are no data dependencies between the results. Additionally, the algorithms needed did not have to deal with branches, since nearly any branch that would be required could be achieved by setting a coefficient to zero or one. The hardware could therefore be very simple. There is no need to worry about branch prediction, and instead of making a processor superscalar, you can simply add as many ALUs as you can cram onto the chip.
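
A toy illustration of that point, written as a CUDA-style kernel rather than real graphics code (the image layout and shading rule are invented for the example): every pixel is computed by its own thread with no dependency on its neighbours, and the inside/outside decision is expressed as a 0/1 coefficient instead of control flow over the result.

    // Toy per-pixel shading: each thread owns one pixel, and the inside/outside
    // decision is a coefficient of 0 or 1 rather than an if/else over the result.
    __global__ void shade(float *img, int w, int h, float cx, float cy, float r) {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= w || y >= h) return;  // guard for the edge of the image

        float dx = x - cx, dy = y - cy;
        float inside = (dx * dx + dy * dy <= r * r) ? 1.0f : 0.0f;  // coefficient, not a data-dependent path
        img[y * w + x] = inside * 1.0f + (1.0f - inside) * 0.2f;    // blend of two shades
    }

    // Launched with one thread per pixel, e.g.:
    // shade<<<dim3((w + 15) / 16, (h + 15) / 16), dim3(16, 16)>>>(img, w, h, cx, cy, r);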

With programmable texture and vertex shaders, GPUs gained a path to general programmability, but they are still limited by the hardware, which is still designed for high-throughput floating-point operations. Some additional circuitry will probably be added to enable more general-purpose computation, but only up to a point. Anything that compromises the ability of a GPU to do graphics won't make it in. After all, GPU companies are still in the graphics business, and the target market is still gamers and people who need high-end visualization.

The GPGPU market is still a drop in the bucket, and to a certain extent will remain so. After all, "it looks pretty" is a much lower standard to meet than "100% guaranteed and reproducible results, every time."

So, in short, GPUs will never be feasible as CPUs. They are simply designed for different kinds of workloads. I expect GPUs will gain features that make them useful for quickly solving a wider variety of problems, but they will always be graphics processing units first and foremost.

It will always be important to match the problem you have with the most appropriate tool available to solve it.

情泪▽动烟 2024-07-11 13:38:35

Long-term I think that the GPU will cease to exist, as general purpose processors evolve to take over those functions. Intel's Larrabee is the first step. History has shown that betting against x86 is a bad idea.

Study of massively parallel architectures and vector processing will still be useful.

北斗星光 2024-07-11 13:38:35

First of all, I don't think this question really belongs on SO.

In my opinion the GPU is a very interesting alternative whenever you do vector-based float mathematics. However, this translates to: it will not become mainstream. Most mainstream (desktop) applications do very few floating-point calculations.

It has already gained traction in games (physics engines) and in scientific calculations. If you consider either of those two "mainstream", then yes, the GPU will become mainstream.

I would not consider these two mainstream, and I therefore think the GPU will rise to be the next adopted fad in the mainstream industry.

If you, as a student, have any interest in heavily physics-based scientific calculations, you should absolutely commit some time to it (GPUs are very interesting pieces of hardware anyway).

箜明 2024-07-11 13:38:35

GPUs will never supplant CPUs. A CPU executes a set of sequential instructions, and a GPU does a very specific type of calculation in parallel. GPUs have great utility in numerical computing and graphics; however, most programs can in no way make use of this flavor of computing.
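
To make that contrast concrete, here is a rough sketch of the same element-wise operation written both ways (a hypothetical example; the names and the workload are made up):

    // Sequential CPU version: one core walks the array in order.
    void square_cpu(const float *in, float *out, int n) {
        for (int i = 0; i < n; ++i)
            out[i] = in[i] * in[i];
    }

    // Parallel GPU version: one thread per element, all elements at once.
    __global__ void square_gpu(const float *in, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[i] * in[i];
    }

Most programs look like the first function interleaved with I/O, pointer chasing and branching, which is exactly the shape of work the second one handles poorly; the GPU version only pays off when the array is large and the data is already on the device.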

You will soon begin seeing new processors from Intel and AMD that include GPU-esque floating-point vector computations as well as standard CPU computations.

谁的新欢旧爱 2024-07-11 13:38:35

I think it's the right way to go.

Considering that GPUs have been tapped to create cheap supercomputers, it appears to be the natural evolution of things. With so much computing power and R&D already done for you, why not exploit the available technology?

So go ahead and do it. It will make for some cool research, as well as a legit reason to buy that high-end graphics card so you can play Crysis and Assassin's Creed at full graphical detail ;)

野侃 2024-07-11 13:38:35

It's one of those things that you see one or two applications for, but soon enough someone will come up with a "killer app" that figures out how to do something more generally useful with it, at superfast speeds.

Pixel shaders can apply routines to large arrays of float values, so maybe we'll see some GIS coverage applications or, well, I don't know. If you don't devote more time to it than I have, then you'll have the same level of insight as me - i.e. little!
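
Purely as a hypothetical sketch of what such a GIS workload could look like on a GPU (the overlay rule, the no-data handling and all names here are invented for illustration): combining two raster coverages is exactly "apply a routine to a large array of float values", one thread per cell.

    // Hypothetical raster overlay: combine two coverages cell by cell,
    // treating a sentinel value as "no data". One thread per raster cell.
    __global__ void overlay(const float *a, const float *b, float *out,
                            int cells, float nodata) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= cells) return;
        bool missing = (a[i] == nodata) || (b[i] == nodata);
        out[i] = missing ? nodata : 0.5f * (a[i] + b[i]);  // e.g. average the two layers
    }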

I have a feeling it could be a really big thing, as do Intel and S3; maybe it just needs one little tweak added to the hardware, or someone with a light bulb above their head.

机场等船 2024-07-11 13:38:35

With so much untapped power I cannot see how it would go unused for too long. The question is, though, how the GPU will be used for this. CUDA seems to be a good guess for now, but other technologies are emerging on the horizon which might make it more approachable for the average developer.

Apple have recently announced OpenCL, which they claim is much more than CUDA, yet quite simple. I'm not sure exactly what to make of that, but the Khronos Group (the guys working on the OpenGL standard) are working on the OpenCL standard and trying to make it highly interoperable with OpenGL. This might lead to a technology which is better suited for normal software development.

It's an interesting subject and, incidentally, I'm about to start my master's thesis on the subject of how best to make GPU power available to the average developer (if possible), with CUDA as the main focus.

一瞬间的火花 2024-07-11 13:38:35

A long time ago, it was really hard to do floating-point calculations (thousands/millions of cycles of emulation per instruction on terribly performing (by today's standards) CPUs like the 80386). People who needed floating-point performance could get an FPU (for example, the 80387). The old FPUs were fairly tightly integrated into the CPU's operation, but they were external. Later on they became integrated, with the 80486 having an FPU built in.

The old-time FPU is analogous to GPU computation, and we can already see the integrated equivalent with AMD's APUs: an APU is a CPU with a GPU built into it.

So I think the actual answer to your question is: GPUs won't become CPUs; instead, CPUs will have a GPU built in.
