Comparing CPU and GPU - does it always make sense?

I was reading this article on GPU speed vs CPU speed. Since a CPU has a lot of responsibilities the GPU does not need to have, why do we even compare them like that in the first place? The quote "I can’t recall another time I’ve seen a company promote competitive benchmarks that are an order of magnitude slower" makes it sound like both Intel and NVIDIA are making GPUs.

Obviously, from a programmer's perspective, you wonder if porting your application to the GPU is worth your time and effort, and in that case a (fair) comparison is useful. But does it always make sense to compare them?

What I am after is a technical explanation of why it might be weird for Intel to promote their slower-than-NVIDIA-GPUs benchmarks, as Andy Keane seems to think.

Answers (2)

我不是你的备胎 2025-01-11 09:41:33

"Since a CPU has a lot of responsibilities the GPU does not need to have, why do we even compare them like that in the first place?"

Well, if CPUs offered better performance than GPUs, people would use additional CPUs as coprocessors instead of using GPUs as coprocessors. These additional CPU coprocessors wouldn't necessarily have the same baggage as main host CPUs.

"Obviously, from a programmer's perspective, you wonder if porting your application to the GPU is worth your time and effort, and in that case a (fair) comparison is useful. But does it always make sense to compare them?"

I think it makes sense and is fair to compare them; they are both kinds of processors, after all, and knowing in what situations using one is beneficial or detrimental can be very useful information. The important thing to keep in mind is that there are situations where using a CPU is a far superior way to go, and situations where using a GPU makes much more sense. GPUs do not speed up every application.
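
To make that concrete, here is a minimal CUDA C++ sketch of the two extremes (the function names and the recurrence are illustrative, not taken from any particular benchmark). The first pattern maps well to a GPU because every element is independent; the second is dominated by a loop-carried dependency, so a single fast CPU core is the better fit.

```cpp
// GPU-friendly: each output element is independent (classic SAXPY),
// so thousands of threads can compute in parallel.
__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

// GPU-hostile: every iteration needs the previous result, so the work
// is inherently sequential and gains nothing from thousands of threads.
float serial_recurrence(int n, const float *x) {
    float acc = 0.0f;
    for (int i = 0; i < n; ++i)
        acc = 0.5f * acc + x[i];  // loop-carried dependency
    return acc;
}
```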

"What I am after is a technical explanation of why it might be weird for Intel to promote their slower-than-NVIDIA-GPUs benchmarks, as Andy Keane seems to think."

It sounds like Intel didn't pick a particularly good application example if their only point was that CPUs aren't all that bad compared to GPUs. They might have picked examples where CPUs were indeed faster; where there was not enough data parallelism or arithmetic intensity, or SIMD program behavior, to make GPUs efficient. If you're picking a fractal generating program to show CPUs are only 14x slower than GPUs, you're being silly; you should be computing terms in a series, or running a parallel job with lots of branch divergence or completely different code being executed by each thread. Intel could have done better than 14x; NVIDIA knows it, researchers and practitioners know it, and the muppets that wrote the paper NVIDIA is mocking should have known it.
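
As an illustration of the "lots of branch divergence" case, here is a hedged CUDA C++ sketch (the case bodies are arbitrary): threads in a warp execute in lockstep, so when neighboring threads take different branches the hardware serializes the paths and most lanes sit idle.

```cpp
// Illustrative divergent kernel: adjacent threads fall into different
// cases, so within each 32-thread warp the four paths run one after
// another instead of in parallel, wasting most of the GPU's throughput.
__global__ void divergent(const int *work_type, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    switch (work_type[i] & 3) {
        case 0:  out[i] = sinf((float)i);         break;
        case 1:  out[i] = sqrtf((float)i + 1.0f); break;
        case 2:  out[i] = expf(-(float)i);        break;
        default: out[i] = 0.0f;                   break;
    }
}
```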

酒与心事 2025-01-11 09:41:33

The answer depends on the kind of code to be executed. GPUs are great for highly parallelizable tasks, or tasks that demand high memory bandwidth, and there the speedups may indeed be very high. However, they are not well suited to applications with lots of sequential operations or with complex control flow.

This means that the numbers say hardly anything unless you know exactly which application they are benchmarking and how similar that use case is to the actual code you would like to accelerate. Depending on the code you run, your GPU may be 100 times faster or 100 times slower than a CPU. Typical usage scenarios require a mix of different kinds of operations, so the general-purpose CPU is not dead yet and won't be for quite some time.

If you have a specific task to solve, it may well make sense to compare the performance of CPU vs GPU for that particular task. However, the results you get from the comparison will usually not translate directly to the results for a different benchmark.
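
As a sketch of what such a task-specific comparison might look like, assuming a SAXPY-like workload (the problem size and launch configuration are placeholder choices): the important detail is to time the GPU path including the host/device transfers, since a real port pays for those too.

```cpp
#include <chrono>
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 24;  // ~16M elements, an arbitrary test size
    std::vector<float> x(n, 1.0f), y_cpu(n, 2.0f), y_gpu(n, 2.0f);

    // Time the task on the CPU.
    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < n; ++i) y_cpu[i] = 2.0f * x[i] + y_cpu[i];
    auto t1 = std::chrono::steady_clock::now();

    // Time the same task on the GPU, transfers included, since a real
    // port pays for copying the data in and out as well.
    float *dx, *dy;
    cudaMalloc(&dx, n * sizeof(float));
    cudaMalloc(&dy, n * sizeof(float));
    auto t2 = std::chrono::steady_clock::now();
    cudaMemcpy(dx, x.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dy, y_gpu.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, dx, dy);
    cudaMemcpy(y_gpu.data(), dy, n * sizeof(float), cudaMemcpyDeviceToHost);
    auto t3 = std::chrono::steady_clock::now();

    std::chrono::duration<double, std::milli> cpu_ms = t1 - t0, gpu_ms = t3 - t2;
    std::printf("CPU: %.2f ms, GPU incl. transfers: %.2f ms\n",
                cpu_ms.count(), gpu_ms.count());
    cudaFree(dx);
    cudaFree(dy);
    return 0;
}
```

For a memory-bound task like this one, the transfer cost often dominates the GPU time, which is one reason results from one benchmark rarely carry over to another.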
