The Death of the Cell Processor

Published 2024-08-09 10:27:42


Lately I have heard lots of people claiming that the Cell processor is dead, mainly for the following reasons:

  • Lack of support in the new PlayStation 3, as the user can no longer install Linux
  • The increasing processing power of GPUs and their sinking costs
  • The existence of a unified programming approach (OpenCL) for different GPUs but not for the CBE (well, today one was announced for the Cell!)
  • Scarcity of real-world examples of Cell use (apart from academic circles)
  • A general feeling of failure

What do you think? If you started programming the Cell two or three years ago, will you continue with it, or are you considering switching to GPUs? Is a new version of the Cell coming?

Thanks


Comments (4)

耳钉梦 2024-08-16 10:27:42

I'd say the reasons for the lack of popularity of Cell development are closer to:

  • The lack of success of the PS3 (due to many mistakes on Sony's part and strong competition from the XBOX 360)
  • Low manufacturing yield, high cost (partly due to low yield), and a lack of affordable hardware systems other than the PS3
  • Development difficulty (the Cell is an unusual processor to design for, and the tooling is lacking)
  • Failure to achieve a significant performance difference compared to existing x86-based commodity hardware. Even the XBOX 360's several-year-old triple-core Power architecture processor has proven competitive; compared to a modern Core2 Quad processor, the Cell's advantages just aren't evident.
  • Increasing competition from GPU general-purpose computing platforms such as CUDA

(り薆情海 2024-08-16 10:27:42


It's easier to write parallel programs for 1000s of threads than it is for 10s of threads. GPUs have 1000s of threads, with hardware thread scheduling and load balancing. Although current GPUs are suited mainly to small data-parallel kernels, they have tools that make such programming trivial. Cell has only a few processors, on the order of 10, in consumer configurations. (The Cell derivatives used in supercomputers cross the line and have 100s of processors.)

IMHO one of the biggest problems with Cell was the lack of an instruction cache. (I argued this vociferously with the Cell architects on a plane back from the MICRO conference in Barcelona in 2005. Although they disagreed with me, I have heard the same from big supercomputer users of Cell.) People can cope with fitting into fixed-size data memories - GPUs have the same problem, although people complain. But fitting code into a fixed-size instruction memory is a pain. Add an IF statement, and performance may fall off a cliff because you have to start using overlays. It's a lot easier to control your data structures than it is to avoid adding code to fix bugs late in the development cycle.

GPUs originally had the same problem as Cell - no caches, neither I nor D.

But GPUs did many more threads, and did data parallelism so much better than Cell, that they ate up that market. That left Cell with only its locked-in console customers, and with code that was more complicated than GPU code but less complicated than CPU code. Squeezed in the middle.

And, in the meantime, GPUs are adding I$ and D$. So they are becoming easier to program.

忘东忘西忘不掉你 2024-08-16 10:27:42


Why did Cell die?

1) The SDK was horrid. I saw some very bright developers all but scratching their eyes out, poring through IBM mailing lists trying to figure out this problem or that with the Cell SDK.

2) The bus between compute units was starting to show scaling problems and never would have made it to 32 cores.

3) OpenCL was about 3-4 years too late to be of any use.

叹沉浮 2024-08-16 10:27:42


If you started two or three years ago
to program the cell, will you continue
on this or are you considering
switching to GPUs?

I would have thought that 90% of the people who program for the Cell processor are not in a position where they can arbitrarily decide to stop programming for it. Are you aiming this question at a very specific development community?
