How is cache memory shared in multicore Intel CPUs?

Posted 2024-07-23 06:24:42

I have a few questions regarding the cache memories used in multicore CPUs or multiprocessor systems. (Although not directly related to programming, this has many repercussions when one writes software for multicore processors/multiprocessor systems, hence asking here!)

  1. In a multiprocessor system or a multicore processor (Intel Quad Core, Core 2 Duo, etc.) does each CPU core/processor have its own cache memory (data and program cache)?

  2. Can one processor/core access another's cache memory? If they are allowed to access each other's caches, then I believe there might be fewer cache misses: if a particular processor's cache does not have some data, a second processor's cache might have it, avoiding a read from memory into the first processor's cache. Is this assumption valid and true?

  3. Will there be any problems in allowing any processor to access another processor's cache memory?

Comments (3)

伤痕我心 2024-07-30 06:24:42

In a multiprocessor system or a multicore processor (Intel Quad Core,
Core 2 Duo, etc.) does each CPU core/processor have its own cache
memory (data and program cache)?

  1. Yes. It varies by the exact chip model, but the most common design is for each CPU core to have its own private L1 data and instruction caches.

    On old and/or low-power CPUs, the next level of cache is typically a unified L2 cache, usually shared between all cores. Or on 65nm Core2Quad (which was two Core2Duo dies in one package), each pair of cores had their own last-level cache and couldn't communicate as efficiently.

Modern mainstream Intel CPUs (since the first-gen i7 CPUs, Nehalem) use 3 levels of cache.

  • 32kiB split L1i/L1d: private per-core (same as earlier Intel)
  • 256kiB unified L2: private per-core. (1MiB on Skylake-avx512).
  • large unified L3: shared among all cores

Last-level cache is a large shared L3. It's physically distributed between cores, with a slice of L3 going with each core on the ring bus that connects the cores. Typically there is 1.5 to 2.25 MB of L3 cache per core, so a many-core Xeon might have a 36 MB L3 cache shared between all its cores. This is why a dual-core chip has 2 to 4 MB of L3, while a quad-core has 6 to 8 MB.

On CPUs other than Skylake-avx512, L3 is inclusive of the per-core private caches so its tags can be used as a snoop filter to avoid broadcasting requests to all cores. i.e. anything cached in a private L1d, L1i, or L2, must also be allocated in L3. See Which cache mapping technique is used in intel core i7 processor?

David Kanter's Sandybridge write-up has a nice diagram of the memory hierarchy / system architecture, showing the per-core caches and their connection to the shared L3, and the DDR3 / DMI (chipset) / PCIe connections to that. (This still applies to Haswell / Skylake-client / Coffee Lake, except with DDR4 in later CPUs.)
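To see this hierarchy on a concrete machine, here is a minimal sketch. It assumes a Linux system, where the kernel exposes the cache topology under /sys/devices/system/cpu/; the paths and attribute names are standard sysfs, but everything else (buffer sizes, formatting) is just illustrative:

```c
/* Sketch: print CPU 0's cache hierarchy from Linux sysfs.
 * Assumes a Linux kernel that populates /sys/devices/system/cpu/. */
#include <stdio.h>

/* Read one short sysfs attribute into buf; returns 0 on success. */
static int read_attr(const char *path, char *buf, size_t len) {
    FILE *f = fopen(path, "r");
    if (!f) return -1;
    if (!fgets(buf, (int)len, f)) { fclose(f); return -1; }
    fclose(f);
    for (char *p = buf; *p; p++)        /* strip trailing newline */
        if (*p == '\n') *p = '\0';
    return 0;
}

int main(void) {
    char path[128], level[8] = "?", type[16] = "?", size[16] = "?", shared[64] = "?";
    const char *base = "/sys/devices/system/cpu/cpu0/cache/index%d/%s";
    for (int i = 0; ; i++) {            /* index0, index1, ... until one is missing */
        snprintf(path, sizeof path, base, i, "level");
        if (read_attr(path, level, sizeof level) != 0) break;
        snprintf(path, sizeof path, base, i, "type");
        read_attr(path, type, sizeof type);
        snprintf(path, sizeof path, base, i, "size");
        read_attr(path, size, sizeof size);
        snprintf(path, sizeof path, base, i, "shared_cpu_list");
        read_attr(path, shared, sizeof shared);
        /* On typical Intel parts, the L1/L2 entries are shared only by one
         * core's hyperthreads; the L3 entry lists every CPU in the package. */
        printf("L%-2s %-12s %-8s shared by CPUs %s\n", level, type, size, shared);
    }
    return 0;
}
```

On a typical quad-core i7 this prints private L1d/L1i and L2 entries whose shared_cpu_list covers only one core's hyperthreads, plus a single large L3 entry whose list spans every CPU.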

Can one processor/core access another's cache memory? If they are
allowed to access each other's caches, then I believe there might be
fewer cache misses: if a particular processor's cache does not have
some data, a second processor's cache might have it, avoiding a read
from memory into the first processor's cache. Is this assumption
valid and true?

  1. No. Each CPU core's L1 caches tightly integrate into that core. Multiple cores accessing the same data will each have their own copy of it in their own L1d caches, very close to the load/store execution units.

    The whole point of multiple levels of cache is that a single cache can't be fast enough for very hot data, but can't be big enough for less-frequently used data that's still accessed regularly. Why is the size of L1 cache smaller than that of the L2 cache in most of the processors? (A pointer-chasing sketch that makes these levels visible from software follows this list.)

    Going off-core to another core's caches wouldn't be faster than just going to L3 in Intel's current CPUs. Alternatively, the mesh network between cores required to make this work would be prohibitively expensive compared to just building a larger / faster L3 cache.

    The small/fast caches built into other cores are there to speed up those cores. Sharing them directly would probably cost more power (and maybe even more transistors / die area) than other ways of increasing cache hit rate. (Power is a bigger limiting factor than transistor count or die area. That's why modern CPUs can afford to have large private L2 caches.)

    Plus you wouldn't want other cores polluting the small private cache that's probably caching stuff relevant to this core.
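As mentioned in the list above, a dependent pointer-chasing microbenchmark is one way to make these cache levels visible from software: chase a random cyclic permutation through buffers of increasing size, and the average latency per load steps up as the working set falls out of L1, then L2, then L3. This is only a sketch; the buffer sizes, iteration count, and use of rand() are arbitrary choices, and it assumes a POSIX clock_gettime:

```c
/* Sketch: measure average load-to-use latency via dependent pointer chasing.
 * Sizes and iteration counts are illustrative, not tuned. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double chase_ns_per_load(size_t n_elems, size_t iters) {
    size_t *next = malloc(n_elems * sizeof *next);
    if (!next) return -1.0;
    /* Build a random cyclic permutation (Sattolo's algorithm) so the
     * hardware prefetcher can't predict the next address. */
    for (size_t i = 0; i < n_elems; i++) next[i] = i;
    for (size_t i = n_elems - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;      /* assumes RAND_MAX is large */
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }
    volatile size_t idx = 0;                /* keeps each load dependent on the last */
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t k = 0; k < iters; k++) idx = next[idx];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    free(next);
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    return ns / (double)iters;
}

int main(void) {
    /* Working sets chosen to straddle a typical 32 KiB L1d, 256 KiB L2,
     * and multi-MB L3 on mainstream Intel parts. */
    size_t kib[] = {16, 128, 1024, 32768};
    for (size_t i = 0; i < sizeof kib / sizeof kib[0]; i++) {
        size_t n = kib[i] * 1024 / sizeof(size_t);
        printf("%6zu KiB: %5.1f ns/load\n", kib[i], chase_ns_per_load(n, 10000000));
    }
    return 0;
}
```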

Will there be any problems in allowing any processor to access another
processor's cache memory?

  1. Yes -- there simply aren't wires connecting the various CPU caches to the other cores. If a core wants to access data in another core's cache, the only data path through which it can do so is the system bus.

A very important related issue is the cache coherency problem. Consider the following: suppose one CPU core has a particular memory location in its cache, and it writes to that memory location. Then, another core reads that memory location. How do you ensure that the second core sees the updated value? That is the cache coherency problem.

The normal solution is the MESI protocol, or a variation on it. Intel uses MESIF.
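To make that coherence guarantee concrete in code: C11 atomics with release/acquire ordering express exactly the scenario above, and it's the hardware's MESI/MESIF machinery that actually carries the updated cache line from the writing core to the reading core. A minimal sketch using POSIX threads (compile with -pthread; the variable names are just illustrative):

```c
/* Sketch: one core writes, another reads; cache coherence (MESI/MESIF)
 * plus release/acquire ordering guarantees the reader sees the update. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int payload;                 /* plain shared data */
static atomic_int ready;            /* flag guarding the payload */

static void *writer(void *arg) {
    (void)arg;
    payload = 42;                   /* dirties this core's cached line */
    /* Release store: the write to payload is visible before ready == 1 is. */
    atomic_store_explicit(&ready, 1, memory_order_release);
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, writer, NULL);
    /* Acquire loads: each miss pulls the line from the owning core / L3,
     * driven by the coherence protocol, not by any "cross-cache" read API. */
    while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
        ;                           /* spin until the updated line arrives */
    printf("payload = %d\n", payload);   /* prints 42, never a stale value */
    pthread_join(t, NULL);
    return 0;
}
```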

烂柯人 2024-07-30 06:24:42

Quick answers:
1) Yes. 2) No, though it may depend on which memory instance/resource you are referring to; data may exist in several locations at the same time. 3) Yes.

For a full-length explanation of the issue you should read the nine-part article "What every programmer should know about memory" by Ulrich Drepper (http://lwn.net/Articles/250967/); it gives the full picture of the issues you seem to be inquiring about, in good and accessible detail.

つ低調成傷 2024-07-30 06:24:42

To answer your first question: I know the Core 2 Duo has a two-tier caching system, in which each processor has its own first-level cache, and they share a second-level cache. This helps with both data synchronization and utilization of memory.

To answer your second question, I believe your assumption to be correct. If the processors were able to access each other's caches, there would obviously be fewer cache misses, as there would be more data for the processors to choose from. Consider, however, shared cache: in the case of the Core 2 Duo, having a shared cache allows programmers to place commonly used variables safely in this environment so that the processors will not have to access their individual first-level caches.

To answer your third question, there could potentially be a problem with accessing other processors' cache memory, which relates to the "single write, multiple read" principle: we can't allow more than one process to write to the same location in memory at the same time.
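A practical consequence of this for programmers is false sharing: two threads writing different variables that happen to sit in the same cache line will ping-pong that line between cores under the coherence protocol, even though no datum is actually shared. A common mitigation, sketched below, is to pad or align per-thread data to separate lines; the 64-byte line size is a typical x86 value, not a guarantee:

```c
/* Sketch: align per-thread counters to separate cache lines so writes by
 * different cores don't contend for the same line (false sharing).
 * 64 bytes is a typical x86 line size; not guaranteed everywhere. */
#include <pthread.h>
#include <stdio.h>

#define LINE 64

struct padded_counter {
    _Alignas(LINE) long count;      /* each counter gets its own line */
};

static struct padded_counter counters[2];

static void *bump(void *arg) {
    struct padded_counter *c = arg;
    for (long i = 0; i < 100000000; i++)
        c->count++;                 /* no line ping-pong with the other thread */
    return NULL;
}

int main(void) {
    pthread_t t[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, bump, &counters[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    printf("%ld %ld\n", counters[0].count, counters[1].count);
    return 0;
}
```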

For more info on the Core 2 Duo, read this neat article:

http://software.intel.com/en-us/articles/software-techniques-for-shared-cache-multi-core-systems/
