4 questions about processor architecture (Computer Engineering)

Posted on 2024-10-07 17:54:08


Our teacher has asked us around 50 true-or-false questions in preparation for our final exam. I could find an answer for most of them online or by asking relatives. However, these four questions are driving me crazy. Most of them aren't that hard; I just can't find a satisfying answer anywhere. Sorry, the original questions weren't written in English, so I had to translate them myself. If anything is unclear, please tell me.
Thanks!

True or false

  1. The size of the address manipulated by the processor determines the size of the virtual memory. However, the size of the memory cache is independent.
  2. For a long time, DRAM technology stayed incompatible with the CMOS technology used for the standard logic in processors. This is the reason DRAM memory is (most of the time) placed outside the processor (on a different chip).
  3. Pagination lets multiple virtual addressing spaces correspond to the same physical addressing space.
  4. An associative cache memory with sets of 1 line is an entirely associative cache memory, because one memory block can go in any set, since each set is the same size as a block.


自我难过 2024-10-14 17:54:08

  1. "Manipulated address" is not a term of art. You have an m-bit virtual address mapping to an n-bit physical address. Yes, a cache may be of any size up to the physical address size, but it is typically much smaller. Note that cache lines are tagged with virtual or, more typically, physical address bits corresponding to the maximum virtual or physical address range of the machine.

  2. Yes, DRAM processes and logic processes are each tuned for different objectives and involve different process steps (different materials and thicknesses to lay down DRAM capacitor stacks/trenches, for example), and historically you haven't built processors in DRAM processes (except the Mitsubishi M32RD), nor DRAM in logic processes. The exception is so-called eDRAM, which IBM likes to use in their SOI processes and which serves as last-level cache in IBM microprocessors such as the Power 7.

  3. "Pagination" is what we call issuing a form feed so that text output begins at the top of the next page. "Paging," on the other hand, is sometimes a synonym for virtual memory management, by which a virtual address is mapped (on a page-by-page basis) to a physical address. If you set up your page tables just so, it allows multiple virtual addresses (indeed, virtual addresses from different processes' virtual address spaces) to map to the same physical address and hence the same location in real RAM.

  4. "An associative cache memory with sets of 1 line is an entirely associative cache memory, because one memory block can go in any set, since each set is the same size as a block."

Hmm, that's a strange question. Let's break it down. 1) You can have a direct-mapped cache, in which an address maps to only one cache line. 2) You can have a fully associative cache, in which an address can map to any cache line; there is something like a CAM (content-addressable memory) tag structure to find which line, if any, matches the address. Or 3) you can have an n-way set-associative cache, in which you have, essentially, n sets of direct-mapped caches, and a given address can map to one of n lines. There are other, more esoteric cache organizations, but I doubt you're being taught them.

So let's parse the statement. "An associative cache memory". Well, that rules out direct-mapped caches. So we're left with "fully associative" and "n-way set associative". It has sets of 1 line. OK, so if it is set associative, then instead of something traditional like 4 ways x 64 lines/way, it is n ways x 1 line/way. In other words, it is fully associative. I would say this is a true statement, except the term of art is "fully associative", not "entirely associative".
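To make the three organizations concrete, here is a minimal toy model (illustrative names and parameters, not any real CPU's cache): a set-associative cache groups its lines into sets, and the two extremes fall out of the same formula.

```python
# Toy model: a set-associative cache parameterized by total lines and ways.
#   ways = 1          -> direct mapped (one candidate line per block)
#   ways = num_lines  -> a single set -> fully associative
# Real caches also track tags, valid bits, replacement state, etc.

def possible_lines(block_number, num_lines, ways):
    """Return the cache lines a memory block may occupy."""
    num_sets = num_lines // ways          # lines are grouped into sets
    set_index = block_number % num_sets   # a block maps to exactly one set...
    # ...and may go in any of the `ways` lines of that set
    return [set_index * ways + w for w in range(ways)]

NUM_LINES = 8
# Direct mapped: exactly one candidate line
assert possible_lines(13, NUM_LINES, ways=1) == [5]
# Fully associative (one set of 8 lines): any line is a candidate
assert len(possible_lines(13, NUM_LINES, ways=NUM_LINES)) == NUM_LINES
# 2-way set associative: two candidate lines
assert len(possible_lines(13, NUM_LINES, ways=2)) == 2
```

With n ways and 1 line per way there is only a single set, so every block's candidate list covers the whole cache, which is exactly the fully associative case.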

Makes sense?

Happy hacking!
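The paging point in (3) above can be sketched with two toy page tables (the page and frame numbers are made up for illustration): both processes map a virtual page onto the same physical frame, so a store through one mapping is visible through the other.

```python
# Toy page tables: two virtual pages (in different address spaces)
# mapped onto the same physical frame.

PAGE_SIZE = 4096
phys_mem = bytearray(16 * PAGE_SIZE)    # pretend RAM: 16 frames

page_table_a = {2: 5}   # process A: virtual page 2 -> physical frame 5
page_table_b = {7: 5}   # process B: virtual page 7 -> same frame 5

def translate(page_table, vaddr):
    """Translate a virtual address page by page, as an MMU would."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    return page_table[vpn] * PAGE_SIZE + offset

# Process A writes through its mapping...
phys_mem[translate(page_table_a, 2 * PAGE_SIZE + 100)] = 0x42
# ...and process B sees the same byte at a different virtual address.
assert phys_mem[translate(page_table_b, 7 * PAGE_SIZE + 100)] == 0x42
```

This is how shared memory and shared libraries work in practice: one physical copy, many virtual mappings.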

喜爱纠缠 2024-10-14 17:54:08

  1. True, more or less (it depends on the accuracy of your translation, I guess :) ). The number of bits in an address sets an upper limit on the virtual memory space; you could, of course, choose not to use all the bits. The size of the memory cache depends on how much actual memory is installed, which is independent; but of course, if you had more memory than you can address, the excess still couldn't be used.

  2. Almost certainly false. We have RAM on separate chips so that we can install more without building a whole new computer or replacing the CPU.
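The upper limit in (1) is just arithmetic: an m-bit virtual address can name at most 2^m bytes (assuming byte addressing). A quick sanity check:

```python
# An m-bit (byte-addressed) virtual address can name at most 2**m bytes.
def virtual_space_bytes(address_bits):
    return 2 ** address_bits

assert virtual_space_bytes(32) == 4 * 1024**3    # 32-bit: 4 GiB
# x86-64 with classic 4-level paging uses 48 of the 64 address bits:
assert virtual_space_bytes(48) == 256 * 1024**4  # 48-bit: 256 TiB
```

Which illustrates "choose not to use all the bits": a nominally 64-bit machine may implement a narrower virtual address.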

心凉怎暖 2024-10-14 17:54:08
  1. There is no a priori upper or lower limit to the cache size, though in a real application certain sizes make more sense than others, of course.
  2. I don't know of any incompatibility. The reason we use SRAM as on-die cache is that it's faster.
  3. Maybe you can force an MMU to map different virtual addresses to the same physical location, but usually it's used the other way around.
  4. I don't understand the question.
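On (3), you don't even need to force the MMU by hand: mapping the same file twice with `mmap` gives you two virtual addresses backed by the same physical page(s) in the OS page cache, so a write through one view shows up in the other (a minimal sketch; the temp-file setup is just scaffolding).

```python
# Two mmap() views of one file share the same physical page(s), so a
# store through one virtual address is visible at a different one.
import mmap, os, tempfile

fd, path = tempfile.mkstemp()
os.ftruncate(fd, mmap.PAGESIZE)          # one page of backing storage

view1 = mmap.mmap(fd, mmap.PAGESIZE)     # first virtual mapping
view2 = mmap.mmap(fd, mmap.PAGESIZE)     # second mapping, same pages

view1[0] = 0x55                          # write via the first mapping
observed = view2[0]                      # read via the second mapping
assert observed == 0x55

view1.close(); view2.close(); os.close(fd); os.unlink(path)
```

Shared memory between processes works the same way, just with the two mappings living in different address spaces.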