Why is LRU better than FIFO?

Posted on 2024-08-17 21:52:20

Why is Least Recently Used better than FIFO in relation to page files?

Comments (5)

等数载,海棠开 2024-08-24 21:52:20

If you mean in terms of offloading memory pages to disk - if your process is frequently accessing a page, you really don't want it to be paged to disk, even if it was the very first one you accessed. On the other hand, if you haven't accessed a memory page for several days, it's unlikely that you'll do so in the near future.

If that's not what you mean, please edit your question to give more details.

无所的.畏惧 2024-08-24 21:52:20

There is no single cache algorithm which will always do well because that requires perfect knowledge of the future. (And if you know where to get that...) The dominance of LRU in VM cache design is the result of a long history of measuring system behavior. Given real workloads, LRU works pretty well a very large fraction of the time. However, it isn't very hard to construct a reference string for which FIFO would have superior performance over LRU.
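
To make both claims concrete, here is an illustrative simulation (not part of the original answer; the function names and reference strings are just for the sketch) of the two policies on small reference strings. With three frames, Belady's classic string costs FIFO 9 faults and LRU 10, while a pattern of two hot pages interleaved with a cold stream favours LRU.

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement: evict the page loaded earliest."""
    order, resident, faults = deque(), set(), 0
    for page in refs:
        if page in resident:
            continue                           # hit; FIFO ignores re-references
        faults += 1
        if len(resident) == frames:
            resident.discard(order.popleft())  # evict the oldest-loaded page
        order.append(page)
        resident.add(page)
    return faults

def lru_faults(refs, frames):
    """Count page faults under LRU replacement: evict the page unused the longest."""
    cache, faults = OrderedDict(), 0
    for page in refs:
        if page in cache:
            cache.move_to_end(page)            # a hit refreshes recency
            continue
        faults += 1
        if len(cache) == frames:
            cache.popitem(last=False)          # evict the least recently used page
        cache[page] = None
    return faults

# Belady's classic string: with 3 frames, FIFO takes 9 faults and LRU takes 10.
belady = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print("Belady string  FIFO:", fifo_faults(belady, 3), " LRU:", lru_faults(belady, 3))

# Two hot pages interleaved with a cold stream: LRU keeps the hot pages resident,
# while FIFO keeps aging them out simply because they were loaded first.
hot_and_cold = [p for cold in range(3, 11) for p in (1, 2, cold)]
print("Hot/cold mix   FIFO:", fifo_faults(hot_and_cold, 3), " LRU:", lru_faults(hot_and_cold, 3))
```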

Consider a linear sweep through a large address space much larger than the available pageable real memory. LRU is based on the assumption that "what you've touched recently you're likely to touch again", but the linear sweep completely violates that assumption. This is why some operating systems allow programs to advise the kernel about their reference behavior - one example is "mark and sweep" garbage collection typified by classic LISP interpreters. (And a major driver for work on more modern GCs like "generational".)
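
As a hedged illustration of that kind of advice (not from the original answer): on POSIX systems, madvise(2) lets a process declare a sequential scan, and Python 3.8+ exposes it through mmap.madvise. The file name below is a placeholder.

```python
import mmap

# Hedged sketch (POSIX, Python 3.8+): madvise() tells the kernel that the process
# is about to do exactly the kind of linear sweep that defeats LRU, so the kernel
# can favour read-ahead and early reclaim instead of keeping swept pages resident.
# "large_input.bin" is a placeholder path, not something from the original post.
with open("large_input.bin", "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ)
    mm.madvise(mmap.MADV_SEQUENTIAL)      # "these pages will be touched once, in order"
    processed = 0
    while chunk := mm.read(1024 * 1024):  # one front-to-back pass over the mapping
        processed += len(chunk)           # stand-in for real per-chunk work
    mm.close()
print("bytes scanned:", processed)
```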

Another example is the symbol table in a certain antique macro processor (STAGE2). The binary tree is searched from the root for every symbol, and the string evaluation is done on a stack. It turned out that reducing the available page frames by "wiring down" the root page of the symbol tree and the bottom page of the stack made a huge improvement in the page fault rate. The cache was tiny, and it churned violently, always pushing out the two most frequently referenced pages because the cache was smaller than the inter-reference distance to those pages. So a small cache worked better, but ONLY because those two page frames stolen from the cache were used wisely.

The net of all this is that LRU is the standard answer because it's usually pretty good for real workloads on systems that aren't hideously overloaded (VM many times the real memory available), and that is supported by years of careful measurements. However, you can certainly find cases where alternative behavior will be superior. This is why measuring real systems is important.

白云悠悠 2024-08-24 21:52:20

Treat the RAM as a cache. In order to be an effective cache, it needs to keep the items most likely to be requested in memory.

LRU keeps the things that were most recently used in memory. FIFO keeps the things that were most recently added. LRU is, in general, more efficient, because there are generally memory items that are added once and never used again, and there are items that are added and used frequently. LRU is much more likely to keep the frequently-used items in memory.
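
As an illustrative sketch (the class and names below are not from the answer), the bookkeeping that separates LRU from FIFO is small: a read moves the entry back to the "young" end of the order, so frequently used items never drift toward the eviction end.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache sketch: evicts the least recently *used* entry,
    not (as FIFO would) the least recently *inserted* one."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._items = OrderedDict()          # ordered from coldest to hottest

    def get(self, key, default=None):
        if key not in self._items:
            return default
        self._items.move_to_end(key)         # a hit refreshes recency; FIFO skips this step
        return self._items[key]

    def put(self, key, value):
        if key in self._items:
            self._items.move_to_end(key)
        self._items[key] = value
        if len(self._items) > self.capacity:
            self._items.popitem(last=False)  # drop the coldest (least recently used) entry


cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")             # "a" is now the most recently used entry
cache.put("c", 3)          # evicts "b"; FIFO would have evicted "a", the oldest insert
print(list(cache._items))  # ['a', 'c']
```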

遮了一弯 2024-08-24 21:52:20

Depending on access patterns, FIFO can sometimes beat LRU. An Adaptive Replacement Cache is a hybrid that adapts its strategy based on actual usage patterns.

仲春光 2024-08-24 21:52:20

According to temporal locality of reference, memory that has been accessed recently is more likely to be accessed again soon.
