Determining cache misses for various filesystems

I've got a project for school where I have to find out how many cache misses a filesystem will have under heavy and light loads and on a multiprocessor machine. After discussing this with my professor, I came up with a basic plan of execution:

  1. Create a program which will bog down the filesystem and fill up the buffer cache.
  2. Use a system benchmarking tool to record the number of cache misses.
  3. Rinse and repeat with new conditions.

But being new to operating system design, I am unsure of how to proceed. So here are some points where I need some help:

  1. What actions would an ideal program perform to fill up the buffer cache? Currently, the program that I've written reads from and writes to several different files, x times (see the sketch after this list).
  2. What tools are there that record the number of cache misses? I have looked into oprofile, but I don't think it monitors the filesystem's buffer cache. I have found this list, however, which looks promising.
  3. Will other running processes affect these benchmarks?
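
For concreteness, here is a minimal sketch of the kind of program point 1 describes, assuming Linux and a pre-created ./cachetest scratch directory; the file count, file size, and pass count are placeholder values, not recommendations:

```c
/* Minimal page-cache filler sketch. Assumes Linux and an existing
 * ./cachetest scratch directory. NFILES, FILESIZE, and PASSES are
 * placeholder values; size them so NFILES * FILESIZE exceeds RAM. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NFILES   16
#define FILESIZE (64L * 1024 * 1024)   /* 64 MiB per file */
#define PASSES   4                     /* the "x times" from the question */
#define BUFSZ    (1024 * 1024)

int main(void)
{
    char path[64];
    char *buf = malloc(BUFSZ);
    if (!buf) return 1;
    memset(buf, 0xA5, BUFSZ);

    for (int pass = 0; pass < PASSES; pass++) {
        for (int i = 0; i < NFILES; i++) {
            snprintf(path, sizeof(path), "./cachetest/f%02d", i);

            /* Write phase: dirty pages fill the buffer/page cache. */
            int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);
            if (fd < 0) { perror("open"); return 1; }
            for (long off = 0; off < FILESIZE; off += BUFSZ)
                if (write(fd, buf, BUFSZ) != BUFSZ) { perror("write"); return 1; }
            close(fd);

            /* Read phase: hits the cache, or repopulates it after eviction. */
            fd = open(path, O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }
            while (read(fd, buf, BUFSZ) > 0)
                ;
            close(fd);
        }
    }
    free(buf);
    return 0;
}
```

If the total data written exceeds RAM, the page cache is forced to evict, so re-reads in later passes become misses. For the "rinse and repeat" step, running `echo 3 > /proc/sys/vm/drop_caches` as root between conditions gives each run a cold cache.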

Thanks for your help!

Comments (2)

拥抱没勇气 2024-08-19 07:48:44

1) If you are trying to test your filesystem performance, throw in several threads that are manipulating large amounts of file metadata alongside your I/O threads. Also, when doing I/O in several parallel threads, mix threads doing large-sized transfers and threads doing small-sized transfers. Many filesystems will coalesce small I/O operations together into larger requests that the physical drive can handle in a more time-efficient manner, and mixing I/O of various sizes may help fill up the cache faster (since it has to buffer the coalesced I/O).
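
As a rough illustration of this mix (not code from the original answer; the paths, transfer sizes, and thread counts are arbitrary placeholders), a pthreads sketch might look like:

```c
/* Mixed-load sketch: two large-transfer I/O threads, two small-transfer
 * I/O threads, and two metadata threads. Paths, sizes, and thread counts
 * are placeholders. Assumes an existing ./cachetest directory.
 * Build with: cc -pthread mixload.c -o mixload */
#include <fcntl.h>
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

static void *io_worker(void *arg)
{
    size_t bufsz = (size_t)(uintptr_t)arg;  /* transfer size: 4 KiB or 1 MiB */
    char path[80];
    char *buf = calloc(1, bufsz);
    if (!buf) return NULL;
    snprintf(path, sizeof(path), "./cachetest/io_%zu_%lu",
             bufsz, (unsigned long)pthread_self());  /* Linux-specific cast */

    for (int pass = 0; pass < 100; pass++) {
        int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); break; }
        for (int i = 0; i < 64; i++)
            if (write(fd, buf, bufsz) < 0) perror("write");
        lseek(fd, 0, SEEK_SET);
        while (read(fd, buf, bufsz) > 0)
            ;
        close(fd);
    }
    free(buf);
    return NULL;
}

static void *meta_worker(void *arg)
{
    /* Hammer metadata: create, stat, rename, unlink many small files. */
    char a[80], b[80];
    struct stat st;
    (void)arg;
    for (int pass = 0; pass < 100; pass++) {
        for (int i = 0; i < 256; i++) {
            snprintf(a, sizeof(a), "./cachetest/m%d", i);
            snprintf(b, sizeof(b), "./cachetest/m%d.r", i);
            close(open(a, O_CREAT | O_WRONLY, 0644));
            stat(a, &st);
            rename(a, b);
            unlink(b);
        }
    }
    return NULL;
}

int main(void)
{
    pthread_t t[6];
    pthread_create(&t[0], NULL, io_worker, (void *)(uintptr_t)(1024 * 1024));
    pthread_create(&t[1], NULL, io_worker, (void *)(uintptr_t)(1024 * 1024));
    pthread_create(&t[2], NULL, io_worker, (void *)(uintptr_t)4096);
    pthread_create(&t[3], NULL, io_worker, (void *)(uintptr_t)4096);
    pthread_create(&t[4], NULL, meta_worker, NULL);
    pthread_create(&t[5], NULL, meta_worker, NULL);
    for (int i = 0; i < 6; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```

The large and small writers stress the coalescing path described above, while the metadata threads churn through create/stat/rename/unlink cycles.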

2) Be careful with that list of tools: many look like they are designed to operate on raw devices and not through the filesystem layer (so the results you'd get might not represent what you think they do). If you are looking for a tool to benchmark a particular filesystem, your best bet may be to check with the development team for that filesystem. They can most likely point you to the tool that they used to benchmark their FS during development, even if it is a custom tool developed internally.

3) Yes, anything else that is running and might access the filesystem under test can potentially impact your results. You may want to create a separate filesystem to use only for this test and turn off any background scans that might try to access it while you are running your tests.

空宴 2024-08-19 07:48:44

That is an interesting question. Maybe I can give you a partial answer.

You should be aware that Linux has multiple caches related to file systems, and each may require different tools:

  • Inode cache
  • Dentry cache
  • Block cache

One way is to calculate (guess?) how much block-level traffic your operations should generate, and then measure the real block operations (reads, writes, seeks) with blktrace.
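
If full blktrace traces are more detail than you need, a lighter-weight option (my suggestion, not part of blktrace itself) is to diff the completed-I/O counters in /sys/block/<dev>/stat before and after a run; per the kernel's block-layer stat layout, the first field is reads completed and the fifth is writes completed. A sketch, with sdb as a placeholder for the device backing your test filesystem:

```c
/* Sketch: read completed block-I/O counts from /sys/block/<dev>/stat.
 * "sdb" is a placeholder; use the device backing your test filesystem.
 * Field 1 is reads completed, field 5 is writes completed.
 * Snapshot before and after a run and subtract. */
#include <stdio.h>

int main(void)
{
    unsigned long long rd_ios, rd_merges, rd_sectors, rd_ticks, wr_ios;
    FILE *f = fopen("/sys/block/sdb/stat", "r");
    if (!f) { perror("fopen"); return 1; }
    if (fscanf(f, "%llu %llu %llu %llu %llu",
               &rd_ios, &rd_merges, &rd_sectors, &rd_ticks, &wr_ios) != 5) {
        fprintf(stderr, "unexpected stat format\n");
        fclose(f);
        return 1;
    }
    fclose(f);
    printf("reads completed: %llu  writes completed: %llu\n", rd_ios, wr_ios);
    return 0;
}
```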

I am not aware of any way to read the cache miss state of the inode and dentry cache. I would really like to be told that I am wrong here.

The hard way is to annotate the inode cache and dentry cache with your own counters, but these caches are pretty hairy kernel code to modify.
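
One thing you can observe without patching the kernel is the occupancy of those caches via /proc/slabinfo, which shows churn during a run even though it is not a miss counter. A sketch, assuming root access (slabinfo is normally root-readable only) and an ext4 test filesystem; substitute your filesystem's inode slab name (e.g. xfs_inode) as needed:

```c
/* Sketch: print active/total object counts for the dentry slab and an
 * inode slab from /proc/slabinfo. Usually needs root. "ext4_inode_cache"
 * assumes an ext4 test filesystem; adjust for yours (e.g. xfs_inode).
 * This shows occupancy and churn, not cache misses. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[512], name[64];
    unsigned long active, total;
    FILE *f = fopen("/proc/slabinfo", "r");
    if (!f) { perror("fopen /proc/slabinfo"); return 1; }

    while (fgets(line, sizeof(line), f)) {
        if (sscanf(line, "%63s %lu %lu", name, &active, &total) != 3)
            continue;  /* header lines don't match this pattern */
        if (!strcmp(name, "dentry") || !strcmp(name, "ext4_inode_cache"))
            printf("%-20s active=%lu total=%lu\n", name, active, total);
    }
    fclose(f);
    return 0;
}
```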
