Measuring the size and associativity of the L1 and L2 caches
How can I programmatically measure (not query the OS for) the size and associativity of the L1 and L2 data caches?
Assumptions about the system:
- It has L1 and L2 caches (there may be an L3 too, and caches may be shared),
- It may have a hardware prefetch unit (as on the P4 and later),
- It has a stable clock source (a tick counter, or a good HPET behind gettimeofday); a portable timing shim is sketched after this list.
There are no assumptions about the OS (it can be Linux, Windows, or something else), and we can't use POSIX queries.
The language is C, and compiler optimizations may be disabled.
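As a starting point for the measurements discussed below, here is a minimal timing shim, a sketch assuming only the clock sources the question allows (a tick counter via QueryPerformanceCounter on Windows, gettimeofday elsewhere). The helper name now_ns is illustrative, not a standard API.

```c
/* Minimal timing shim, a sketch assuming only the clock sources the
   question allows. now_ns() is an illustrative name, not a real API. */
#ifdef _WIN32
#include <windows.h>
static double now_ns(void)
{
    LARGE_INTEGER t, f;
    QueryPerformanceCounter(&t);    /* tick counter value */
    QueryPerformanceFrequency(&f);  /* ticks per second   */
    return (double)t.QuadPart * 1e9 / (double)f.QuadPart;
}
#else
#include <sys/time.h>
static double now_ns(void)
{
    struct timeval tv;              /* HPET-backed gettimeofday */
    gettimeofday(&tv, 0);
    return tv.tv_sec * 1e9 + tv.tv_usec * 1e3;
}
#endif
```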
Comments (4)
I think all you need to do is repeatedly access memory in ever-increasing chunks (to determine cache size), and vary the stride to determine associativity.
So you would start out accessing very short segments of memory and keep doubling the size until access slows down. Every time access slows down, you've found the size of another level of cache.
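A minimal sketch of that doubling idea, assuming a 64-byte cache line; MAX_SIZE, STRIDE, and ACCESSES are arbitrary illustrative constants. The buffer is walked through a randomly shuffled pointer cycle so that each load depends on the previous one; with a plain sequential stride, the hardware prefetcher the question mentions would flatten the steps.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define MAX_SIZE (8u * 1024 * 1024)  /* probe working sets up to 8 MiB */
#define STRIDE   64                  /* assumed cache-line size        */
#define ACCESSES (1 << 24)           /* dependent loads per size       */

int main(void)
{
    char *buf = malloc(MAX_SIZE);
    size_t size, i;

    for (size = 4 * 1024; size <= MAX_SIZE; size *= 2) {
        size_t nlines = size / STRIDE;
        size_t *order = malloc(nlines * sizeof *order);

        /* Visit the lines in random order (Fisher-Yates shuffle) and
           link them into one cycle of pointers, one per cache line, so
           every load depends on the previous one and prefetch can't help. */
        for (i = 0; i < nlines; i++)
            order[i] = i;
        for (i = nlines - 1; i > 0; i--) {
            size_t j = (size_t)rand() % (i + 1);
            size_t tmp = order[i]; order[i] = order[j]; order[j] = tmp;
        }
        for (i = 0; i < nlines; i++)
            *(char **)(buf + order[i] * STRIDE) =
                buf + order[(i + 1) % nlines] * STRIDE;

        char **p = (char **)(buf + order[0] * STRIDE);
        clock_t t0 = clock();
        for (long n = 0; n < ACCESSES; n++)
            p = (char **)*p;         /* chase the chain */
        clock_t t1 = clock();

        printf("%6zu KiB: %6.2f ns/load  (%p)\n", size / 1024,
               (t1 - t0) * 1e9 / CLOCKS_PER_SEC / ACCESSES,
               (void *)p);           /* print p so the loop isn't elided */
        free(order);
    }
    free(buf);
    return 0;
}
```

Each time the working set outgrows a cache level, the printed ns/load figure should step up; the sizes just before the steps are the candidate L1 and L2 capacities.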
Here is the code from ATLAS; it is for the L1 cache size:
ATLAS/tune/sysinfo/L1CacheSize.c
(https://github.com/vtjnash/atlas-3.10.0/blob/master/tune/sysinfo/L1CacheSize.c)
But it covers only the L1 cache, and only its size, not the way count.
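To fill that gap, here is a hedged sketch of one classic way-count probe: lines spaced exactly CACHE_SIZE bytes apart contend for the same cache set, so cycling over k of them stays fast while k is at most the associativity, and slows sharply at k = ways + 1 under LRU replacement. CACHE_SIZE, MAX_WAYS, and REPS are illustrative assumptions, and on a physically indexed cache 4 KiB paging can scatter the lines and blur the step.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define CACHE_SIZE (32 * 1024)  /* assumed L1 size from the size probe */
#define MAX_WAYS   16
#define REPS       (1 << 24)

int main(void)
{
    /* k blocks spaced CACHE_SIZE apart all index the same cache set. */
    volatile char *buf = malloc((size_t)CACHE_SIZE * (MAX_WAYS + 1));
    int k;

    for (k = 1; k <= MAX_WAYS; k++) {
        long n;
        clock_t t0 = clock();
        for (n = 0; n < REPS; n++)                 /* round-robin over the */
            buf[(size_t)(n % k) * CACHE_SIZE]++;   /* k conflicting lines  */
        clock_t t1 = clock();
        /* Under LRU, k <= ways hits after warm-up; at k = ways + 1 every
           access misses, so look for the first big jump in the timings. */
        printf("%2d conflicting lines: %6.2f ns/access\n", k,
               (t1 - t0) * 1e9 / CLOCKS_PER_SEC / REPS);
    }
    free((void *)buf);
    return 0;
}
```

Repeating the probe with CACHE_SIZE set to the measured L2 capacity extends it to L2, though shared and physically indexed outer caches make the jump noisier there.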
You might find the STREAM benchmark useful or interesting or both.
The question is a little outdated, but the answer is here.