In what applications does a cache offer no advantage at all?

Posted 12-07 17:33

Our professor asked us to think of an embedded system design where caches cannot be used to their full advantage. I have been trying to find such a design but could not find one yet. If you know such a design, can you give a few tips?

Comments (5)

蘸点软妹酱2024-12-14 17:33:42

Think about how a cache works. For example, if you want to defeat a cache, then depending on the cache you might try having your often-accessed data at 0x10000000, 0x20000000, 0x30000000, 0x40000000, etc. It takes very little data at each location to cause cache thrashing and a significant performance loss.
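
As a rough illustration, here is a minimal C sketch of that kind of access pattern. It assumes a hypothetical 32 KiB, 4-way set-associative cache with 64-byte lines, so addresses a multiple of 8 KiB apart land in the same set; the constants are invented for the example and would have to match the real cache to have the intended effect.

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical cache geometry for the sketch: 32 KiB, 4-way, 64-byte lines.
 * Addresses that differ by a multiple of the way size (32 KiB / 4 = 8 KiB)
 * map to the same set, so touching more such addresses than there are ways
 * evicts a freshly loaded line on almost every access (thrashing). */
#define WAY_SIZE  (8u * 1024u)   /* assumed cache size / associativity       */
#define CONFLICTS 16u            /* more conflicting addresses than ways (4) */

int main(void)
{
    unsigned char *buf = calloc(CONFLICTS, WAY_SIZE);
    if (buf == NULL)
        return 1;

    unsigned long sum = 0;
    for (unsigned iter = 0; iter < 100000u; iter++) {
        /* Only one byte is read per location, but every read lands in the
         * same cache set and throws out a line that was just brought in. */
        for (unsigned i = 0; i < CONFLICTS; i++)
            sum += buf[(size_t)i * WAY_SIZE];
    }

    printf("sum = %lu\n", sum);
    free(buf);
    return 0;
}
```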

Another one is that caches generally pull in a "cache line": a single instruction fetch may cause 8, 16, or more bytes or words to be read. Any situation where, on average, you use only a small percentage of a cache line before it is evicted to bring in another one will make your performance, with the cache enabled, go down.
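
A sketch of that effect, assuming 4-byte ints, 64-byte cache lines, and a data cache much smaller than the array (all assumptions for the example): the column-order walk touches a new line on every access and uses only 4 of its 64 bytes, while the row-order walk consumes each fetched line completely.

```c
/* Row-major array; the sizes are chosen only so that the array is far
 * larger than a typical data cache. */
#define ROWS 1024
#define COLS 1024
static int grid[ROWS][COLS];

long sum_by_columns(void)   /* strides COLS * sizeof(int) bytes per access:
                               a new cache line each time, mostly unused    */
{
    long sum = 0;
    for (int c = 0; c < COLS; c++)
        for (int r = 0; r < ROWS; r++)
            sum += grid[r][c];
    return sum;
}

long sum_by_rows(void)      /* same arithmetic, but every byte of each
                               fetched line is used before it is evicted    */
{
    long sum = 0;
    for (int r = 0; r < ROWS; r++)
        for (int c = 0; c < COLS; c++)
            sum += grid[r][c];
    return sum;
}
```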

In general you have to first understand your cache, then come up with ways to defeat the performance gain, then think about any real-world situations that would cause that. Not all caches are created equal, so there is no one good or bad habit or attack that will work for all caches. The same goes for the same cache with different memories behind it, or with a different processor, memory interface, or memory cycle timing in front of it. You also need to think of the system as a whole.

EDIT:

Perhaps I answered the wrong question. "Not ... to their full advantage": that is a much simpler question. In what situations does the embedded application have to touch memory beyond the cache (after the initial fill)? Going to main memory wipes out the word "full" in "full advantage", IMO.

泛泛之交2024-12-14 17:33:42

Caching does not offer an advantage, and is actually a hindrance, in controlling memory-mapped peripherals. Things like coprocessors, motor controllers, and UARTs often appear as just another memory location in the processor's address space. Instead of simply storing a value, those locations can cause something to happen in the real world when written to or read from.

Cache causes problems for these devices because when software writes to them, the peripheral doesn't immediately see the write. If the cache line never gets flushed, the peripheral may never actually receive a command even after the CPU has sent hundreds of them. If writing 0xf0 to 0x5432 was supposed to cause the #3 spark plug to fire, or the right aileron to tilt down 2 degrees, then the cache will delay or stop that signal and cause the system to fail.

Similarly, the cache can prevent the CPU from getting fresh data from sensors. The CPU reads repeatedly from the address, and the cache keeps sending back the value that was there the first time. On the other side of the cache, the sensor waits patiently for a query that will never come, while the software on the CPU frantically adjusts controls that do nothing to correct gauge readings that never change.
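
For completeness, this is roughly how such a register is usually reached from C. The addresses and bit meanings below are invented for the example; on real hardware they come from the SoC's memory map, and the peripheral region also has to be mapped as non-cacheable (or the cache explicitly flushed/invalidated), since `volatile` only stops the compiler from optimizing the accesses away, not the hardware cache from intercepting them.

```c
#include <stdint.h>

/* Hypothetical register addresses, for illustration only. */
#define ACTUATOR_CMD   (*(volatile uint32_t *)0x40001000u)
#define SENSOR_STATUS  (*(volatile uint32_t *)0x40001004u)
#define SENSOR_DATA    (*(volatile uint32_t *)0x40001008u)

uint32_t fire_and_read(void)
{
    ACTUATOR_CMD = 0xF0u;                 /* must reach the device, not sit
                                             in a dirty cache line          */
    while ((SENSOR_STATUS & 0x1u) == 0u)  /* each poll must come from the
                                             device, not a stale copy       */
        ;
    return SENSOR_DATA;
}
```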

浅浅淡淡2024-12-14 17:33:42

In addition to the almost-complete answer by Halst, I would like to mention one additional case where caches may be far from an advantage. If you have a multi-core SoC where all cores, of course, have their own cache(s), then depending on how the program code utilizes these cores the caches can be very ineffective. This may happen if, for example, due to incorrect design or program specifics (e.g. multi-core communication), some data block in RAM is used concurrently by 2 or more cores.
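
One concrete form of this is usually called false sharing. The following is a small POSIX-threads sketch of it (the 64-byte line size and the iteration counts are assumptions for the example): each thread touches only its own counter, yet because both counters live in the same cache line, every write forces the line to migrate between the cores' private caches through the coherence protocol.

```c
#include <pthread.h>
#include <stdint.h>

struct counters {
    uint64_t a;              /* written only by thread 0                    */
    uint64_t b;              /* written only by thread 1, but in the same
                                (assumed 64-byte) cache line as 'a'         */
    /* A common fix is padding so each field owns a whole line, e.g.
     * insert  char pad[64 - sizeof(uint64_t)];  between the two fields.    */
};

static struct counters shared;

static void *bump_a(void *arg)
{
    (void)arg;
    for (long i = 0; i < 50000000L; i++)
        shared.a++;
    return NULL;
}

static void *bump_b(void *arg)
{
    (void)arg;
    for (long i = 0; i < 50000000L; i++)
        shared.b++;
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, bump_a, NULL);
    pthread_create(&t1, NULL, bump_b, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return 0;
}
```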

马蹄踏│碎落叶2024-12-14 17:33:41

Caches exploit the fact that data (and code) exhibit locality.

So an embedded system which does not exhibit locality will not benefit from a cache.

Example:

An embedded system has 1 MB of memory and 1 kB of cache.
If this embedded system accesses memory with short jumps, it will stay for a long time in the same 1 kB area of memory, which can be cached successfully.
If this embedded system frequently jumps between distant places inside this 1 MB, then there is no locality and the cache will be used badly.

Also note that, depending on the architecture, you can have separate caches for data and code, or a single unified one.

More specific example:

If your embedded system spends most of its time accessing the same data and (e.g.) running in a tight loop that fits in cache, then you are using the cache to its full advantage.
If your system is something like a database that will be fetching random data from any memory range, then the cache cannot be used to its full advantage, because the application is not exhibiting locality of data/code. (A short sketch contrasting the two cases follows below.)
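
A minimal sketch of that contrast (the array size is only chosen to be much larger than a small cache, and the LCG is just a cheap stand-in for "random" indices):

```c
#include <stddef.h>

#define N (1u << 20)              /* 1M ints: far larger than a small cache  */
static int data[N];

long sum_sequential(void)         /* tight loop, good locality: each fetched
                                     cache line is fully used                */
{
    long s = 0;
    for (size_t i = 0; i < N; i++)
        s += data[i];
    return s;
}

long sum_random(unsigned seed)    /* database-like access: almost every read
                                     hits a different, soon-evicted line     */
{
    long s = 0;
    for (size_t i = 0; i < N; i++) {
        seed = seed * 1664525u + 1013904223u;   /* simple LCG, illustrative  */
        s += data[seed % N];
    }
    return s;
}
```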

Another, somewhat weird example:

Sometimes, if you are building a safety-critical or mission-critical system, you will want your system to be highly predictable. Caches make your code execution very unpredictable, because you cannot predict whether a given memory location is cached or not, so you do not know how long it will take to access it. Disabling the cache therefore lets you judge your program's performance more precisely and calculate the worst-case execution time. That is why it is common to disable the cache in such systems.

給妳壹絲溫柔2024-12-14 17:33:41

I do not know what your background is, but I suggest reading about what the "volatile" keyword does in the C language.
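
A minimal illustration of what the keyword changes (the ISR/flag setup is assumed for the example): without `volatile` the compiler may read the flag once, keep it in a register, and spin forever. Note that `volatile` only constrains the compiler; it does not by itself make an access bypass a hardware data cache.

```c
#include <stdint.h>

static volatile uint8_t ready;   /* set from an interrupt handler,
                                    polled by the main loop           */

void wait_for_event(void)
{
    while (ready == 0u)          /* volatile forces a fresh read of the
                                    flag on every pass of the loop    */
        ;
    ready = 0u;                  /* acknowledge / re-arm               */
}
```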
