What is the overhead involved in a mode switch?

Published 2024-08-13 12:10:04


Many a time I read or hear the argument that making a lot of system calls is inefficient because the application makes a mode switch, i.e. goes from user mode to kernel mode, and after executing the system call switches modes again to resume execution in user mode.

My question is: what is the overhead of a mode switch? Does the CPU cache get invalidated, are TLB entries flushed, or what else happens that causes overhead?

Please note that I am asking about the overhead involved in a mode switch, not a context switch. I know that a mode switch and a context switch are two different things, and I am fully aware of the overhead associated with a context switch; what I fail to understand is what overhead is caused by a mode switch.

If possible, please provide some information about particular *nix platforms such as Linux, FreeBSD, or Solaris.

Regards

lali


Comments (2)

找个人就嫁了吧 2024-08-20 12:10:04


There should be no CPU cache or TLB flush on a simple mode switch.

A quick test tells me that, on my Linux laptop, it takes about 0.11 microseconds for a userspace process to complete a simple syscall that does an insignificant amount of work other than the switch to kernel mode and back. I'm using getuid(), which only copies a single integer from an in-memory struct. strace confirms that the syscall is repeated MAX times.

#include <unistd.h>
#define MAX 100000000
int main() {
  int ii;
  /* each iteration makes one full user->kernel->user round trip */
  for (ii = 0; ii < MAX; ii++) getuid();
  return 0;
}

This takes about 11 seconds on my laptop, measured using time ./testover, and 11 seconds divided by 100 million gives you 0.11 microseconds per call.

Technically, that's two mode switches, so I suppose you could claim that a single mode switch takes 0.055 microseconds, but a one-way switch isn't very useful, so I'd consider the there-and-back number to be the more relevant one.

維他命╮ 2024-08-20 12:10:04


There are many ways to do a mode switch on x86 CPUs (which I am assuming here). The legacy mechanisms are task gates and call gates: a task gate triggers a full hardware task switch (comparable in cost to a context switch), while a call gate changes privilege level without a task switch. Mainstream *nix kernels actually enter the kernel through a software interrupt (int 0x80 on older Linux) or the dedicated SYSENTER/SYSCALL instructions, which are considerably cheaper. Add to that a bit of processing before the call, the standard argument validation after entry, and the return, and that rounds out the bare minimum for a safe mode switch.

As for Eric's timing: I am not a Linux expert, but in most OSes I have dealt with, simple system calls cache data in user space (when it can be done safely) to avoid this overhead. It would seem to me that getuid() would be a prime candidate for such caching. Thus Eric's timing could be more a reflection of the overhead of pre-switch processing in user space than anything else.
