What is the most accurate way to measure elapsed time on a modern PC?

Posted 2024-09-25 07:23:54


I know I can use IRQ0, which is the system timer, but this is based on a 14.31818 MHz clock, right? Is there anything offering greater precision?

Thanks.

Edit: Does anyone know what the Windows function QueryPerformanceCounter uses?


Comments (2)

﹏半生如梦愿梦如真 2024-10-02 07:23:54


"Precision" and "accuracy" mean different things. "The Earth's circumference is 40000.000000000 km" is precise, but not accurate. It's a bit more complicated with clocks:

  • Resolution: time between ticks, or period of ticks. (You could probably call it "precision", but I think "resolution" has a more obvious meaning.)
  • Skew: relative difference between nominal and actual clock frequency (ish).
  • Drift: rate of change of skew (due to aging, temperature, ...).
  • Jitter: random variation in tick timing.
  • Latency: how long it takes to get a timestamp.

Even though the "system timer" (PIT according to Wikipedia) runs at 1.something MHz, you generally get IRQ0 somewhere between 100 and 1000 Hz. Apparently you can also read from port 0x40 twice to get the current counter value, but I'm not sure what kind of latency this has (and then you get the number of counts until the next interrupt, so you need to do some math). It also doesn't work on more modern "tickless" kernels.
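For illustration, here is a minimal sketch of that two-read sequence, assuming an x86 Linux process that has been granted port access with ioperm (so it needs root); the exact meaning of the count depends on how the kernel programmed the PIT, and on a tickless kernel the channel may not be running at all:

    /* Minimal sketch: latch and read the PIT channel-0 counter on x86.
       Assumes Linux with I/O port privileges via ioperm (run as root). */
    #include <stdio.h>
    #include <sys/io.h>   /* outb/inb; x86 Linux only */

    int main(void)
    {
        if (ioperm(0x40, 4, 1) != 0) {   /* need ports 0x40-0x43 */
            perror("ioperm");
            return 1;
        }
        outb(0x00, 0x43);                 /* control word: latch counter 0 */
        unsigned lo = inb(0x40);          /* low byte first */
        unsigned hi = inb(0x40);          /* then high byte */
        unsigned count = (hi << 8) | lo;  /* counts down toward the next IRQ0 */
        printf("PIT channel 0 counter: %u\n", count);
        return 0;
    }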

There are a few other high-frequency timers:

  • Local APIC, which is based on the bus frequency and a power-of-2 divider. I can't find any documentation on how to read it though (presumably it's an I/O port?).
  • ACPI power management timer (acpi_pm in Linux, I think, and the /UsePMTimer Windows boot flag), which is about 3.58 MHz according to this. IIRC, reading it is a bit expensive.
  • HPET, which is at least 10 MHz according to the same link (but it can be higher). It's also supposed to have lower latency than the ACPI PM timer.
  • TSC (with caveats). Almost certainly the lowest latency, and probably the highest frequency as well. (But apparently it can go up by more than 1 every "tick", so the counts-per-second isn't necessarily the same as the resolution.) See the sketch just after this list.
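As a rough illustration of the TSC option, here is a minimal sketch using the __rdtsc intrinsic (GCC/Clang on x86; MSVC has the same intrinsic in <intrin.h>). Note that rdtsc is not a serializing instruction, and converting ticks to seconds requires the TSC frequency, which this sketch deliberately does not try to determine:

    /* Minimal sketch: timing a region with the TSC via __rdtsc. */
    #include <stdio.h>
    #include <x86intrin.h>   /* __rdtsc on GCC/Clang */

    int main(void)
    {
        unsigned long long start = __rdtsc();
        /* ... work being measured ... */
        unsigned long long end = __rdtsc();
        /* Raw tick count; dividing by the TSC frequency gives seconds. */
        printf("elapsed: %llu TSC ticks\n", end - start);
        return 0;
    }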

Darwin (i.e. OS X) appears to assume that the TSC frequency does not change, and adjusts the base value added to it when waking up from a sleep state where the TSC is not running (apparently C4 and greater). There's a different base value per CPU, because the TSC need not be synchronized across CPUs. You have to put in a reasonable amount of effort to get a sensible timestamp.
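On Darwin that effort is wrapped up for you: the documented way to get the monotonic count is mach_absolute_time(), with mach_timebase_info() supplying the ratio that converts ticks to nanoseconds (that it is TSC-backed as described above is this answer's inference, not something the API promises). A minimal sketch:

    /* Minimal sketch: elapsed time on Darwin via mach_absolute_time(). */
    #include <stdio.h>
    #include <stdint.h>
    #include <mach/mach_time.h>

    int main(void)
    {
        mach_timebase_info_data_t tb;
        mach_timebase_info(&tb);            /* numer/denom: ticks -> ns */

        uint64_t start = mach_absolute_time();
        /* ... work being measured ... */
        uint64_t end = mach_absolute_time();

        uint64_t ns = (end - start) * tb.numer / tb.denom;
        printf("elapsed: %llu ns\n", (unsigned long long)ns);
        return 0;
    }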

IIRC, Linux just picks a single clock source (TSC, if it's sane, and then HPET, and then ACPI PM, I think).
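If you want to see which source Linux picked, modern kernels expose it in sysfs; here is a minimal sketch, assuming the standard sysfs path, that prints the current clock source and then takes a timestamp through clock_gettime(CLOCK_MONOTONIC), which is backed by that source:

    /* Minimal sketch: report the Linux clock source and read it. */
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    int main(void)
    {
        char name[32] = "unknown";
        FILE *f = fopen(
            "/sys/devices/system/clocksource/clocksource0/current_clocksource",
            "r");
        if (f) {
            if (fscanf(f, "%31s", name) != 1)   /* e.g. "tsc", "hpet", "acpi_pm" */
                strcpy(name, "unknown");
            fclose(f);
        }

        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        printf("clocksource: %s, monotonic: %ld.%09ld s\n",
               name, (long)ts.tv_sec, ts.tv_nsec);
        return 0;
    }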

IIRC, QueryPerformanceCounter() uses whatever Windows thinks is best. It depends somewhat on Windows version too (XP supposedly doesn't support HPET for interrupts, so presumably it doesn't for timestamps either). You can call QueryPerformanceFrequency() to make a guess (I get 1995030000, which probably means it's the TSC).
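A minimal sketch of the usual QueryPerformanceCounter/QueryPerformanceFrequency pattern; the printed frequency is the same number you would use for the guess described above:

    /* Minimal sketch: elapsed time via QPC on Windows. */
    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);   /* ticks per second */

        QueryPerformanceCounter(&t0);
        Sleep(100);                         /* stand-in for work being measured */
        QueryPerformanceCounter(&t1);

        double seconds = (double)(t1.QuadPart - t0.QuadPart)
                       / (double)freq.QuadPart;
        printf("QPC frequency: %lld Hz, elapsed: %.6f s\n",
               (long long)freq.QuadPart, seconds);
        return 0;
    }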

早茶月光 2024-10-02 07:23:54


Intel processors usually have high-precision timer information available via the rdtsc instruction.

It has much higher precision than 14 MHz¹. The caveat is that it can have issues on multi-core and speed-stepping processors.
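As a rough illustration of one common workaround for the multi-core caveat, here is a sketch that pins the measuring thread to a single CPU so that both reads come from the same core's TSC. The pinning call (sched_setaffinity) is Linux-specific and my own choice for the example, not something from the answer; the speed-stepping caveat still applies on older processors:

    /* Minimal sketch: pin to one CPU before using rdtsc, so both
       timestamps come from the same core's TSC. Linux-specific. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <x86intrin.h>

    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);                       /* stay on CPU 0 */
        if (sched_setaffinity(0, sizeof set, &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }

        unsigned long long start = __rdtsc();
        /* ... work being measured ... */
        unsigned long long end = __rdtsc();
        printf("elapsed: %llu TSC ticks (single core)\n", end - start);
        return 0;
    }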

Edit: This question has a lot more detail on this subject.



1. The actual frequency depends on the processor, but it is often the processor frequency. Apparently on Nehalem processors the TSC runs at the front-side bus frequency (133 MHz).
