What do claims about clock precision/accuracy mean?

Posted on 2024-10-20 05:54:18

I've seen a lot of discussions of system clocks in which it's said that, for example, a standard PC clock under Windows is precise to only +/-10 ms, whereas clocks on a real-time system have sub-millisecond precision. But what do these claims mean? How significant this timing variability is depends entirely on the interval over which the clock is being measured. If two successive clock calls returned timestamps that differed by 10 ms, that would be a disaster, and fortunately this isn't the case; but if a clock only loses or gains 10 ms over the course of a month, that's virtually perfect timing for any practical purpose. To pose the question a different way: if I make two clock calls that are 1 second apart, what degree of inaccuracy could I expect on, say, a standard Windows PC, a real-time PC (e.g. QNX on a motherboard that supports it), and a Mac?
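
As a rough way to see what that "two clock calls 1 second apart" scenario actually reports, here is a minimal Java sketch (the class name OneSecondProbe is my own). Note that the discrepancy it prints mixes sleep/scheduler jitter with clock granularity, so it is an upper bound on the clock's own error over one second, not a pure drift measurement.

// Minimal sketch: compare what the wall clock and the monotonic timer report
// for an interval that is nominally one second. The printed difference bundles
// scheduler jitter with clock granularity, so treat it as an upper bound.
public class OneSecondProbe {
    public static void main(String[] args) throws InterruptedException {
        long wallStart = System.currentTimeMillis();   // wall-clock time, ms resolution
        long monoStart = System.nanoTime();            // monotonic timer, ns units

        Thread.sleep(1000);                            // nominally one second apart

        long wallElapsedMs = System.currentTimeMillis() - wallStart;
        double monoElapsedMs = (System.nanoTime() - monoStart) / 1_000_000.0;

        System.out.printf("wall clock says %d ms, monotonic timer says %.3f ms%n",
                wallElapsedMs, monoElapsedMs);
        System.out.printf("difference: %.3f ms%n", wallElapsedMs - monoElapsedMs);
    }
}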

Comments (2)

酸甜透明夹心 2024-10-27 05:54:18

Your question(s) may lead to a larger discussion. When you're talking about the timing interval over which a clock is measured, I believe that is called drift. If the timestamps from two successive clock calls differed by 10 ms, maybe it takes that long to process, maybe there was an interrupt, maybe the clock does drift that badly, maybe the reporting precision is in units of 10 ms, maybe there is round-off error, etc. The reporting precision of a system clock depends on its speed (i.e., 1 GHz ≈ 1 ns), hardware support, and OS support. Sorry, I don't know how Windows compares with a Mac.
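
To make the "reporting precision" point concrete, here is a minimal Java sketch (the class name TickProbe is my own) that busy-waits until currentTimeMillis() changes and prints the step size. On older Windows systems that step was commonly around 10-16 ms; on most modern systems it is 1 ms, but the exact value depends on the OS and hardware.

// Minimal sketch: estimate the reporting granularity ("tick size") of the
// wall clock by spinning until System.currentTimeMillis() advances and
// printing how far it jumped each time.
public class TickProbe {
    public static void main(String[] args) {
        long last = System.currentTimeMillis();
        for (int i = 0; i < 10; i++) {
            long now;
            do {
                now = System.currentTimeMillis();   // busy-wait for the next tick
            } while (now == last);
            System.out.println("clock advanced by " + (now - last) + " ms");
            last = now;
        }
    }
}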

大海や 2024-10-27 05:54:18

Since you don't link to any specific discussions on this topic, I can only relay what little experience I have with it from the Java side:

The granularity of the classic System.currentTimeMillis() used to be pretty bad (15 ms on Windows XP). This means that the smallest possible difference between any two adjacent System.currentTimeMillis() calls that don't return the same value is 15 ms. So if you measure an event that takes 8 ms, you get either 0 ms or 15 ms as a result.
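
For illustration, a minimal sketch of that quantization effect (the ~8 ms "event" is simulated here with Thread.sleep, which adds its own jitter, and the class name is my own): on a system whose clock advances in 15 ms steps, the printed result comes out as 0 ms or 15 ms rather than something close to 8 ms.

// Minimal sketch: measure an ~8 ms event with the coarse millisecond clock.
// On a 15 ms-tick system the result is quantized to 0 or 15 ms; on a 1 ms-tick
// system it reads close to 8 ms (plus the sleep's own jitter).
public class CoarseMeasurement {
    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread.sleep(8);                               // stand-in for an ~8 ms event
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("measured: " + elapsed + " ms");
    }
}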

For measuring small time spans, that's obviously disastrous. For measuring longer time spans, it's not really a problem.

That's one of the primary reasons why Java introduced System.nanoTime(), which was specifically designed to measure small time spans and usually (i.e. when the OS supports it) has a significantly finer granularity (on all systems I've tested it on, it never returned the same value twice, even when called twice in a row with no calculation in between).
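
A minimal sketch of what that finer granularity looks like in practice (the class name is my own; the step you actually see depends on the OS, JVM and hardware):

// Minimal sketch: two back-to-back System.nanoTime() calls, repeated a few
// times. On typical hardware the calls differ by tens to hundreds of
// nanoseconds rather than returning the same value.
public class NanoTimeProbe {
    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            long a = System.nanoTime();
            long b = System.nanoTime();                // immediately after, no work in between
            System.out.println("consecutive nanoTime() calls differ by " + (b - a) + " ns");
        }
    }
}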

So modern computers can usually provide pretty fine-grained and pretty precise time measurement, provided you use the correct APIs.
