How to measure time in milliseconds using ANSI C?
Using only ANSI C, is there any way to measure time with milliseconds precision or more? I was browsing time.h but I only found second precision functions.
9 Answers
There is no ANSI C function that provides better than 1-second time resolution, but the POSIX function gettimeofday provides microsecond resolution. The clock function only measures the amount of time that a process has spent executing and is not accurate on many systems.

You can use this function like this:
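A minimal sketch along those lines (the sleep(1) stands in for the work you want to measure):

```c
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

int main(void)
{
    struct timeval start, end;
    double elapsed;

    gettimeofday(&start, NULL);
    sleep(1);  /* the work being timed */
    gettimeofday(&end, NULL);

    /* combine the seconds and microseconds fields into one value */
    elapsed = (double)(end.tv_sec - start.tv_sec)
            + (double)(end.tv_usec - start.tv_usec) / 1000000.0;
    printf("Time elapsed: %f\n", elapsed);
    return 0;
}
```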
This returns

Time elapsed: 1.000870

on my machine.
I always use the clock_gettime() function, returning time from the CLOCK_MONOTONIC clock. The time returned is the amount of time, in seconds and nanoseconds, since some unspecified point in the past, such as system startup or the epoch.
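For instance, a small sketch of measuring an interval with it (on older glibc you may need to link with -lrt):

```c
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec start, end;
    double elapsed_ms;

    clock_gettime(CLOCK_MONOTONIC, &start);
    sleep(1);  /* the work being timed */
    clock_gettime(CLOCK_MONOTONIC, &end);

    /* seconds and nanoseconds combined into milliseconds */
    elapsed_ms = (end.tv_sec - start.tv_sec) * 1000.0
               + (end.tv_nsec - start.tv_nsec) / 1000000.0;
    printf("Elapsed: %.3f ms\n", elapsed_ms);
    return 0;
}
```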
Implementing a portable solution
As already mentioned here, there is no proper ANSI solution with sufficient precision for the time-measurement problem, so I want to write about the ways to get a portable and, if possible, high-resolution time-measurement solution.
Monotonic clock vs. time stamps
Generally speaking there are two ways of time measurement:
The first one uses a monotonic clock counter (sometimes called a tick counter) which counts ticks with a predefined frequency, so if you have a ticks value and the frequency is known, you can easily convert ticks to elapsed time. It is actually not guaranteed that a monotonic clock reflects the current system time in any way; it may also count ticks since system startup. But it guarantees that the clock always runs in an increasing fashion regardless of the system state. Usually the frequency is bound to a hardware high-resolution source, which is why it provides high accuracy (this depends on hardware, but most modern hardware has no problems with high-resolution clock sources).
The second way provides a (date)time value based on the current system clock value. It may also have a high resolution, but it has one major drawback: this kind of time value can be affected by different system time adjustments, i.e. time zone changes, daylight saving time (DST) changes, NTP server updates, system hibernation and so on. In some circumstances you can get a negative elapsed time value, which can lead to undefined behavior. Actually this kind of time source is less reliable than the first one.
So the first rule in time interval measuring is to use a monotonic clock if possible. It usually has a high precision, and it is reliable by design.
Fallback strategy
When implementing a portable solution it is worth considering a fallback strategy: use a monotonic clock if available, and fall back to the time-stamp approach if there is no monotonic clock in the system.
Windows
There is a great article called Acquiring high-resolution time stamps on MSDN about time measurement on Windows which describes all the details you may need to know about software and hardware support. To acquire a high precision time stamp on Windows you should:
query a timer frequency (ticks per second) with QueryPerformanceFrequency:
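A sketch (error handling omitted):

```c
#include <windows.h>

LARGE_INTEGER freq;

/* ticks per second; fails only on very old hardware without a QPC source */
QueryPerformanceFrequency(&freq);
```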
The timer frequency is fixed at system boot, so you need to get it only once.
query the current ticks value with QueryPerformanceCounter:
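For example:

```c
LARGE_INTEGER ticks;

QueryPerformanceCounter(&ticks);
```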
scale the ticks to elapsed time, i.e. to microseconds:
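Assuming start and end hold two counter readings (illustrative names); multiplying before dividing preserves precision, but beware of overflow for very long intervals:

```c
/* start, end: two QueryPerformanceCounter readings */
ULONGLONG elapsed_us = (ULONGLONG)(end.QuadPart - start.QuadPart)
                     * 1000000ULL / (ULONGLONG)freq.QuadPart;
```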
According to Microsoft you should not have any problems with this approach on Windows XP and later versions in most cases. But you can also use two fallback solutions on Windows:
- GetTickCount provides the number of milliseconds that have elapsed since the system was started, but it wraps around every ~49.7 days;
- GetTickCount64 is a 64-bit version of GetTickCount, but it is available starting from Windows Vista and above.

OS X (macOS)
OS X (macOS) has its own Mach absolute time units which represent a monotonic clock. The best way to start is Apple's article Technical Q&A QA1398: Mach Absolute Time Units, which describes (with code examples) how to use the Mach-specific API to get monotonic ticks. There is also a related question called clock_gettime alternative in Mac OS X which may leave you a bit confused about what to do with possible value overflow, because the counter frequency is used in the form of a numerator and denominator. So, a short example of how to get elapsed time:
get the clock frequency numerator and denominator:
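A sketch:

```c
#include <mach/mach_time.h>

mach_timebase_info_data_t tb;

/* tb.numer / tb.denom converts Mach ticks to nanoseconds */
mach_timebase_info(&tb);
```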
You need to do that only once.
query the current tick value with mach_absolute_time:
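For example:

```c
/* an opaque 64-bit tick value */
uint64_t ticks = mach_absolute_time();
```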
scale the ticks to elapsed time, i.e. to microseconds, using the previously queried numerator and denominator:
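Using illustrative start and end tick values:

```c
/* divide first (ticks are nanosecond-scale) to avoid 64-bit overflow */
uint64_t elapsed_us = (end - start) / 1000 * tb.numer / tb.denom;
```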
The main idea to prevent an overflow is to scale down the ticks to the desired accuracy before using the numerator and denominator. As the initial timer resolution is in nanoseconds, we divide by 1000 to get microseconds. You can find the same approach used in Chromium's time_mac.c. If you really need nanosecond accuracy, consider reading How can I use mach_absolute_time without overflowing?.

Linux and UNIX
The clock_gettime call is your best way on any POSIX-friendly system. It can query time from different clock sources, and the one we need is CLOCK_MONOTONIC. Not all systems which have clock_gettime support CLOCK_MONOTONIC, so the first thing you need to do is to check its availability:

- if _POSIX_MONOTONIC_CLOCK is defined to a value >= 0, it means that CLOCK_MONOTONIC is available;
- if _POSIX_MONOTONIC_CLOCK is defined to 0, it means that you should additionally check whether it works at runtime; I suggest using sysconf:
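A possible runtime check (a sketch):

```c
#include <unistd.h>

#ifdef _SC_MONOTONIC_CLOCK
    if (sysconf(_SC_MONOTONIC_CLOCK) > 0) {
        /* CLOCK_MONOTONIC is supported at runtime */
    }
#endif
```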
Usage of clock_gettime is pretty straightforward. Get the time value:
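For example:

```c
#include <stdint.h>
#include <time.h>

struct timespec ts;
clock_gettime(CLOCK_MONOTONIC, &ts);

/* scale down to microseconds */
uint64_t now_us = (uint64_t)ts.tv_sec * 1000000 + (uint64_t)ts.tv_nsec / 1000;
```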
I've scaled down the time to microseconds here.
calculate the difference with the previous time value received the same way:
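With an illustrative prev_us obtained earlier the same way:

```c
/* elapsed interval in microseconds */
uint64_t elapsed_us = now_us - prev_us;
```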
The best fallback strategy is to use the gettimeofday call: it is not monotonic, but it provides quite good resolution. The idea is the same as with clock_gettime, but to get the time value you should do the following (again, the time value is scaled down to microseconds):
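A sketch:

```c
#include <stdint.h>
#include <sys/time.h>

struct timeval tv;
gettimeofday(&tv, NULL);

/* again scaled down to microseconds */
uint64_t now_us = (uint64_t)tv.tv_sec * 1000000 + (uint64_t)tv.tv_usec;
```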
SGI IRIX
IRIX has the clock_gettime call, but it lacks CLOCK_MONOTONIC. Instead it has its own monotonic clock source defined as CLOCK_SGI_CYCLE, which you should use instead of CLOCK_MONOTONIC with clock_gettime.

Solaris and HP-UX
Solaris has its own high-resolution timer interface gethrtime which returns the current timer value in nanoseconds. Though newer versions of Solaris may have clock_gettime, you can stick to gethrtime if you need to support old Solaris versions.

Usage is simple:
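A sketch (gethrtime returns nanoseconds from an arbitrary starting point):

```c
#include <sys/time.h>

hrtime_t start = gethrtime();
/* ... the work being timed ... */
hrtime_t elapsed_us = (gethrtime() - start) / 1000;
```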
HP-UX lacks clock_gettime, but it supports gethrtime, which you should use in the same way as on Solaris.

BeOS
BeOS also has its own high-resolution timer interface, system_time, which returns the number of microseconds that have elapsed since the computer was booted.

Example usage:
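A sketch (the interface lives in the BeOS kernel kit headers):

```c
#include <OS.h>

bigtime_t start = system_time();  /* microseconds since boot */
/* ... the work being timed ... */
bigtime_t elapsed_us = system_time() - start;
```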
OS/2
OS/2 has its own API to retrieve high-precision time stamps:
query a timer frequency (ticks per unit) with DosTmrQueryFreq (for the GCC compiler):
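A sketch:

```c
#define INCL_DOSPROFILE
#define INCL_DOSERRORS
#include <os2.h>

ULONG freq;

/* ticks per second */
DosTmrQueryFreq(&freq);
```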
query the current ticks value with DosTmrQueryTime, then scale the ticks to elapsed time, i.e. to microseconds:
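A sketch; DosTmrQueryTime fills a QWORD made of two 32-bit halves:

```c
QWORD start, end;

DosTmrQueryTime(&start);
/* ... the work being timed ... */
DosTmrQueryTime(&end);

/* combine the halves, then scale ticks to microseconds */
unsigned long long ticks =
    (((unsigned long long)end.ulHi << 32) | end.ulLo)
  - (((unsigned long long)start.ulHi << 32) | start.ulLo);
unsigned long long elapsed_us = ticks * 1000000ULL / freq;
```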
Example implementation
You can take a look at the plibsys library, which implements all of the strategies described above (see ptimeprofiler*.c for details).
timespec_get from C11

Returns up to nanoseconds, rounded to the resolution of the implementation. Looks like an ANSI ripoff of POSIX's clock_gettime.

Example: a printf is done every 100 ms on Ubuntu 15.10:
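A busy-wait sketch along those lines (names and the exact printing format are illustrative):

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    const long period_ns = 100000000L;  /* 100 ms */
    struct timespec last, now;

    timespec_get(&last, TIME_UTC);
    for (;;) {
        timespec_get(&now, TIME_UTC);
        long dt_ns = (now.tv_sec - last.tv_sec) * 1000000000L
                   + (now.tv_nsec - last.tv_nsec);
        if (dt_ns >= period_ns) {
            printf("%ld.%09ld\n", (long)now.tv_sec, now.tv_nsec);
            last = now;
        }
    }
}
```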
The C11 N1570 standard draft, 7.27.2.5 "The timespec_get function", specifies this behavior.

C++11 also got std::chrono::high_resolution_clock: C++ Cross-Platform High-Resolution Timer

glibc 2.21 implementation

Can be found under sysdeps/posix/timespec_get.c as:
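In essence (a condensed sketch of the logic; see the glibc sources for the exact code):

```c
int
timespec_get (struct timespec *ts, int base)
{
  switch (base)
    {
    case TIME_UTC:
      if (__clock_gettime (CLOCK_REALTIME, ts) < 0)
        return 0;
      break;

    default:
      return 0;
    }

  return base;
}
```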
so clearly:

- only TIME_UTC is currently supported
- it forwards to __clock_gettime (CLOCK_REALTIME, ts), which is a POSIX API: http://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_getres.html

Linux x86-64 has a clock_gettime system call.

Note that this is not a fail-proof micro-benchmarking method because:

- man clock_gettime says that this measure may have discontinuities if you change some system time setting while your program runs. This should be a rare event of course, and you might be able to ignore it.
- this measures wall time, so if the scheduler decides to forget about your task, it will appear to run for longer.

For those reasons getrusage() might be a better POSIX benchmarking tool, despite its lower microsecond maximum precision.

More information at: Measure time in Linux - time vs clock vs getrusage vs clock_gettime vs gettimeofday vs timespec_get?
The accepted answer is good enough, but my solution is simpler. I just tested it on Linux, using gcc (Ubuntu 7.2.0-8ubuntu3.2) 7.2.0.

It also uses gettimeofday; tv_sec is the seconds part, and tv_usec is in microseconds, not milliseconds.
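A sketch of the idea (the helper name is illustrative):

```c
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

/* milliseconds since the Epoch */
long long current_timestamp_ms(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (long long)tv.tv_sec * 1000 + tv.tv_usec / 1000;
}

int main(void)
{
    printf("%lld\n", current_timestamp_ms());
    sleep(1);
    printf("%lld\n", current_timestamp_ms());
    return 0;
}
```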
It prints:

1522139691342
1522139692342

exactly one second apart.
The best precision you can possibly get is through the use of the x86-only "rdtsc" instruction, which can provide clock-level resolution (one must of course take into account the cost of the rdtsc call itself, which can be measured easily at application startup).

The main catch here is measuring the number of clocks per second, which shouldn't be too hard.
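One common way to read the counter with GCC-style inline assembly (a sketch, and of course not ANSI C):

```c
#include <stdint.h>

/* Read the x86 time-stamp counter (GCC/Clang inline assembly). */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
    return ((uint64_t)hi << 32) | lo;
}
```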
As of ANSI/ISO C11 or later, you can use timespec_get() to obtain millisecond, microsecond, or nanosecond timestamps, like this:
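A sketch of such wrappers (the function names are illustrative):

```c
#include <stdint.h>
#include <time.h>

/* Millisecond timestamp */
uint64_t millis(void)
{
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);
    return (uint64_t)ts.tv_sec * 1000 + (uint64_t)ts.tv_nsec / 1000000;
}

/* Microsecond timestamp */
uint64_t micros(void)
{
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);
    return (uint64_t)ts.tv_sec * 1000000 + (uint64_t)ts.tv_nsec / 1000;
}

/* Nanosecond timestamp */
uint64_t nanos(void)
{
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);
    return (uint64_t)ts.tv_sec * 1000000000 + (uint64_t)ts.tv_nsec;
}
```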
For a much more thorough answer of mine, including an entire timing library I wrote, see here: How to get a simple timestamp in C.

@Ciro Santilli Путлер also presents a concise demo of C11's timespec_get() function here, which is how I first learned how to use that function. In my more-thorough answer, I explain that on my system the best resolution possible is ~20 ns, but the resolution is hardware-dependent and can vary from system to system.
Under Windows:
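A sketch using GetLocalTime, whose SYSTEMTIME structure carries a milliseconds field:

```c
#include <stdio.h>
#include <windows.h>

int main(void)
{
    SYSTEMTIME t;

    GetLocalTime(&t);  /* wall-clock time with millisecond resolution */
    printf("%02d:%02d:%02d.%03d\n",
           t.wHour, t.wMinute, t.wSecond, t.wMilliseconds);
    return 0;
}
```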