Get a timestamp in C in microseconds?

How do I get a microseconds timestamp in C?

I'm trying to do:

struct timeval tv;
gettimeofday(&tv,NULL);
return tv.tv_usec;

But this returns some nonsense value that if I get two timestamps, the second one can be smaller or bigger than the first (second one should always be bigger). Would it be possible to convert the magic integer returned by gettimeofday to a normal number which can actually be worked with?

9 Answers

所有深爱都是秘密 2024-11-11 06:27:11

You need to add in the seconds, too:

unsigned long time_in_micros = 1000000 * tv.tv_sec + tv.tv_usec;

Note that this will only last for about 2^32/10^6 ≈ 4295 seconds, or roughly 71 minutes, though (on a typical 32-bit system).
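
To avoid that overflow, the usual fix is to widen to 64 bits before combining the two fields. Here is a minimal sketch of that idea (assuming <stdint.h> and <sys/time.h> are available; the helper name is illustrative, not from the original answer):

#include <stdint.h>   // uint64_t
#include <sys/time.h> // gettimeofday(), struct timeval

// Combine the two timeval fields into a single 64-bit microsecond count; the
// cast happens before the multiply so the arithmetic is done in 64 bits.
static uint64_t timeval_to_micros(const struct timeval *tv)
{
    return (uint64_t)tv->tv_sec * 1000000u + (uint64_t)tv->tv_usec;
}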

无力看清 2024-11-11 06:27:11

You have two choices for getting a microsecond timestamp. The first (and best) choice, is to use the timeval type directly:

#include <sys/time.h> // for gettimeofday() and struct timeval

struct timeval GetTimeStamp() {
    struct timeval tv;
    gettimeofday(&tv,NULL);
    return tv;
}

The second, and for me less desirable, choice is to build a uint64_t out of a timeval:

#include <stdint.h>   // for uint64_t
#include <sys/time.h> // for gettimeofday() and struct timeval

uint64_t GetTimeStamp() {
    struct timeval tv;
    gettimeofday(&tv,NULL);
    return tv.tv_sec*(uint64_t)1000000+tv.tv_usec;
}
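
For instance, with the uint64_t variant above, the elapsed time between two events is just a subtraction. A usage sketch (not part of the original answer; it assumes the uint64_t GetTimeStamp() above is in the same file):

#include <stdio.h>

int main(void)
{
    uint64_t t0 = GetTimeStamp(); // before the work
    // ... work to be timed ...
    uint64_t t1 = GetTimeStamp(); // after the work
    printf("elapsed: %llu us\n", (unsigned long long)(t1 - t0));
    return 0;
}
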
鹤舞 2024-11-11 06:27:11

Get a timestamp in C in microseconds?

Here is a generic answer pertaining to the title of this question:

How to get a simple timestamp in C

  1. in milliseconds (ms) with function millis(),
  2. microseconds (us) with micros(), and
  3. nanoseconds (ns) with nanos()

Quick summary: if you're in a hurry and using a Linux or POSIX system, jump straight down to the section titled "millis(), micros(), and nanos()", below, and just use those functions. If you're using C11 or later on a system that is not Linux or POSIX, you'll need to replace clock_gettime() in those functions with timespec_get().

2 main timestamp functions in C:

  1. C11: timespec_get() is part of the C11 or later standard, but doesn't allow choosing the type of clock to use. It also works in C++17. See documentation for std::timespec_get() here. However, for C++11 and later, I prefer to use a different approach where I can specify the resolution and type of the clock instead, as I demonstrate in my answer here: Getting an accurate execution time in C++ (micro seconds).

    The C11 timespec_get() solution is a bit more limited than the C++ solution in that you cannot specify the clock resolution nor the monotonicity (a "monotonic" clock is defined as a clock that only counts forwards and can never go or jump backwards--ex: for time corrections). When measuring time differences, monotonic clocks are desired to ensure you never count a clock correction jump as part of your "measured" time.

    The resolution of the timestamp values returned by timespec_get(), therefore, since we can't specify the clock to use, may be dependent on your hardware architecture, operating system, and compiler. An approximation of the resolution of this function can be obtained by rapidly taking 1000 or so measurements in quick-succession, then finding the smallest difference between any two subsequent measurements. Your clock's actual resolution is guaranteed to be equal to or smaller than that smallest difference.

    I demonstrate this in the get_estimated_resolution() function of my timinglib.c timing library intended for Linux.

  2. Linux and POSIX: Even better than timespec_get() in C is the Linux and POSIX function clock_gettime(), which also works fine in C++ on Linux or POSIX systems. clock_gettime() does allow you to choose the desired clock. You can read the specified clock resolution with clock_getres(), although that doesn't give you your hardware's true clock resolution either. Rather, it gives you the units of the tv_nsec member of the struct timespec. Use my get_estimated_resolution() function described just above and in my timinglib.c/.h files to obtain an estimate of the resolution.

So, if you are using C on a Linux or POSIX system, I highly recommend you use clock_gettime() over timespec_get().

C11's timespec_get() (ok) and Linux/POSIX's clock_gettime() (better):

Here is how to use both functions:

  1. C11's timespec_get()
    1. https://en.cppreference.com/w/c/chrono/timespec_get
    2. Works in C, but doesn't allow you to choose the clock to use.
    3. Full example, with error checking:
      #include <stdint.h> // `UINT64_MAX`
      #include <stdio.h>  // `printf()`
      #include <time.h>   // `timespec_get()`
      
      /// Convert seconds to nanoseconds
      #define SEC_TO_NS(sec) ((sec)*1000000000)
      
      uint64_t nanoseconds;
      struct timespec ts;
      int return_code = timespec_get(&ts, TIME_UTC);
      if (return_code == 0)
      {
          printf("Failed to obtain timestamp.\n");
          nanoseconds = UINT64_MAX; // use this to indicate error
      }
      else
      {
          // `ts` now contains your timestamp in seconds and nanoseconds! To 
          // convert the whole struct to nanoseconds, do this:
          nanoseconds = SEC_TO_NS((uint64_t)ts.tv_sec) + (uint64_t)ts.tv_nsec;
      }
      
  2. Linux/POSIX's clock_gettime() -- USE THIS ONE WHENEVER POSSIBLE!
    1. https://man7.org/linux/man-pages/man3/clock_gettime.3.html (best reference for this function) and:
    2. https://linux.die.net/man/3/clock_gettime
    3. Works in C on Linux or POSIX systems, and allows you to choose the clock to use!
      1. I choose the CLOCK_MONOTONIC_RAW clock, which is best for obtaining timestamps used to time things on your system.
      2. See definitions for all of the clock types here, too, such as CLOCK_REALTIME, CLOCK_MONOTONIC, CLOCK_MONOTONIC_RAW, etc: https://man7.org/linux/man-pages/man3/clock_gettime.3.html
      3. Another popular clock to use is CLOCK_REALTIME. Do NOT be confused, however! "Realtime" does NOT mean that it is a good clock to use for "realtime" operating systems, or precise timing. Rather, it means it is a clock which will be adjusted to the "real time", or actual "world time", periodically, if the clock drifts. Again, do NOT use this clock for precise timing usages, as it can be adjusted forwards or backwards at any time by the system, outside of your control.
    4. Full example, with error checking:
      // This line **must** come **before** including <time.h> in order to
      // bring in the POSIX functions such as `clock_gettime() from <time.h>`!
      #define _POSIX_C_SOURCE 199309L
      
      #include <errno.h>  // `errno`
      #include <stdint.h> // `UINT64_MAX`
      #include <stdio.h>  // `printf()`
      #include <string.h> // `strerror(errno)`
      #include <time.h>   // `clock_gettime()` and `timespec_get()`
      
      /// Convert seconds to nanoseconds
      #define SEC_TO_NS(sec) ((sec)*1000000000)
      
      uint64_t nanoseconds;
      struct timespec ts;
      int return_code = clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
      if (return_code == -1)
      {
          printf("Failed to obtain timestamp. errno = %i: %s\n", errno, 
              strerror(errno));
          nanoseconds = UINT64_MAX; // use this to indicate error
      }
      else
      {
          // `ts` now contains your timestamp in seconds and nanoseconds! To 
          // convert the whole struct to nanoseconds, do this:
          nanoseconds = SEC_TO_NS((uint64_t)ts.tv_sec) + (uint64_t)ts.tv_nsec;
      }
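
Building on the clock_gettime() example above: since the question asks for microseconds, the difference between two struct timespec samples can be reduced to microseconds like this (a sketch; the helper name is mine, not from the original answer):

#include <stdint.h> // int64_t
#include <time.h>   // struct timespec

// Difference between two timespec samples (end - start), in microseconds.
static int64_t timespec_diff_us(struct timespec start, struct timespec end)
{
    int64_t sec_diff  = (int64_t)end.tv_sec  - (int64_t)start.tv_sec;
    int64_t nsec_diff = (int64_t)end.tv_nsec - (int64_t)start.tv_nsec;
    return sec_diff * 1000000 + nsec_diff / 1000;
}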
      

millis(), micros(), and nanos():

Anyway, here are my millis(), micros(), and nanos() functions I use in C for simple timestamps and code speed profiling.

I am using the Linux/POSIX clock_gettime() function below. If you are using C11 or later on a system which does not have clock_gettime() available, simply replace all usages of clock_gettime(CLOCK_MONOTONIC_RAW, &ts) below with timespec_get(&ts, TIME_UTC) instead.

Get the latest version of my code from my eRCaGuy_hello_world repo here:

  1. timinglib.h
  2. timinglib.c
// This line **must** come **before** including <time.h> in order to
// bring in the POSIX functions such as `clock_gettime() from <time.h>`!
#define _POSIX_C_SOURCE 199309L
        
#include <stdint.h> // `uint64_t`
#include <time.h>   // `clock_gettime()`

/// Convert seconds to milliseconds
#define SEC_TO_MS(sec) ((sec)*1000)
/// Convert seconds to microseconds
#define SEC_TO_US(sec) ((sec)*1000000)
/// Convert seconds to nanoseconds
#define SEC_TO_NS(sec) ((sec)*1000000000)

/// Convert nanoseconds to seconds
#define NS_TO_SEC(ns)   ((ns)/1000000000)
/// Convert nanoseconds to milliseconds
#define NS_TO_MS(ns)    ((ns)/1000000)
/// Convert nanoseconds to microseconds
#define NS_TO_US(ns)    ((ns)/1000)

/// Get a time stamp in milliseconds.
uint64_t millis()
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
    uint64_t ms = SEC_TO_MS((uint64_t)ts.tv_sec) + NS_TO_MS((uint64_t)ts.tv_nsec);
    return ms;
}

/// Get a time stamp in microseconds.
uint64_t micros()
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
    uint64_t us = SEC_TO_US((uint64_t)ts.tv_sec) + NS_TO_US((uint64_t)ts.tv_nsec);
    return us;
}

/// Get a time stamp in nanoseconds.
uint64_t nanos()
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
    uint64_t ns = SEC_TO_NS((uint64_t)ts.tv_sec) + (uint64_t)ts.tv_nsec;
    return ns;
}

// NB: for all 3 timestamp functions above: gcc defines the type of the internal
// `tv_sec` seconds value inside the `struct timespec`, which is used
// internally in these functions, as a signed `long int`. For architectures
// where `long int` is 64 bits, that means it will have undefined
// (signed) overflow in 2^64 sec = 5.8455 x 10^11 years. For architectures
// where this type is 32 bits, it will occur in 2^32 sec = 136 years. If the
// implementation-defined epoch for the timespec is 1970, then your program
// could have undefined behavior signed time rollover in as little as
// 136 years - (year 2021 - year 1970) = 136 - 51 = 85 years. If the epoch
// was 1900 then it could be as short as 136 - (2021 - 1900) = 136 - 121 =
// 15 years. Hopefully your program won't need to run that long. :). To see,
// by inspection, what your system's epoch is, simply print out a timestamp and
// calculate how far back a timestamp of 0 would have occurred. Ex: convert
// the timestamp to years and subtract that number of years from the present
// year.

Timestamp Resolution:

On my x86-64 Linux Ubuntu 18.04 system with the gcc compiler, clock_getres() returns a resolution of 1 ns.

For both clock_gettime() and timespec_get(), I have also done empirical testing where I take 1000 timestamps rapidly, as fast as possible (see the get_estimated_resolution() function of my timinglib.c timing library), and look to see what the minimum gap is between timestamp samples. This reveals a range of ~14~26 ns on my system when using timespec_get(&ts, TIME_UTC) and clock_gettime(CLOCK_MONOTONIC, &ts), and ~75~130 ns for clock_gettime(CLOCK_MONOTONIC_RAW, &ts). This can be considered the rough "practical resolution" of these functions. See that test code in timinglib_get_resolution.c, and see the definition for my get_estimated_resolution() and get_specified_resolution() functions (which are used by that test code) in timinglib.c.

These results are hardware-specific, and your results on your hardware may vary.
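
For reference, the estimation idea boils down to something like this minimal sketch (not the author's actual get_estimated_resolution(); it assumes the nanos() function above and <stdint.h> for UINT64_MAX):

#include <stdint.h> // uint64_t, UINT64_MAX

// Take many back-to-back timestamps and return the smallest nonzero gap seen,
// which is a rough upper bound on the clock's practical resolution, in ns.
static uint64_t estimate_resolution_ns(void)
{
    uint64_t min_gap_ns = UINT64_MAX;
    uint64_t prev = nanos();
    for (int i = 0; i < 1000; i++)
    {
        uint64_t now = nanos();
        if (now > prev && (now - prev) < min_gap_ns)
        {
            min_gap_ns = now - prev;
        }
        prev = now;
    }
    return min_gap_ns;
}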

References:

  1. The cppreference.com documentation sources I link to above.
  2. This answer here by @Ciro Santilli新疆棉花
  3. [my answer] my answer about usleep() and nanosleep() - it reminded me I needed to do #define _POSIX_C_SOURCE 199309L in order to bring in the clock_gettime() POSIX function from <time.h>!
  4. https://linux.die.net/man/3/clock_gettime
  5. https://man7.org/linux/man-pages/man3/clock_gettime.3.html
    1. Mentions the requirement for:

    _POSIX_C_SOURCE >= 199309L

    1. See definitions for all of the clock types here, too, such as CLOCK_REALTIME, CLOCK_MONOTONIC, CLOCK_MONOTONIC_RAW, etc.

See also:

  1. My shorter and less-thorough answer here, which applies only to ANSI/ISO C11 or later: How to measure time in milliseconds using ANSI C?
  2. My 3 sets of timestamp functions (cross-linked to each other):
    1. For C timestamps, see my answer here: Get a timestamp in C in microseconds?
    2. For C++ high-resolution timestamps, see my answer here: Here is how to get simple C-like millisecond, microsecond, and nanosecond timestamps in C++
    3. For Python high-resolution timestamps, see my answer here: How can I get millisecond and microsecond-resolution timestamps in Python?
  3. https://en.cppreference.com/w/c/chrono/clock
    1. POSIX clock_gettime(): https://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_getres.html
  4. clock_gettime() on Linux: https://linux.die.net/man/3/clock_gettime
    1. Note: for C11 and later, you can use timespec_get(), as I have done above, instead of POSIX clock_gettime(). https://en.cppreference.com/w/c/chrono/clock says:

      use timespec_get in C11

    2. But, using clock_gettime() instead allows you to choose a desired clock ID for the type of clock you want! See also here: https://people.cs.rutgers.edu/~pxk/416/notes/c-tutorials/gettime.html

Todo:

  1. ✓ DONE AS OF 3 Apr. 2022: Since timespec_getres() isn't supported until C23, update my examples to include one which uses the POSIX clock_gettime() and clock_getres() functions on Linux. I'd like to know precisely how good the clock resolution is that I can expect on a given system. Is it ms-resolution, us-resolution, ns-resolution, something else? For reference, see:
    1. https://linux.die.net/man/3/clock_gettime
    2. https://people.cs.rutgers.edu/~pxk/416/notes/c-tutorials/gettime.html
    3. https://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_getres.html
    4. Answer: clock_getres() returns 1 ns, but the actual resolution is about 14~27 ns, according to my get_estimated_resolution() function here: https://github.com/ElectricRCAircraftGuy/eRCaGuy_hello_world/blob/master/c/timinglib.c. See the results here:
      1. https://github.com/ElectricRCAircraftGuy/eRCaGuy_hello_world/blob/master/c/timinglib_get_resolution.c#L46-L77
      2. Activate the Linux SCHED_RR soft real-time round-robin scheduler for the best and most-consistent timing possible. See my answer here regarding clock_nanosleep(): How to configure the Linux SCHED_RR soft real-time round-robin scheduler so that clock_nanosleep() can have improved resolution of ~4 us, down from ~55 us.
離殇 2024-11-11 06:27:11

struct timeval contains two components, the second and the microsecond. A timestamp with microsecond precision is represented as seconds since the epoch stored in the tv_sec field and the fractional microseconds in tv_usec. Thus you cannot just ignore tv_sec and expect sensible results.

If you use Linux or *BSD, you can use timersub() to subtract two struct timeval values, which might be what you want.
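
For example, here is a minimal sketch of timersub() usage (it is a macro from <sys/time.h> on Linux/*BSD; the variable names are illustrative):

#include <stdio.h>
#include <sys/time.h> // gettimeofday(), struct timeval, timersub()

int main(void)
{
    struct timeval start, end, diff;
    gettimeofday(&start, NULL);

    // ... work to be timed ...

    gettimeofday(&end, NULL);
    timersub(&end, &start, &diff); // diff = end - start
    printf("elapsed: %ld.%06ld s\n", (long)diff.tv_sec, (long)diff.tv_usec);
    return 0;
}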

一身仙ぐ女味 2024-11-11 06:27:11

timespec_get from C11

Returns with precision of up to nanoseconds, rounded to the resolution of the implementation.

#include <time.h>
struct timespec ts;
timespec_get(&ts, TIME_UTC);
struct timespec {
    time_t   tv_sec;        /* seconds */
    long     tv_nsec;       /* nanoseconds */
};

See more details in my other answer here: How to measure time in milliseconds using ANSI C?
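
Since the question asks for microseconds, the filled-in struct timespec can be collapsed into a single microsecond count. A hedged sketch (the helper name is illustrative; it assumes <stdint.h> for uint64_t):

#include <stdint.h> // uint64_t
#include <time.h>   // timespec_get(), struct timespec

// Current TIME_UTC timestamp as a single microsecond count.
static uint64_t timespec_to_micros(void)
{
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);
    return (uint64_t)ts.tv_sec * 1000000u + (uint64_t)ts.tv_nsec / 1000u;
}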

烟酒忠诚 2024-11-11 06:27:11

But this returns some nonsense value
that if I get two timestamps, the
second one can be smaller or bigger
than the first (second one should
always be bigger).

What makes you think that? The value is probably OK. It’s the same situation as with seconds and minutes – when you measure time in minutes and seconds, the number of seconds rolls over to zero when it gets to sixty.

To convert the returned value into a “linear” number you could multiply the number of seconds and add the microseconds. But if I count correctly, one year is about 1e6*60*60*24*360 μsec and that means you’ll need more than 32 bits to store the result:

$ perl -E '$_=1e6*60*60*24*360; say int log($_)/log(2)'
44

That’s probably one of the reasons to split the original returned value into two pieces.

早茶月光 2024-11-11 06:27:11

use an unsigned long long (i.e. a 64 bit unit) to represent the system time:

#include <sys/time.h> // for gettimeofday() and struct timeval

typedef unsigned long long u64;

u64 u64useconds;
struct timeval tv;

gettimeofday(&tv,NULL);
u64useconds = (1000000*tv.tv_sec) + tv.tv_usec;
眼泪淡了忧伤 2024-11-11 06:27:11

Better late than never! This little programme can be used as the quickest way to get time stamp in microseconds and calculate the time of a process in microseconds:

#include <sys/time.h>
#include <stdio.h>
#include <time.h>

struct timeval GetTimeStamp() 
{
    struct timeval tv;
    gettimeofday(&tv,NULL);
    return tv;
}

int main()
{
    struct timeval tv = GetTimeStamp(); // take the start timestamp
    signed long time_in_micros = 1000000 * tv.tv_sec + tv.tv_usec; // start time in microseconds

    getchar(); // Replace this line with the process that you need to time

    struct timeval tv2 = GetTimeStamp(); // take the end timestamp once, so tv_sec and tv_usec come from the same sample
    signed long end_in_micros = 1000000 * tv2.tv_sec + tv2.tv_usec;
    printf("Elapsed time: %ld microseconds\n", end_in_micros - time_in_micros);
}

You can replace getchar() with a function/process. Finally, instead of printing the difference you can store it in a signed long. The programme works fine in Windows 10.

沒落の蓅哖 2024-11-11 06:27:11

First we need to know the range of the microseconds field, i.e. 000_000 to 999_999 (1,000,000 microseconds is equal to 1 second). tv.tv_usec returns a value from 0 to 999999, not 000000 to 999999, so when printing it next to the seconds we might get 2.1 seconds instead of 2.000001 seconds, because on its own a tv_usec of 000001 is essentially just 1.
It's better if you insert

if(tv.tv_usec<10)
{
 printf("00000");
} 
else if(tv.tv_usec<100&&tv.tv_usec>9)// i.e. 2digits
{
 printf("0000");
}

and so on...
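
As a side note, a zero-padded printf format achieves the same effect more compactly; a minimal sketch, assuming tv was filled in by gettimeofday():

// Print seconds.microseconds with the microsecond field always padded to
// 6 digits, so e.g. 2 s + 1 us prints as "2.000001" rather than "2.1".
printf("%ld.%06ld\n", (long)tv.tv_sec, (long)tv.tv_usec);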
