Measuring algorithm running time in C

Posted on 2024-12-04 18:37:32


I am using the time.h library in C to find the time taken to run an algorithm. The code structure is roughly as follows:

#include <stdio.h>
#include <time.h>

int main()
{
  clock_t start, end, diff;   /* clock() returns clock_t, not time_t */

  start = clock();
    //ALGORITHM COMPUTATIONS
  end = clock();
  diff = end - start;
  printf("%ld", (long)diff);  /* cast: clock_t has no portable printf format */
  return 0;
}

The values for start and end are always zero. Is it that the clock() function doesn't work? Please help.
Thanks in advance.


Comments (4)

趁年轻赶紧闹 2024-12-11 18:37:32

It's not that it doesn't work. In fact, it does. But it is not the right way to measure time, because the clock() function returns an approximation of the processor time used by the program. I am not sure about other platforms, but on Linux you should use clock_gettime() with the CLOCK_MONOTONIC flag - that will give you the real wall time elapsed. Also, you can read the TSC, but be aware that it won't work if you have a multi-processor system and your process is not pinned to a particular core. If you want to analyze and optimize your algorithm, I'd recommend you use some performance measurement tools. I've been using Intel's VTune for a while and am quite happy. It will show you not only which part uses the most cycles, but also highlight memory problems, possible parallelism issues, etc. You may be very surprised by the results. For example, most of the CPU cycles might be spent waiting for the memory bus. Hope it helps!

UPDATE: Actually, if you run a later version of Linux, it might provide CLOCK_MONOTONIC_RAW, which is a hardware-based clock that is not subject to NTP adjustments. Here is a small piece of code you can use:

  • stopwatch.hpp
  • stopwatch.cpp (https://bitbucket.org/Yocto/yocto/src/9cec50caf923/src/stopwatch.cpp)
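The linked stopwatch files may no longer be reachable, so here is a minimal sketch of the same idea, assuming a Linux/POSIX system and a placeholder busy loop standing in for the real algorithm (this is an illustration, not the original stopwatch code):

#define _POSIX_C_SOURCE 199309L   /* expose clock_gettime when compiling with -std=c99 */
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;

    /* CLOCK_MONOTONIC measures elapsed wall time; CLOCK_MONOTONIC_RAW can be
       substituted on kernels that provide it to avoid NTP rate adjustments */
    clock_gettime(CLOCK_MONOTONIC, &start);

    /* ALGORITHM COMPUTATIONS - placeholder busy loop */
    volatile long sum = 0;
    for (long i = 0; i < 10000000L; i++)
        sum += i;

    clock_gettime(CLOCK_MONOTONIC, &end);

    /* convert the two timespecs to elapsed milliseconds */
    double elapsed_ms = (end.tv_sec - start.tv_sec) * 1000.0
                      + (end.tv_nsec - start.tv_nsec) / 1e6;

    printf("elapsed: %.3f ms\n", elapsed_ms);
    return 0;
}

On older glibc versions you may need to link with -lrt for clock_gettime.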

十级心震 2024-12-11 18:37:32


Note that clock() returns the execution time in clock ticks, as opposed to wall clock time. Divide a difference of two clock_t values by CLOCKS_PER_SEC to convert the difference to seconds. The actual value of CLOCKS_PER_SEC is a quality-of-implementation issue. If it is low (say, 50), your process would have to run for 20ms to cause a nonzero return value from clock(). Make sure your code runs long enough to see clock() increasing.
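A minimal sketch of that conversion, assuming a placeholder busy loop as the algorithm (not the asker's actual computation):

#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();

    /* ALGORITHM COMPUTATIONS - placeholder work so clock() has time to advance */
    volatile double x = 0.0;
    for (long i = 0; i < 50000000L; i++)
        x += (double)i * 0.5;

    clock_t end = clock();

    /* ticks divided by CLOCKS_PER_SEC gives CPU seconds */
    double seconds = (double)(end - start) / CLOCKS_PER_SEC;
    printf("CPU time: %f s\n", seconds);
    return 0;
}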

末骤雨初歇 2024-12-11 18:37:32


I usually do it this way:

clock_t start = clock();
clock_t end;

//algo

end = clock();
printf("%f", (double)(end - start));
习ぎ惯性依靠 2024-12-11 18:37:32


Consider the code below:

#include <stdio.h>
#include <time.h>

int main()
{
    clock_t t1, t2;
    t1 = t2 = clock();

    // loop until t2 gets a different value
    while(t1 == t2)
        t2 = clock();

    // print resolution of clock()
    printf("%f ms\n", (double)(t2 - t1) / CLOCKS_PER_SEC * 1000);

    return 0;
}

Output:

$ ./a.out 
10.000000 ms

It might be that your algorithm runs for a shorter amount of time than that.
Use gettimeofday for a higher-resolution timer.
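For example, a minimal sketch with gettimeofday() (POSIX, declared in <sys/time.h>; the busy loop is a stand-in for the real algorithm):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end;

    gettimeofday(&start, NULL);   /* wall-clock time, microsecond resolution */

    /* ALGORITHM COMPUTATIONS - placeholder busy loop */
    volatile long sum = 0;
    for (long i = 0; i < 1000000L; i++)
        sum += i;

    gettimeofday(&end, NULL);

    /* combine seconds and microseconds into elapsed microseconds */
    long elapsed_us = (end.tv_sec - start.tv_sec) * 1000000L
                    + (end.tv_usec - start.tv_usec);

    printf("elapsed: %ld us\n", elapsed_us);
    return 0;
}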
