Computing an algorithm's running time in C
I am using the time.h library in C to find the time taken to run an algorithm. The code structure is roughly as follows:

#include <stdio.h>
#include <time.h>

int main()
{
    time_t start, end, diff;

    start = clock();
    /* ALGORITHM COMPUTATIONS */
    end = clock();

    diff = end - start;
    printf("%d", diff);
    return 0;
}

The values for start and end are always zero. Is it that the clock() function doesn't work? Please help.
Thanks in advance.
4 Answers
Not that it doesn't work. In fact, it does. But it is not the right way to measure time, because the clock() function returns an approximation of the processor time used by the program. I am not sure about other platforms, but on Linux you should use clock_gettime() with the CLOCK_MONOTONIC flag: that will give you the real wall time elapsed. Also, you can read the TSC, but be aware that it won't work if you have a multi-processor system and your process is not pinned to a particular core. If you want to analyze and optimize your algorithm, I'd recommend using some performance-measurement tools. I've been using Intel's VTune for a while and am quite happy with it. It will not only show you which part uses the most cycles, but also highlight memory problems, possible parallelism issues, etc. You may be very surprised by the results; for example, most of the CPU cycles might be spent waiting on the memory bus. Hope it helps!

UPDATE: Actually, if you run a later version of Linux, it might provide CLOCK_MONOTONIC_RAW, which is a hardware-based clock that is not subject to NTP adjustments. Here is a small piece of code you can use:
Note that clock() returns the execution time in clock ticks, as opposed to wall-clock time. Divide the difference of two clock_t values by CLOCKS_PER_SEC to convert it to seconds. The actual value of CLOCKS_PER_SEC is a quality-of-implementation issue. If it is low (say, 50), your process would have to run for 20 ms to cause a nonzero return value from clock(). Make sure your code runs long enough to see clock() increasing.
I usually do it this way:
Your algorithm probably runs for a shorter amount of time than the granularity of a single clock() tick, so the measured difference comes out as zero. Use gettimeofday for a higher-resolution timer.