QueryPerformanceCounter result incorrect, or human error?
I have the following code for a C program that displays the number of microseconds it took for a system() call to run:
#include <stdio.h>
#include <stdlib.h>
#include <windows.h> // defines LARGE_INTEGER and pulls in profileapi.h
long long measure(char* command) {
    LARGE_INTEGER StartingTime, EndingTime, ElapsedMicroseconds;
    LARGE_INTEGER Frequency;
    // get the frequency of the counter
    QueryPerformanceFrequency(&Frequency);
    // get the current count
    QueryPerformanceCounter(&StartingTime);
    // run our command
    system(command);
    // get the end of the count
    QueryPerformanceCounter(&EndingTime);
    // calculate the difference in counts
    ElapsedMicroseconds.QuadPart = EndingTime.QuadPart - StartingTime.QuadPart;
    // scale to microseconds before dividing, to keep sub-second precision
    ElapsedMicroseconds.QuadPart *= 1000000;
    // divide by the frequency of the counter
    ElapsedMicroseconds.QuadPart /= Frequency.QuadPart;
    return ElapsedMicroseconds.QuadPart;
}

int main() {
    // measure the time it takes to run the command "echo hello"
    long long time = measure("echo hello");
    // print the elapsed time in microseconds
    printf("%lld\n", time);
    return 0;
}
Now, when I run the program I get somewhere between 16-20 milliseconds when I do the math, but I get much lower times in PowerShell with Measure-Command {echo hello | Out-Default}. This leads me to suspect I am doing something wrong with QueryPerformanceCounter, since I am getting much larger timespans than I should be.
I attached an image of one of the instances showing a large difference.
Am I doing anything wrong with my code?
Thanks
Using system() creates a new shell process (cmd.exe on Windows) for every call, and that process creation is why it takes longer.
In the PowerShell example you are not creating a new cmd.exe process at all: echo is a built-in, and the timing happens while PowerShell and all of its modules are already loaded.
(Answer from my comment)