Can perf display raw sample counts?
I'd like perf to output raw sample counts rather than percentages. This is useful for determining whether I've sped up a function I'm trying to optimize.
To be clear, I'd like to do something like
perf record ./a.out
perf report
and see how many times perf sampled each function in a.out.
Shark can do this on Mac, as can (I believe) Xperf. Is this possible on Linux with perf?
perf report (version 2.6.35.7) now supports the -n flag, which does what I want.
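For reference, the workflow would then look like the following; -n (also spelled --show-nr-samples) should add a raw sample count column next to the usual overhead percentages, and --stdio just forces plain-text output (exact column layout may vary by perf version):
perf record ./a.out
perf report -n --stdio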
You want to see if your changes to a function made a difference.
I presume you also want whatever help you can get in finding out which function you need to change.
Those two objectives are not the same.
Many tools give you as broad a set of statistics or counters as they can dream up, as if having more statistics will help either goal.
Can you get hold of RotateRight/Zoom, or any tool that gives you stack samples on wall-clock time, preferably under user control? Such a tool will give you time and percent spent in any routine or line of code, in particular inclusive time.
The reason inclusive time is so important is that every single line of code that is executed is responsible for a certain fraction of time, such that if the line were not there, that fraction of time would not be spent, and overall time would be reduced by that fraction. During that fraction of time, whether it is spent in one big chunk or thousands of little chunks, that line of code is on the call stack, where stack samples will spot it, at a rate equal to its fraction. That is why stack sampling is so effective in finding code worth optimizing, whether it consists of leaf instructions or calls in the call tree.
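For readers who stay with perf rather than a tool like Zoom, a rough equivalent is to record call stacks and then report inclusive ("children") overhead; the flags below are standard perf options, though --children requires a reasonably recent perf and the exact columns depend on the version:
perf record -g ./a.out
perf report -n --children --stdio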
Personally, this link gives the how and why of the method I use, which is not fancy, but is as effective as, or more effective than, any method or tool I've seen. Here's a discussion.