Profiling C code on Windows using Eclipse
I know I can profile my code with gprof and kprof on Linux. Is there a comparable alternative to these applications on Windows?
4 Answers
Commercial software:

These commercial alternatives change the compiled code by 'instrumenting' it (adding instructions) and perform the timing within the added instructions. This means they slow your application down considerably.

Free software:

These free alternatives use sampling, which means they are less detailed but very fast. In practice I have found that Very Sleepy in particular is very good for taking a quick look at performance problems in your application.
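To make the distinction concrete, here is a minimal C sketch of what 'instrumenting' amounts to, with a hypothetical do_work() function standing in for the code being measured. An instrumenting profiler injects this kind of timing bookkeeping automatically around every function, which is where the slowdown comes from; a sampling profiler instead interrupts the running process periodically and records the call stack.

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical function standing in for the code being profiled. */
    static void do_work(void)
    {
        volatile double x = 0.0;
        for (long i = 0; i < 10000000L; i++)
            x += i * 0.5;
    }

    int main(void)
    {
        /* Hand-written 'instrumentation': record a timestamp before and
           after the call and report the elapsed CPU time. */
        clock_t start = clock();
        do_work();
        clock_t end = clock();

        printf("do_work took %.3f s\n", (double)(end - start) / CLOCKS_PER_SEC);
        return 0;
    }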
There's a MinGW port of gprof that works just about the same as the Linux variant. You can either get a full MinGW installation (I think gprof is included but not sure) or get gprof from the MinGW binutils package.
For Eclipse, there's TPTP but it doesn't support profiling C/C++ as far as I know.
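As a rough sketch of the usual gprof workflow (the file names and the waste_time() workload are placeholders; the same steps apply to the MinGW port): compile and link with -pg, run the program once so it writes gmon.out, then pass the executable and gmon.out to gprof.

    /* Build with profiling support (gcc/MinGW):
     *   gcc -pg -O2 main.c -o app.exe
     * Running app.exe writes gmon.out to the current directory; then:
     *   gprof app.exe gmon.out > profile.txt
     */
    #include <stdio.h>

    /* Hypothetical workload so the flat profile has something to show. */
    static double waste_time(long n)
    {
        double s = 0.0;
        for (long i = 0; i < n; i++)
            s += (double)i * i;
        return s;
    }

    int main(void)
    {
        printf("%f\n", waste_time(100000000L));
        return 0;
    }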
Yes, you can profile code with Visual Studio.
What's the reason for profiling? Do you want to a) measure times and get a call graph, or b) find things to change to make the code faster? (These are not the same.)
If (b), you can use this trick: pause the program under the debugger (the Pause button in Eclipse) a handful of times and examine the call stack each time.
Added: Maybe it would help to convey some experience of what performance problems are actually like, and where you can expect to find them. Here are some simple examples:
An insertion sort (order n^2) where the items being sorted are strings and are compared by a string-compare function. Where is the hot spot? In string-compare. Where is the problem? In the sort where string-compare is called. If n=10 it's not a problem, but if n=1000 it suddenly takes a long time. The point where string-compare is called is "cold", but that's where the problem is. A small number of samples of the call stack pinpoints it with certainty.
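A minimal sketch of that situation (the names are illustrative): nearly every stack sample lands inside strcmp, but each sample also shows the line in the sort that called it, and that line is the thing to change, for example by switching to an O(n log n) sort rather than by trying to make strcmp faster.

    #include <string.h>

    /* Insertion sort over an array of strings: O(n^2) comparisons.
       With n = 1000, most stack samples land inside strcmp(), but the
       real problem is this loop calling it on the order of n^2/2 times. */
    static void insertion_sort(const char *a[], int n)
    {
        for (int i = 1; i < n; i++) {
            const char *key = a[i];
            int j = i - 1;
            while (j >= 0 && strcmp(a[j], key) > 0) {  /* "hot" callee       */
                a[j + 1] = a[j];                       /* "cold" caller, the */
                j--;                                   /* place to fix       */
            }
            a[j + 1] = key;
        }
    }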
An app that loads plugins takes a long time to start up. A profiler says basically everything in it is "cold". Something that measures I/O time says it is almost all I/O time, which seems like what you might expect, so it might seem hopeless. But stack samples show a large percentage of time is spent with the stack about 20 layers deep in the process of reading the resource part of plugin dlls for the purpose of translating string constants into the local language. Investigating further, you find that most of the strings being translated are not the kind that actually need translation. They were just put in "in case" they might need translation, and were never thought of as something that could cause a performance problem. Fixing that issue brings a hefty time saving.
So it is common to think in terms of "hotspots" and "bottlenecks", but most programs, especially larger ones, tend to have performance problems in the form of function calls requesting work that doesn't really need to be done. Fortunately, those calls show themselves on the call stack during the very time they are costing.