Profiler instrumentation vs. sampling

Posted 2024-10-11 12:29:55

I am doing a study comparing profilers, mainly instrumenting and sampling.
I have come up with the following info:

  • sampling: stop the execution of the program, take the PC, and thus deduce where the program is
  • instrumenting: add some overhead code to the program so that it increments some counters recording what the program is doing (a toy sketch of both ideas follows this list)
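
To make sure I have the mechanics right, here is the kind of toy sketch I have in mind (assuming x86-64 Linux with glibc; the 10 ms interval, the work() counter and reading REG_RIP out of the signal context are just illustrative, not how real profilers do it):

    /* A toy sketch of both ideas above (assumes x86-64 Linux with glibc).
     * Not a real profiler -- just the mechanics I am asking about. */
    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>
    #include <ucontext.h>

    /* sampling: a SIGPROF handler grabs the PC that was interrupted */
    static volatile unsigned long last_pc;
    static volatile unsigned long samples;

    static void on_sigprof(int sig, siginfo_t *si, void *ctx)
    {
        ucontext_t *uc = ctx;
        last_pc = (unsigned long)uc->uc_mcontext.gregs[REG_RIP];
        samples++;
        (void)sig; (void)si;
    }

    /* instrumenting: extra code in the function itself counts its calls */
    static volatile unsigned long work_calls;

    static void work(void)
    {
        work_calls++;              /* the added "overhead code" */
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_sigaction = on_sigprof;
        sa.sa_flags = SA_SIGINFO | SA_RESTART;
        sigaction(SIGPROF, &sa, NULL);

        struct itimerval it;
        memset(&it, 0, sizeof it);
        it.it_interval.tv_usec = 10000;   /* a sample every 10 ms of CPU time */
        it.it_value.tv_usec    = 10000;
        setitimer(ITIMER_PROF, &it, NULL);

        for (unsigned long i = 0; i < 50000000UL; i++)
            work();

        printf("samples=%lu last_pc=0x%lx work_calls=%lu\n",
               samples, last_pc, work_calls);
        return 0;
    }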

If the above info is wrong, correct me.

After this I was looking at the execution time, and some said that instrumenting takes more time than sampling! Is this correct?

If yes, why is that? In sampling you have to pay the price of context switching between processes, while in the latter you stay in the same program, so there is no such cost.

Am I missing something?

cheers! =)

Comments (2)

白日梦 2024-10-18 12:29:55

The interrupts generated by a sampling profiler generally add an insignificant amount of time to the total execution time, unless you have a very short sampling interval (e.g. < 1 ms).

With instrumented profiling there can be a large overhead, e.g. on small leaf functions that get called many times, as the calls to the instrumentation library can be significant compared to the execution time of the function.
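
As a rough illustration of the leaf-function case, here is a sketch using GCC's -finstrument-functions (the __cyg_profile_* hook names and the no_instrument_function attribute are GCC's own; the counters and the tiny leaf() are made up for the example):

    /* Compile with:  gcc -O2 -finstrument-functions leaf.c
     * GCC then inserts a call to each hook below on entry/exit of every
     * instrumented function -- so a one-instruction leaf now also pays
     * for two extra calls every time it runs. */
    #include <stdio.h>

    static unsigned long enters, exits;

    /* The hooks must not themselves be instrumented, or they would recurse. */
    __attribute__((no_instrument_function))
    void __cyg_profile_func_enter(void *fn, void *call_site)
    { (void)fn; (void)call_site; enters++; }

    __attribute__((no_instrument_function))
    void __cyg_profile_func_exit(void *fn, void *call_site)
    { (void)fn; (void)call_site; exits++; }

    /* A tiny leaf function: the hook calls can easily cost more than the body. */
    __attribute__((noinline))
    static int leaf(int x) { return x + 1; }

    int main(void)
    {
        long sum = 0;
        for (int i = 0; i < 10000000; i++)
            sum += leaf(i);
        printf("sum=%ld enter_hooks=%lu exit_hooks=%lu\n", sum, enters, exits);
        return 0;
    }

The point is that the hook-call overhead scales with the number of calls, while a sampling interrupt costs about the same no matter how many calls the program makes.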

谢绝鈎搭 2024-10-18 12:29:55

It depends how conventional you want to be.

gprof does both those things you've mentioned. Here are some comments on that.

There is a school of thought that says profiling is about measuring. Measuring what? Well, anything - just measuring. Along with this goes the idea that what you want to get is a "big picture" of what's happening.
This school looks mostly at trying to find "slow functions", without clearly defining what that even means, and telling you to look there to optimize.

Another school says that you are really debugging. You want to precisely locate bugs of a certain kind - ones that don't make the program incorrect, rather they take too long. These are not big-picture things. They are very precise points in the code where something is happening that costs a lot more time than necessary.
Exactly how much more is not important. What's important is that it is located so it can be fixed.
In this viewpoint, profiling overhead is irrelevant, and so is accuracy of measurement.
What measuring is for is seeing how much time was saved.

One profiler that, I think, successfully spans both camps, is Zoom, because it samples the call stack, on wall-clock time, and presents, at the line/instruction level, percent of time on the stack. Some other profilers do this also, but most don't.
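
To make the "call stack on wall-clock time" part concrete, here is a toy sketch (assuming Linux with glibc; backtrace() and backtrace_symbols_fd() are real glibc calls, but the 100 ms interval and busy_work() are made up, and a tool like Zoom does this far more carefully, since backtrace() is not formally async-signal-safe):

    /* Toy wall-clock stack sampler: on a real-time timer, dump the call stack. */
    #include <execinfo.h>
    #include <signal.h>
    #include <string.h>
    #include <sys/time.h>
    #include <unistd.h>

    #define MAX_FRAMES 64

    static void on_sample(int sig)
    {
        void *frames[MAX_FRAMES];
        int n = backtrace(frames, MAX_FRAMES);
        backtrace_symbols_fd(frames, n, STDERR_FILENO); /* one frame per line */
        write(STDERR_FILENO, "----\n", 5);
        (void)sig;
    }

    static void busy_work(void)
    {
        volatile double x = 0;
        for (long i = 0; i < 300000000L; i++)
            x += i;
    }

    int main(void)
    {
        void *warm[1];
        backtrace(warm, 1);            /* load libgcc now, not inside the handler */

        signal(SIGALRM, on_sample);

        /* ITIMER_REAL counts wall-clock time, so time spent blocked shows up too. */
        struct itimerval it;
        memset(&it, 0, sizeof it);
        it.it_interval.tv_usec = 100000;  /* ~10 stack samples per second */
        it.it_value.tv_usec    = 100000;
        setitimer(ITIMER_REAL, &it, NULL);

        busy_work();
        return 0;
    }

Any line of code that shows up on a large fraction of those stacks is a precise point worth looking at, and a percentage like that does not need a tiny sampling interval or exact measurement.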

I'm in the second school, and here's an example of what you can accomplish with it.

Here's a briefer discussion of the issues.
