Detect and pinpoint performance regressions

Are there any known techniques (and resources related to them, like research papers or blog entries) which describe how to dynamically and programmatically detect the part of the code that caused a performance regression, if possible on the JVM or some other virtual machine environment (where techniques such as instrumentation can be applied relatively easily)?

In particular, when a project has a large codebase and a large number of committers (like, for example, an OS, a language, or some framework), it is sometimes hard to find the change that caused a performance regression. A paper such as this one goes a long way in describing how to detect performance regressions (e.g. in a certain snippet of code), but not how to dynamically find the piece of code in the project that was changed by some commit and caused the regression.

I was thinking that this might be done by instrumenting pieces of the program to detect the exact method that causes the regression, or at least to narrow down the range of possible causes of the performance regression.
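For illustration, here is a minimal sketch of what such instrumentation could look like on the JVM, using a plain `java.lang.reflect.Proxy` to time every call that goes through an interface (the class and method names are hypothetical; a real setup would more likely use a `java.lang.instrument` agent or an aspect framework rather than hand-wired proxies):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: wrap an interface-based component in a timing proxy,
// so per-method timings from two builds can be diffed to narrow down which
// method a commit slowed.
public final class TimingInstrumentation {

    // Accumulated nanoseconds per method name for this run.
    private static final Map<String, Long> totals = new ConcurrentHashMap<>();

    @SuppressWarnings("unchecked")
    public static <T> T instrument(Class<T> iface, T target) {
        InvocationHandler handler = (proxy, method, args) -> {
            long start = System.nanoTime();
            try {
                return method.invoke(target, args);
            } finally {
                totals.merge(method.getName(),
                             System.nanoTime() - start, Long::sum);
            }
        };
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] {iface}, handler);
    }

    // Dump the accumulated timings; comparing this report across two commits
    // points at the methods whose cost changed the most.
    public static void report() {
        totals.forEach((name, nanos) ->
                System.out.printf("%-40s %,15d ns%n", name, nanos));
    }
}
```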

Does anyone know about anything written about this, or any project using such performance regression detection techniques?

EDIT:

I was referring to something along these lines, but with further analysis of the codebase itself.

Comments (4)

〃温暖了心ぐ 2024-12-15 12:17:39

Perhaps not entirely what you are asking, but on a project I've worked on with extreme performance requirements, we wrote performance tests using our unit testing framework, and glued them into our continuous integration environment.

This meant that on every check-in, our CI server would run tests validating that we hadn't slowed the functionality down beyond our acceptable boundaries.

It wasn't perfect - but it did allow us to keep an eye on our key performance statistics over time, and it caught check-ins that affected the performance.

Defining "acceptable boundaries" for performance is more an art than a science - in our CI-driven tests, we took a fairly simple approach, based on the hardware specification; we would fail the build if the performance tests exceeded a response time of more than 1 second with 100 concurrent users. This caught a bunch of lowhanging fruit performance issues, and gave us a decent level of confidence on "production" hardware.

We explicitly didn't run these tests before check-in, as that would slow down the development cycle - forcing a developer to run through fairly long-running tests before checking in encourages them not to check in too often. We also weren't confident we'd get meaningful results without deploying to known hardware.

半暖夏伤 2024-12-15 12:17:39

With tools like YourKit you can take a snapshot of the performance breakdown of a test or application. If you run the application again, you can compare performance breakdowns to find differences.
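YourKit does this comparison in its UI, but the idea itself is easy to sketch. Assuming the per-method times from two runs have been exported to simple `method,milliseconds` CSV files (a hypothetical format for illustration, not YourKit's actual export), a diff could look like this:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: compare two per-method timing exports (one per build)
// and print the methods whose cost grew the most.
public final class SnapshotDiff {

    static Map<String, Long> load(Path csv) throws IOException {
        Map<String, Long> times = new HashMap<>();
        for (String line : Files.readAllLines(csv)) {
            String[] parts = line.split(",");
            times.put(parts[0], Long.parseLong(parts[1].trim()));
        }
        return times;
    }

    public static void main(String[] args) throws IOException {
        Map<String, Long> before = load(Path.of(args[0]));  // baseline build
        Map<String, Long> after = load(Path.of(args[1]));   // suspect build
        after.entrySet().stream()
                .map(e -> Map.entry(e.getKey(),
                        e.getValue() - before.getOrDefault(e.getKey(), 0L)))
                .sorted((a, b) -> Long.compare(b.getValue(), a.getValue()))
                .limit(10)
                .forEach(e -> System.out.printf("%-40s %+d ms%n",
                        e.getKey(), e.getValue()));
    }
}
```

Run as `java SnapshotDiff baseline.csv current.csv`; the methods at the top of the output are the first suspects.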

Performance profiling is more of an art than a science. I don't believe you will find a tool which tells you exactly what the problem is, you have to use your judgement.

For example, say you have a method which is taking much longer than it used to. Is it because the method has changed, because it is being called in a different way, or because it is called much more often? You have to use some judgement of your own.

向日葵 2024-12-15 12:17:39

JProfiler allows you to see a list of instrumented methods, which you can sort by average execution time, inherent time, number of invocations, etc. I think that if this information is saved across releases, one can get some insight into regressions. Of course, the profiling data will not be accurate if the tests are not exactly the same.

ヤ经典坏疍 2024-12-15 12:17:39

Some people are aware of a technique for finding (as opposed to measuring) the cause of excess time being taken.

It's simple, but it's very effective.

Essentially it is this:

If the code is slow, it's because it's spending some fraction F (like 20%, 50%, or 90%) of its time doing something unnecessary, call it X, in the sense that if you knew what X was, you'd blow it away and save that fraction of time.

While it's being slow, at any random nanosecond the probability that it's doing X is F.

So just drop in on it, a few times, and ask it what it's doing.
And ask it why it's doing it.

Typical apps are spending nearly all their time either waiting for some I/O to complete, or some library function to return.

If there is something in your program taking too much time (and there is), it is almost certainly one or a few function calls that you will find on the call stack, being done for lousy reasons.

Here's more on that subject.
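On the JVM, this can be done by grabbing a handful of thread dumps (for example with `jstack <pid>`) and reading the stacks by eye. Below is a self-contained sketch of the same idea in code, sampling one thread's stack at random pauses and counting recurring frames; the busy-work thread is a made-up stand-in for the real workload:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the random-pausing idea inside one JVM: sample a
// thread's stack a handful of times and count which frames keep showing up.
public final class RandomPauseSampler {

    public static void main(String[] args) throws InterruptedException {
        // Start a made-up workload to observe.
        Thread worker = new Thread(RandomPauseSampler::busyWork, "worker");
        worker.setDaemon(true);
        worker.start();

        Map<String, Integer> frameCounts = new HashMap<>();
        int samples = 10;  // a handful of samples is the point of the technique
        for (int i = 0; i < samples; i++) {
            Thread.sleep(200 + (long) (Math.random() * 300));  // random pause
            for (StackTraceElement frame : worker.getStackTrace()) {
                frameCounts.merge(frame.toString(), 1, Integer::sum);
            }
        }

        // The frames seen most often dominate the thread's time.
        frameCounts.entrySet().stream()
                .sorted((a, b) -> b.getValue() - a.getValue())
                .limit(5)
                .forEach(e -> System.out.printf("%2d/%d samples  %s%n",
                        e.getValue(), samples, e.getKey()));
    }

    private static void busyWork() {
        while (true) {
            Math.sqrt(Math.random());  // stands in for the real hot code
        }
    }
}
```

A frame that shows up in most of the samples accounts for roughly that fraction of the time, which is exactly the F in the argument above.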
