What is the correct practice for performance-rule testing?

Posted 2024-12-05 20:55:28

I know that what we're doing is incorrect/strange practice.

We have an object that is constructed in many places in the app, and lags in its construction can severely impact our performance.

We want a gate to stop check-ins which affect this construction's performance too adversely...
So what we did was create a unit test which is basically the following:

' Stopwatch.StartNew() is Shared: it creates and returns an already-running Stopwatch.
Dim myStopwatch = Stopwatch.StartNew()
Dim newMyObject = New myObject()
myStopwatch.Stop()
Assert.IsTrue(myStopwatch.ElapsedMilliseconds < 100)

Or: Fail if construction takes longer than 100ms
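
Spelled out, the gate test looks roughly like the following complete test (a sketch: MSTest and VB.NET are assumptions, the class and method names are illustrative, and myObject stands in for our real type):

Imports System.Diagnostics
Imports Microsoft.VisualStudio.TestTools.UnitTesting

<TestClass>
Public Class ConstructionPerformanceGate

    ' 100 ms is the budget quoted above; everything else here is illustrative.
    Private Const BudgetMilliseconds As Long = 100

    <TestMethod>
    Public Sub Constructor_CompletesWithinBudget()
        ' Time a single construction and fail the build if it blows the budget.
        Dim watch = Stopwatch.StartNew()
        Dim instance = New myObject()
        watch.Stop()

        Assert.IsTrue(watch.ElapsedMilliseconds < BudgetMilliseconds,
                      $"Construction took {watch.ElapsedMilliseconds} ms; the budget is {BudgetMilliseconds} ms.")
    End Sub
End Class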

This "works" in the sense that check-ins will not commit if they impact this performance too negatively... However, it's inherently a bad unit test because it can fail intermittently, for example if our build server happens to be slow for whatever reason.

In response to some of the answers: we explicitly want our gate to reject check-ins that hurt this performance; we don't want to check logs or watch for trends in data.

What is the correct way to meter performance in our check-in gate?

3 Answers

So尛奶瓶 2024-12-12 20:55:28

To avoid the machine dependence, you could first time the construction of a "reference object" which has a known acceptable construction time. Then compare the time to construct your object to the reference object's time.

This may help prevent false failures on an overloaded server, since the reference code will also be slower. I'd also run the test several times and only require X% of them to pass. (There are many external events that can slow code down, but none that will speed it up.)
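
A sketch of that idea as a test method (assuming the usual Imports System.Diagnostics and an MSTest class around it; the ReferenceObject type, the 3x ratio and the 8-out-of-10 pass requirement are placeholders, not prescribed values):

<TestMethod>
Public Sub Construction_IsNotSlowerThanReference()
    Const Runs As Integer = 10
    Const RequiredPasses As Integer = 8

    ' Time a reference object with a known, stable construction cost...
    Dim referenceWatch = Stopwatch.StartNew()
    Dim reference = New ReferenceObject()
    referenceWatch.Stop()
    Dim budget = referenceWatch.ElapsedTicks * 3

    ' ...then require most constructions of the real object to stay within a multiple of it.
    Dim passes = 0
    For i = 1 To Runs
        Dim watch = Stopwatch.StartNew()
        Dim candidate = New myObject()
        watch.Stop()
        If watch.ElapsedTicks <= budget Then passes += 1
    Next

    ' An overloaded build server slows the reference timing too, so the ratio is far
    ' less machine-dependent than a fixed 100 ms budget.
    Assert.IsTrue(passes >= RequiredPasses, $"Only {passes} of {Runs} runs stayed within the reference budget.")
End Sub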

無心 2024-12-12 20:55:28

First I would say: can't you allow some of that logic to run lazily rather than executing all of it in the constructor/initialization? Or can you partition the object? A useful metric for this is LCOM4.
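
For example, if the expensive part can be isolated, it can be built on first use instead of in the constructor. A minimal sketch, where LookupTable and its Build method are hypothetical stand-ins for the heavy work:

Public Class myObject
    ' The expensive piece is created on first access rather than during construction.
    Private ReadOnly _lookup As New Lazy(Of LookupTable)(Function() LookupTable.Build())

    Public ReadOnly Property Lookup As LookupTable
        Get
            Return _lookup.Value
        End Get
    End Property
End Class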

Secondly, can you cache those instances? In a previous project we had a similar situation, and we decided to cache the object for a few minutes. This brought some other smaller issues, but the performance of the app skyrocketed.
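
A minimal sketch of that kind of cache, using System.Runtime.Caching.MemoryCache (the cache key and the five-minute lifetime are assumptions):

Imports System.Runtime.Caching

Public Module MyObjectCache
    Public Function GetOrCreate() As myObject
        Dim cache = MemoryCache.Default
        Dim cached = TryCast(cache.Get("myObject"), myObject)
        If cached Is Nothing Then
            cached = New myObject()
            ' Keep the expensive instance around for a few minutes, then rebuild it.
            cache.Set("myObject", cached, DateTimeOffset.Now.AddMinutes(5))
        End If
        Return cached
    End Function
End Module

Note that two callers arriving at the same moment can still both pay the construction cost; if that matters, the cached value could be a Lazy(Of myObject) instead.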

And last, I do think it's a good approach, but I would take an average rather than just one sample (the OS might decide to run something else at that very moment, and the construction might take more than 100 ms).
Also, one issue with this approach is that if you upgrade your hardware and forget to update the threshold, you might add even more logic without realizing it.

I think a better approach, though a bit more tricky to implement, is to store how long it takes to run N iterations, and if that value increases by more than X%, fail the build. The benefit of this is that, since you store how long it takes, you can generate a graph from it and see the trend.
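
A sketch of that baseline comparison as a test method (the iteration count, the 20% tolerance and the file names are assumptions; it also needs Imports System.IO and System.Globalization alongside the usual test setup):

<TestMethod>
Public Sub Construction_DoesNotRegressAgainstBaseline()
    Const Iterations As Integer = 1000
    Const AllowedIncrease As Double = 0.2   ' fail if more than 20% slower than the stored baseline

    Dim watch = Stopwatch.StartNew()
    For i = 1 To Iterations
        Dim instance = New myObject()
    Next
    watch.Stop()

    ' baseline.txt holds the accepted time for N iterations; history.csv accumulates
    ' every measurement so a trend graph can be generated from it later.
    Dim baselineMs = Double.Parse(File.ReadAllText("baseline.txt"), CultureInfo.InvariantCulture)
    File.AppendAllText("history.csv", $"{DateTime.UtcNow:o},{watch.ElapsedMilliseconds}{Environment.NewLine}")

    Assert.IsTrue(watch.ElapsedMilliseconds <= baselineMs * (1 + AllowedIncrease),
                  $"{Iterations} constructions took {watch.ElapsedMilliseconds} ms; the baseline is {baselineMs} ms.")
End Sub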

不美如何 2024-12-12 20:55:28

I don't think you should really do this in a way that blocks check-ins, because it is too much work to be done during the check-in process. Check-ins need to be fast, because your developers can do nothing else whilst they run.

This unit test would have to compile and run whilst the developer sits and waits for it. As you pointed out, one iteration of the test is not good enough to produce consistent results. How many times would it need to be run to be reliable? 10? A run of 10 iterations would increase the check-in time by up to 1 second and still isn't reliable enough in my opinion. If you increased that to 100 iterations you'd get a better result, but that's adding 10 seconds to the check-in time.

Also, what happens if two developers check in code at the same time? Does the second one have to wait for the first test to complete before theirs starts or would the tests be run simultaneously? The first scenario is bad because the second developer has to wait twice as long. The second scenario is bad as you'd be likely to fail both tests.

I think a better option would be to have the unit test run after the check-in has completed and, if it fails, have it notify somebody. You could run the test after each check-in, but that still has the potential for two people to check in at the same time. I think it would be better to run the test every N minutes. That way you'd be able to track the offending change down fairly quickly.

You could do it so that it blocks check-ins, but you'd have to make sure it only runs when that object (or a dependency) changes, so that it doesn't slow down every commit. You'd also have to make sure the test isn't run more than once at a time.

As to the specific test, I don't think you can get away with anything other than running the test through a number of iterations to get a more accurate result. I wouldn't like to rely on anything less than a 5 or 10 second test (so 50 to 100 iterations).
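
A rough sketch of what that multi-iteration version might look like, averaging over 100 constructions against the 100 ms budget from the question (structure and names as in the earlier sketches):

<TestMethod>
Public Sub Constructor_AverageStaysWithinBudget()
    Const Iterations As Integer = 100
    Const PerConstructionBudgetMs As Double = 100.0

    ' Time all iterations in one go and judge the average, so a single
    ' scheduler hiccup can't fail the build on its own.
    Dim watch = Stopwatch.StartNew()
    For i = 1 To Iterations
        Dim instance = New myObject()
    Next
    watch.Stop()

    Dim averageMs = watch.Elapsed.TotalMilliseconds / Iterations
    Assert.IsTrue(averageMs < PerConstructionBudgetMs,
                  $"Average construction time was {averageMs:F1} ms over {Iterations} runs.")
End Sub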
