Setting up a performance test lab for developers

Published 2024-07-13 18:57:28

Our product has earned a bad reputation in terms of performance. Well, it's a big enterprise application, 13 years old, that needs some refreshing, and specifically a boost in its performance.

We decided to address the performance problem strategically in this version. We are evaluating a few options on how to do that.

We do have experienced load test engineers equipped with the best tools on the market, but usually they get a stable release late in the version's development life cycle, so in the last few versions developers didn't have enough time to fix all of their findings. (Yes, I know we need to deliver stable versions earlier; we are working on that process as well, but it's not in my area.)

One of the directions I am pushing is to set up a lab environment installed with the nightly build so developers can test the performance impact of their code.
I'd like this environment to be constantly loaded by scripts simulating real users' experience. On this loaded environment each developer will have to write a specific script that tests his code (i.e. a single user's experience in a real-world environment). I'd like to generate a report that shows each iteration's impact on existing features, as well as the performance of new features.
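To make this concrete, here is a minimal sketch of what one of those per-developer scenario scripts might look like, assuming the application is reachable over HTTP; the host, the endpoint paths, the scenario name and the results file are all just illustrative:

```python
# Hypothetical single-user scenario: open the login page, open a report,
# log out. Runs against the constantly loaded lab environment and appends
# its timings to a results file that the nightly report can aggregate.
import json
import time
import urllib.request

BASE_URL = "http://perf-lab.example.com"   # hypothetical lab host
RESULTS_FILE = "scenario_timings.jsonl"    # hypothetical results sink

def timed_get(path):
    """Issue a GET request and return the elapsed wall-clock seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(BASE_URL + path) as resp:
        resp.read()                        # drain the body so the timing is realistic
    return time.perf_counter() - start

def run_scenario():
    steps = {
        "login_page": timed_get("/login"),
        "open_report": timed_get("/reports/monthly"),
        "logout": timed_get("/logout"),
    }
    record = {"scenario": "monthly-report", "timestamp": time.time(), "steps": steps}
    with open(RESULTS_FILE, "a") as fh:
        fh.write(json.dumps(record) + "\n")
    return record

if __name__ == "__main__":
    print(run_scenario())
```

Each developer would run something like this against the loaded lab, and the nightly report would aggregate the appended timing records.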

I am a bit worried that I'm aiming too high and it will turn out to be too complicated.

What do you think of such an idea?
Does anyone have an experience with setting up such an environment?
Can you share your experience?

Comments (5)

北斗星光 2024-07-20 18:57:28

It sounds like a good idea, but in all honesty, if your organisation can't get a build to the expensive load test team it has employed just for this purpose, then it will never make your idea work either.

Go for the low hanging fruit first. Get a nightly build available to the performance testing team earlier in the process.

In fact, if this version is all about performance, why not have the team just take this version to address all the performance issues that came late in the iteration for the last version.

EDIT: "Don't developers have a responsibility to performance test code" was a comment. Yes, true. I personally would have every developer have a copy of YourKit java profiler (it's cheap and effective) and know how to use it. However, unfortunately performance tuning is a really, really fun technical activity and it is possible to spend a lot of time doing this when you would be better developing features.

If your developer team is repeatedly producing noticeably slow code, then education on performance or better programmers is the only answer, not a more expensive process.

三月梨花 2024-07-20 18:57:28

One of the biggest boosts in productivity is an automated build system which runs overnight (this is called Continuous Integration). Errors made yesterday are caught today early in the morning, when I'm still fresh and when I might still remember what I did yesterday (instead of several weeks/months later).

So I suggest making this happen first, because it's the very foundation for anything else. If you can't reliably build your product, you will find it very hard to stabilize the development process.
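As a rough sketch of the kind of overnight job this amounts to, assuming the product can be built and smoke-tested from the command line (the commands and the log file here are placeholders):

```python
# Minimal nightly build driver: pull the latest sources, build, run the
# smoke tests, and record each step's outcome with a timestamp. The build
# and test commands are placeholders for whatever the product actually uses.
import datetime
import subprocess

STEPS = [
    ("update", ["git", "pull", "--ff-only"]),
    ("build", ["./build.sh"]),                 # placeholder build command
    ("smoke-test", ["./run_smoke_tests.sh"]),  # placeholder test command
]

def nightly():
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open("nightly.log", "a") as log:
        for name, cmd in STEPS:
            result = subprocess.run(cmd, capture_output=True, text=True)
            log.write(f"{stamp} {name}: exit={result.returncode}\n")
            if result.returncode != 0:
                log.write(result.stdout + result.stderr)
                return False                   # stop at the first broken step
    return True

if __name__ == "__main__":
    raise SystemExit(0 if nightly() else 1)
```

Schedule it with cron or whatever scheduler your build server already has; the performance runs can hang off the same skeleton later.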

After you have done this, you will have all the knowledge necessary to create performance tests.

One piece of advice though: Don't try to achieve everything at once. Work one step at a time, fix one issue after the other. If someone comes up with "we must do this, too", you must do the same triage as you do with any other feature request: How important is this? How dangerous? How long will it take to implement? How much will we gain?

Postpone hard but important tasks until you have sorted out the basics.

平安喜乐 2024-07-20 18:57:28

Nightly builds are the right approach to performance testing. I suggest you require scripts that run automatically each night. Then record the results in a database and provide regular reports. You really need two sorts of reports:

  • A graph of each metric over time. This will help you see your trends.
  • A comparison of each metric against a baseline. You need to know when something drops dramatically in a day or when it crosses a performance threshold.

A few other suggestions:

  • Make sure your machines vary in the same way your intended environment does. Have low-end and high-end machines in the pool.
  • Once you start measuring, never change the machines. You need to compare like to like. You can add new machines, but you can't modify any existing ones.
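
As a minimal sketch of the record-and-compare part, assuming each night's run boils down to simple metric name/value pairs; the SQLite layout and the 20% threshold here are just illustrative:

```python
# Sketch of the record-and-compare loop: store each night's metrics in
# SQLite and flag anything that regresses more than a threshold against
# the stored baseline. Metric names and the 20% threshold are illustrative.
import sqlite3

THRESHOLD = 0.20  # flag regressions worse than 20% vs. the baseline

def init(db):
    db.execute("""CREATE TABLE IF NOT EXISTS metrics
                  (night TEXT, name TEXT, value REAL)""")
    db.execute("""CREATE TABLE IF NOT EXISTS baseline
                  (name TEXT PRIMARY KEY, value REAL)""")

def record_and_compare(db, night, metrics):
    regressions = []
    for name, value in metrics.items():
        db.execute("INSERT INTO metrics VALUES (?, ?, ?)", (night, name, value))
        row = db.execute("SELECT value FROM baseline WHERE name = ?", (name,)).fetchone()
        if row is None:
            db.execute("INSERT INTO baseline VALUES (?, ?)", (name, value))
        elif value > row[0] * (1 + THRESHOLD):
            regressions.append((name, row[0], value))
    db.commit()
    return regressions  # feed these into the nightly report or an alert

if __name__ == "__main__":
    con = sqlite3.connect("perf.db")
    init(con)
    print(record_and_compare(con, "2024-07-20", {"login_ms": 180.0, "report_ms": 950.0}))
```

The trend graphs for the first kind of report can be drawn straight from the metrics table.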

治碍 2024-07-20 18:57:28

We built a small test bed to do sanity testing - i.e. did the app fire up and work as expected when the buttons were pushed, did the validation work, etc. Ours was a web app, and we used Watir, a Ruby-based toolkit, to drive the browser. The output from those runs was created as XML documents, and our CI tool (CruiseControl) could output the results, errors and performance as part of each build log. The whole thing worked well, and could have been scaled onto multiple PCs for proper load testing.

However, we did all that because we had more bodies than tools. There are some high-end stress test harnesses that will do everything you need. They cost, but that will be less than the time spent hand-rolling your own. Another issue we had was getting our devs to write Ruby/Watir tests; in the end that fell to one person, and the testing effort was pretty much a bottleneck because of that.
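For what it's worth, the same idea in Python (rather than our Ruby/Watir setup) might look roughly like this: hit a page, check it against a time budget, and write a JUnit-style XML result the CI server can pick up. The URL, the budget and the output file name are placeholders:

```python
# Rough Python analogue of the Watir-based sanity check: fetch one page,
# assert it comes back within a time budget, and emit a JUnit-style XML
# result a CI server can display. URL, budget and output file are placeholders.
import time
import urllib.request
import xml.etree.ElementTree as ET

URL = "http://app-under-test.example.com/login"   # placeholder target page
BUDGET_SECONDS = 2.0                              # placeholder time budget

def run_check():
    start = time.perf_counter()
    error = None
    try:
        with urllib.request.urlopen(URL, timeout=30) as resp:
            resp.read()
    except Exception as exc:                      # report any failure in the XML
        error = str(exc)
    elapsed = time.perf_counter() - start

    suite = ET.Element("testsuite", name="sanity", tests="1")
    case = ET.SubElement(suite, "testcase", name="login_page", time=f"{elapsed:.3f}")
    if error is not None:
        ET.SubElement(case, "error", message=error)
    elif elapsed > BUDGET_SECONDS:
        ET.SubElement(case, "failure",
                      message=f"took {elapsed:.2f}s, budget {BUDGET_SECONDS}s")
    ET.ElementTree(suite).write("sanity-results.xml", encoding="utf-8",
                                xml_declaration=True)

if __name__ == "__main__":
    run_check()
```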

烦人精 2024-07-20 18:57:28

Nightly builds are excellent, and lab environments are excellent, but I think you're in danger of muddling performance testing with straight-up bug testing.

Ensure your lab conditions are isolated and stable (i.e. you vary only one factor at a time, whether that's your application or a Windows update) and that the hardware reflects your target. Remember that your benchmark comparisons will only be bulletproof internally to the lab.

Test scripts written by the developers who wrote the code tend to be a toxic thing. They don't help you drive out misunderstandings at implementation time (since the same misunderstanding will be in the test script), and there is limited motivation to actually find problems. Far better is to take a TDD approach and write the tests first as a group (or have a separate group do it), but failing that you can still improve the process by writing the scripts collaboratively. Hopefully you have some user stories from your design stage, and it may be possible to replay logs for real-world experience (depending on the app).
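
The log-replay idea could be prototyped along these lines, assuming you can export an access log in common log format; the log path and the target host are hypothetical:

```python
# Toy replay of an access log against the lab environment: re-issue the
# GET requests in order, roughly preserving the original pacing. The log
# path, target host and common-log-format assumption are all hypothetical.
import re
import time
import urllib.request
from datetime import datetime

LOG_FILE = "access.log"                     # hypothetical exported log
TARGET = "http://perf-lab.example.com"      # hypothetical lab host
LINE = re.compile(r'\[(?P<ts>[^\]]+)\] "GET (?P<path>\S+) HTTP')

def replay():
    previous = None
    with open(LOG_FILE) as fh:
        for line in fh:
            m = LINE.search(line)
            if not m:
                continue
            ts = datetime.strptime(m.group("ts").split()[0], "%d/%b/%Y:%H:%M:%S")
            if previous is not None:
                # sleep the same gap as in the original traffic, capped at 5s
                gap = (ts - previous).total_seconds()
                time.sleep(max(0.0, min(gap, 5.0)))
            previous = ts
            try:
                urllib.request.urlopen(TARGET + m.group("path"), timeout=30).close()
            except Exception as exc:
                print("failed:", m.group("path"), exc)

if __name__ == "__main__":
    replay()
```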
