How to keep automated tests fast?
Automated tests MUST be fast in order to reflect the real-time state of the project. The idea is:
- after any commit to the repository, an automated build is performed (as fast as it can be done).
- if the build succeeds, the automated tests are started. These MUST be fast too.
This is the best way I know of to find out whether your changes break anything.
At first it seemed hard to make the build fast, but we managed to keep it at around 100 seconds for a solution of 105(!) projects (MSVS 2008, C#).
The tests turned out to be not so simple (we use the NUnit framework). Unit testing is not a big problem; it is the integration tests that kill us. Not because they are slower (any ideas on how to make them faster are much appreciated), but because the environment has to be set up first, which is MUCH slower (at the moment ~1000 seconds)!
Our integration tests use web/Windows services (19 so far) that need to be redeployed in order to reflect the latest changes. That includes restarting the services and a lot of HDD read/write activity.
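To give an idea of where those ~1000 seconds go: each of those services goes through roughly the cycle below (the service name, paths and timeouts here are illustrative placeholders, not our real ones).

```csharp
using System;
using System.IO;
using System.ServiceProcess;

class RedeployStep
{
    // Stop the service, copy the fresh build output over the installed
    // binaries (this is where most of the HDD R/W activity comes from),
    // then start the service again; this is repeated for every service.
    static void Redeploy(string serviceName, string buildOutputDir, string installDir)
    {
        using (var sc = new ServiceController(serviceName))
        {
            sc.Stop();
            sc.WaitForStatus(ServiceControllerStatus.Stopped, TimeSpan.FromSeconds(60));

            foreach (var file in Directory.GetFiles(buildOutputDir))
                File.Copy(file, Path.Combine(installDir, Path.GetFileName(file)), true);

            sc.Start();
            sc.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(60));
        }
    }
}
```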
Can anyone share experience on how the environment and the workflow should/can be organized/optimized to speed up the automated testing phase? What are the "low-level" bottlenecks and workarounds?
P.S. Books and broad articles are welcome, but real-world working solutions are appreciated even more.
Well, at least you know where to focus... Do you know where that time is being spent?
Obviously any solution is going to depend on the specifics here.
There are three solutions that I've used in this sort of situation:
- Use more machines. Perhaps you could partition your services onto two machines? Would that cut your setup time in half?
- Use faster machines. In one situation I know of, a team cut their integration test execution time from something like 18 hours down to 1 hour by upgrading the hardware (multiple CPUs, fast RAID storage, more RAM, the works). Sure, it cost them on the order of $10k USD, but it was worth it.
- Use an in-memory database for the integration tests. Yes, I know you'll want tests against the real database too, but perhaps you could run the tests against an in-memory version first to get fast feedback (see the sketch below).
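As a rough sketch of that last idea, assuming NUnit plus the System.Data.SQLite ADO.NET provider (the table, fixture and test names below are made up), the data-access tests could be pointed at an in-memory database like this:

```csharp
using System;
using System.Data.SQLite;   // assumes the System.Data.SQLite provider is referenced
using NUnit.Framework;

[TestFixture]
public class OrderRepositoryTests
{
    private SQLiteConnection _conn;

    [SetUp]
    public void OpenInMemoryDatabase()
    {
        // ":memory:" keeps the whole database in RAM; it vanishes on Dispose().
        _conn = new SQLiteConnection("Data Source=:memory:");
        _conn.Open();

        using (var cmd = _conn.CreateCommand())
        {
            cmd.CommandText = "CREATE TABLE Orders (Id INTEGER PRIMARY KEY, Total REAL)";
            cmd.ExecuteNonQuery();
        }
    }

    [TearDown]
    public void CloseDatabase()
    {
        _conn.Dispose();
    }

    [Test]
    public void Inserted_order_can_be_read_back()
    {
        using (var cmd = _conn.CreateCommand())
        {
            cmd.CommandText = "INSERT INTO Orders (Id, Total) VALUES (1, 9.99)";
            cmd.ExecuteNonQuery();
        }

        using (var cmd = _conn.CreateCommand())
        {
            cmd.CommandText = "SELECT COUNT(*) FROM Orders";
            Assert.AreEqual(1, Convert.ToInt32(cmd.ExecuteScalar()));
        }
    }
}
```

The idea is that the same fixtures later run against the real database in the slower, full integration pass, with the fast in-memory run acting as the first line of feedback.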
The best solution for this situation is to back up a ghost (disk) image of the fully prepared environment and restore that image instead of resetting the environment from scratch each time. That is a much better use of the time being spent.
Buildbot: http://buildbot.net/trac
I cannot recommend this enough if you're doing Continuous Integration (automated testing). With a quick configuration, all of our unit tests run each time there is a commit, and the longer integration tests run periodically throughout the day (3 times, last I checked, but this can easily be changed).
I'd suggest having several high-level end-to-end tests, and if any one of those fails, running the 'higher-resolution' tests.
Think of doing tech support over the phone...
Does your computer work?
If yes, done.
If no, does your computer turn on at all?
...
For my unit testing, I have a few fast tests like "does my computer work?" If those pass, I don't execute the rest of my suite. If any of those tests fails, I execute the associated suite of lower-level tests, which gives me a higher-resolution view into that failure.
My view is that running a comprehensive suite of top-level tests should take less than half a second.
This approach gives me both speed and detail.
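With NUnit this triage can be expressed with categories; a minimal sketch (the category names, fixtures and the Checkout stub are hypothetical, not from this answer):

```csharp
using NUnit.Framework;

[TestFixture]
public class CheckoutSmokeTests
{
    // One coarse, fast end-to-end check: "does the computer work?"
    [Test, Category("Smoke")]
    public void Whole_checkout_path_works_end_to_end()
    {
        Assert.IsTrue(Checkout.PlaceMinimalOrder());
    }
}

// Run only when a smoke test fails, to get the higher-resolution view.
[TestFixture, Category("CheckoutDetail")]
public class CheckoutDetailTests
{
    [Test] public void Price_calculation_is_correct() { /* detailed checks here */ }
    [Test] public void Inventory_is_reserved()        { /* detailed checks here */ }
}

// Hypothetical system under test, stubbed so the sketch compiles.
public static class Checkout
{
    public static bool PlaceMinimalOrder() { return true; }
}
```

The build script would run only the Smoke category on every commit and fire the matching detail category only when a smoke test fails (NUnit's console runner can filter by category for this).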
I've put together a presentation on Turbo-Charged Test Suites. The second half is aimed at Perl developers, but the first half might prove useful to you. I don't know enough about your software to know if it's appropriate.
Basically, it covers techniques for speeding up database usage in test suites and for running tests in a single process to avoid constantly reloading libraries.
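In a .NET/NUnit setting (the asker's stack rather than Perl), a common equivalent of the database-speedup part is to wrap each test in a transaction and roll it back instead of rebuilding the data. A minimal sketch, assuming System.Transactions works with the database driver in play:

```csharp
using System.Transactions;
using NUnit.Framework;

// Test fixtures inherit from this base class so that every test runs
// inside a transaction that is thrown away afterwards.
public abstract class TransactionalTestBase
{
    private TransactionScope _scope;

    [SetUp]
    public void BeginTransaction()
    {
        _scope = new TransactionScope();
    }

    [TearDown]
    public void RollBack()
    {
        // Disposing without calling Complete() rolls everything back,
        // leaving the database clean for the next test with no re-deploy.
        _scope.Dispose();
    }
}
```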
We use .NET and NUnit, which supports categories (an attribute you can put on a test). We take the long-running tests and put them in a NightlyCategory so that they only run during the nightly builds, not in the continuous builds that we want to keep fast.
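A minimal sketch of that setup, assuming NUnit 2.x; the attribute and test names below are guesses at the convention, not taken from the answer:

```csharp
using NUnit.Framework;

// A custom category attribute so long-running tests can be tagged
// with a single, typo-proof marker.
public class NightlyCategoryAttribute : CategoryAttribute
{
    public NightlyCategoryAttribute() : base("Nightly") { }
}

[TestFixture]
public class FullDeploymentTests
{
    [Test, NightlyCategory]
    public void Services_respond_after_full_redeploy()
    {
        // Long-running integration check; excluded from the fast CI run.
    }
}
```

The continuous build then runs the suite with the Nightly category excluded (the NUnit console runner supports excluding categories), while the nightly build runs everything.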
There are a number of optimization strategies you can apply to improve the throughput of your tests, but you need to ask yourself what the goal of this testing is and why it needs to be fast.
Some tests take time. This is a fact of life. Integration tests usually take time, and you usually have to set up an environment in order to be able to do them. If you set up an environment, you will want to have an environment which is as close to the final production environment as possible.
You have two choices:
In my experience, it's better to have an integration environment which is correct, and finds bugs, and represents the final production environment adequately. I usually choose option 2 (1).
It's very tempting to say that we'll test everything all of the time, but in reality you need a strategy.
(1) Except if there are loads of bugs which are only found in integration, in which case, forget everything I said :-)