Efficiency pitfalls of doing (automated) integration and acceptance tests

Published 2024-10-10 16:28:29 · 1,431 characters · 4 views


Comments (4)

蓝眼泪 2024-10-17 16:28:30

We do Acceptance TDD at my work.

When I first started I was told I could implement whatever policies I wanted so long as the work was completed in a timely and predictable fashion. Having done unit testing in the past, I realized that one of the problems we always ran into was integration bugs. Some could take quite a long time to fix and were often a surprise. We would run into subtle bugs we had introduced while extending the app's functionality.

I decided to avoid the issues I had run into in the past by focusing more on the end-result features that we were supposed to deliver. We would write tests that tested the acceptance behavior, not just at the unit level, but at the whole-system level. I wanted to do that because, at the end of the day, I don't care whether the unit works correctly; I care that the entire system works correctly. We found the following benefits to doing automated acceptance tests.

  • We NEVER regress end user functionality because it is explicitly tested for.
  • Refactors are easier because we don't have to update a bunch of unit tests. We just have to make sure our acceptance tests still pass.
  • The integration of the "units" is implicitly covered.
  • The tests become a very clear definition of required end user functionality.
  • Integration issues are exposed earlier and are less of a surprise.

Some of the trade-offs to doing it this way:

  • Tests can be more complex in terms of usage of mocks, stubs, fixtures, etc.
  • Tests are less useful for narrowing down which "unit" has the defect.

We also make our test suite runnable via a Continuous Integration server, which tags and packages for deployment. It runs on every commit, as with most CI setups.

With regard to your points/concerns:

Setup: The whole webapp is
bootstrapped (like it would be seen
from the end-user).

One compromise we do tend to make is to run the tests in the same process space, as in unit tests. Our entry point is the top of the app stack. We don't bother trying to run the app as a server because that adds complexity and doesn't add much in terms of coverage.
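The in-process style described here can be sketched with a plain WSGI callable driven directly, with no server or sockets involved. Everything below (the toy `app`, the `call_app` helper) is hypothetical, a minimal illustration of the idea rather than the poster's actual stack:

```python
import io

def app(environ, start_response):
    """A toy webapp standing in for the top of the app stack:
    GET /hello renders a greeting, anything else 404s."""
    if environ["REQUEST_METHOD"] == "GET" and environ["PATH_INFO"] == "/hello":
        start_response("200 OK", [("Content-Type", "text/html")])
        return [b"<h1>Hello, user!</h1>"]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]

def call_app(method, path):
    """Drive the app in-process: build a WSGI environ by hand and
    capture the status the app reports, exactly as a server would."""
    environ = {
        "REQUEST_METHOD": method,
        "PATH_INFO": path,
        "wsgi.input": io.BytesIO(),
    }
    captured = {}
    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = headers
    body = b"".join(app(environ, start_response))
    return captured["status"], body

# The acceptance test calls the stack top directly, no server process needed.
status, body = call_app("GET", "/hello")
assert status == "200 OK"
assert b"Hello" in body
```

Because the test and the app share one process, the whole suite stays fast and debuggable, at the cost of not exercising the real HTTP listener.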

Test Entry: HTTP call itself. Browser
can be involved as test executer (e.g.
Selenium)

All of our automated tests are driven by simulating an HTTP GET, POST, PUT, or DELETE. We don't actually use a browser for this, though; a call into the top of the app stack, routed the way the particular HTTP call gets mapped, works just fine.

Assert Targets: The test output is the
complete rendered response (HTML and
other artifacts like javascript).
Asserts on the database (e.g. data got
inserted) can also be included.

I think this is where automated acceptance tests really shine. What you assert is the end-user functionality you want to guarantee you are implementing.
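Asserting on both the rendered response and the database, as the quoted point suggests, might look like the sketch below. The `signup` handler and table schema are invented for illustration; an in-memory SQLite database stands in for the real one:

```python
import sqlite3

# In-memory database standing in for the app's real datastore.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT PRIMARY KEY)")

def signup(email):
    """Hypothetical stand-in for the app stack handling POST /signup:
    it persists the user and returns the rendered HTML response."""
    conn.execute("INSERT INTO users (email) VALUES (?)", (email,))
    conn.commit()
    return f"<p>Welcome, {email}!</p>"

html = signup("alice@example.com")

# Assert on the end-user-visible output (the rendered response)...
assert "Welcome, alice@example.com" in html
# ...and on the side effect in the database (the data really got inserted).
row = conn.execute("SELECT email FROM users").fetchone()
assert row == ("alice@example.com",)
```

The pair of assertions is the point: the test pins down the feature as the user sees it and the persistent state the feature promises.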

Controller tests are close to general
system behaviour (e.g. submit login
form, password validation, successful
login). This is very close what an
End-to-End test would do. In the end
"double-testing" could happen, which
is highly inefficient.

We actually do very little unit testing and rely almost solely on our automated acceptance tests. As a result we don't have much in the way of double testing.

Controllers are more white-boxed tests
and tend to be brittle because they
rely on many dependencies of lower
layers (in contrast to very
fine-grained unit tests). Because of
this, setting up and maintaining
Controller tests is high effort;
End-to-End tests, where the whole
application is started as a black box,
are more trivial and have the
advantage of being closer to production.

They may have more dependencies, but those can be mitigated through the use of mocks and fixtures. We also usually implement our tests with two modes of execution: Unmanaged mode, where the tests run fully wired to the network, DBs, etc., and Managed mode, where the tests run with the unmanaged resources mocked out. Although you are correct in your assertion that the tests can be a lot more effort to create and maintain.
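The two execution modes described above can be sketched as a small factory that swaps a real resource for an in-memory fake. All the names here (`TEST_MODE`, `RealGateway`, `FakeGateway`) are hypothetical illustrations, not the poster's actual setup:

```python
import os

class RealGateway:
    """Unmanaged-mode resource: in a real suite this would hit the network."""
    def charge(self, cents):
        raise RuntimeError("would perform a live network call")

class FakeGateway:
    """Managed-mode stand-in: records calls instead of touching the network."""
    def __init__(self):
        self.charges = []
    def charge(self, cents):
        self.charges.append(cents)
        return "ok"

def make_gateway():
    # Managed mode (the default here) mocks out the unmanaged resource;
    # setting TEST_MODE=unmanaged wires the test to the real thing.
    if os.environ.get("TEST_MODE", "managed") == "managed":
        return FakeGateway()
    return RealGateway()

gateway = make_gateway()
assert gateway.charge(500) == "ok"
assert gateway.charges == [500]   # the fake recorded the interaction
```

The same acceptance tests then run in both modes: fast and deterministic day to day, fully wired when you want the extra confidence.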

白衬杉格子梦 2024-10-17 16:28:30

Developers should do integration tests of the part that they changed/implemented. By integration tests, I mean that they should see whether the functionality they implemented really works as expected. If you don't do this, how do you know that what you just finished really works? Unit tests by themselves are not the final goal; it is the product that matters.

This should be done in order to speed up bug finding. After all, integration tests take a long time to execute (at least in my company; because of the complexity, it takes 1-2 days to execute all integration tests). Finding bugs earlier is better than later.

眼睛会笑 2024-10-17 16:28:30

Having integration tests (and, indeed, unit tests) that test behaviour that is also tested by a system test helps debugging, by narrowing the location of a defect. If your system has components A-B-C and fails a system test-case, but the assembly A-B passes a similar integration test-case, the defect is probably in component C.
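The narrowing argument can be made concrete with a toy pipeline of three components. The functions and expected values are entirely made up; the point is only the inference pattern:

```python
# Components A -> B -> C of a hypothetical system. C carries a planted bug.
def a(x):
    return x + 1

def b(x):
    return x * 2

def c(x):
    return x - 3   # imagine the defect lives in this layer

def assembly_ab(x):
    """Integration-level assembly covering only A and B."""
    return b(a(x))

def system_abc(x):
    """Full system path covering A, B, and C."""
    return c(assembly_ab(x))

# The A-B integration test passes against its expected value of 4...
assert assembly_ab(1) == 4
# ...while the A-B-C system test misses the same expectation,
# so the defect is probably in component C.
assert system_abc(1) != 4   # actual result is 1
```

One failing system test plus one passing integration test localizes the defect to a single component without any debugging session.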

毁虫ゝ 2024-10-17 16:28:30

Considering that this post is dealing with testing pitfalls, I would like to make you aware of my most recent book, Common System and Software Testing Pitfalls, which was published last month by Addison Wesley. It documents 92 testing pitfalls organized into 14 categories. Each pitfall includes description, potential applicability, characteristic symptoms, potential negative consequences, potential causes, and recommendations for avoiding the pitfall and climbing out if you have already fallen in. Check it out on Amazon.com at: http://www.amazon.com/Common-System-Software-Testing-Pitfalls/dp/0133748553/ref=la_B001HQ006A_1_1?s=books&ie=UTF8&qid=1389613893&sr=1-1
