Reusable mocks vs. on-the-fly mocks in each test

Published 2024-10-10 17:49:31 · 685 characters · 2 views · 0 comments


Our team is in the process of easing into TDD and struggling with best practices for unit tests. Our code under test uses dependency injection. Our tests generally follow the Arrange-Act-Assert kind of layout where we mock dependencies in the Arrange section with Moq.

Theoretically, unit tests should be a shield that protects you when you refactor. But it's turning into an anchor that prevents us from doing so. I'm trying to nail down where our process failure is.

Consider the simplified example:

  • XRepository.Save has its signature and behavior/contract changed.
  • XController.Save uses XRepository.Save, so it is refactored to use the new interface. But externally its public contract has not changed.

I would expect that controller tests do not need to be refactored, but instead prove to me that my new controller implementation honors the unchanged contract. But we have failed here as this is not the case.

Each controller test mocks the repository interface on the fly. They all need to be changed. Furthermore, since each test does not want to mock all interfaces and methods, we find our tests tied to the particular implementation, because each test needs to know which methods to mock.
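The coupling described here can be sketched with Python's `unittest.mock` (the question's actual stack is C#/Moq; `XController` and `XRepository` below are hypothetical stand-ins for the question's types):

```python
from unittest.mock import Mock

# Hypothetical stand-ins for the question's XController / XRepository.
class XRepository:
    def save(self, entity):  # old signature; changing it breaks every test below
        raise NotImplementedError

class XController:
    def __init__(self, repo):
        self._repo = repo

    def save(self, entity):
        self._repo.save(entity)  # internal detail the tests end up knowing about
        return "ok"

# A typical on-the-fly arrangement: the test must know which repository
# method the controller calls, and with what signature.
def test_controller_save_returns_ok():
    repo = Mock(spec=XRepository)
    controller = XController(repo)
    assert controller.save({"id": 1}) == "ok"
    repo.save.assert_called_once_with({"id": 1})  # couples test to implementation

test_controller_save_returns_ok()
```

If `XRepository.save` gains a parameter, every test that stubs it this way must be edited, even though `XController.save`'s public contract never changed.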

It becomes exponentially more difficult to refactor the more tests we have! Or more accurately, the more times we mock an interface.

So my questions:

  1. Any preference for using on-the-fly mocks in each test vs making a reusable hand-crafted mock for each interface?

  2. Given my story, am I missing some principle or falling into a common pitfall?

Thanks!

Comments (2)

暖伴 2024-10-17 17:49:31


You're not missing any principle, but it is a common problem. I think each team solves it (or not) in their own way.

Side Effects

You will continue to have this issue with any function that has side effects. I have found that for side-effecting functions I have to write tests that assure some or all of the following:

  • That it was/was not called
  • The number of times it was called
  • What arguments were passed to it
  • Order of calls

Assuring this in a test usually means violating encapsulation (I interact with, and have knowledge of, the implementation). Anytime you do this, you implicitly couple the test to the implementation. This will cause you to have to update the test whenever you update the implementation details that you are exposing/testing.
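The four checks listed above map directly onto mock-library assertions. A `unittest.mock` sketch (the `notifier.send` calls are illustrative; in Moq the equivalents would be `Verify` overloads):

```python
from unittest.mock import Mock, call

notifier = Mock()

# The code under test would normally drive these calls; here we
# invoke them directly to show what each assertion checks.
notifier.send("alice", "hi")
notifier.send("bob", "bye")

notifier.send.assert_called()                 # 1. it was called
assert notifier.send.call_count == 2          # 2. how many times
notifier.send.assert_any_call("alice", "hi")  # 3. with which arguments
assert notifier.send.call_args_list == [      # 4. in which order
    call("alice", "hi"),
    call("bob", "bye"),
]
```

Every one of these assertions encodes knowledge of the implementation, which is exactly the coupling described above.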

Reusable Mocks

I've used reusable mocks to great effect. The trade-off is that their implementation is more complex because it needs to be more complete. You do mitigate the cost of updating tests to accommodate refactors.
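One shape a reusable hand-crafted mock can take is an in-memory fake that implements the whole interface in one place, so a signature change is absorbed once rather than in every test (a sketch; `InMemoryRepository` is an illustrative name, not from the answer):

```python
class InMemoryRepository:
    """Hand-crafted reusable fake: implements the full repository
    contract once, so only this class changes when the interface does."""

    def __init__(self):
        self._store = {}

    def save(self, entity):
        self._store[entity["id"]] = entity

    def get(self, entity_id):
        return self._store.get(entity_id)

# Every test reuses the same fake instead of re-stubbing methods:
repo = InMemoryRepository()
repo.save({"id": 1, "name": "widget"})
assert repo.get(1)["name"] == "widget"
```

The trade-off mentioned above is visible here: the fake must be complete enough to stand in for the real repository, but tests no longer need to know which methods the controller calls.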

Acceptance TDD

Another option is to change what you're testing for. Since this is really about changing your testing strategy it is not something to enter into lightly. You may want to do a little analysis first and see if it would really be fit for your situation.

I used to do TDD with unit tests. I ran into issues that I felt we shouldn't have had to deal with. Specifically around refactors I noticed we usually had to update many tests. These refactors were not within a unit of code, but rather the restructuring of major components. I know many people will say the problem was the frequent large changes, not the unit testing. There is probably some truth to the large changes being partially a result of our planning/architecture. However, it was also due to business decisions that caused changes in directions. These and other legitimate causes had the effect of necessitating large changes to the code. The end result was large refactors became slower and more painful as a result of all the test updates.

We also ran into bugs due to integration issues that unit tests did not cover. We covered some of those with manual acceptance testing, and we actually did quite a bit of work to make the acceptance tests as low-touch as possible. But they were still manual, and we felt there was so much crossover between the unit tests and the acceptance tests that there should be a way to mitigate the cost of implementing both.

Then the company had layoffs. All of a sudden we didn't have the same amount of resources to throw at programming and maintenance. We were pushed to get the biggest return for everything we did including testing. We started by adding what we called partial stack tests to cover common integration problems we had. They turned out to be so effective that we started doing less classic unit testing. We also got rid of the manual acceptance tests (Selenium). We slowly pushed up where the tests started testing until we were essentially doing acceptance tests, but without the browser. We would simulate a GET, POST, or PUT method to a particular controller and check the acceptance criteria.

  • The database was updated correctly
  • The correct HTTP status code was returned
  • A page was returned that:
    • was valid HTML 4.01 Strict
    • contained the information we wanted to send back to the user
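In outline, such a partial-stack test drives the controller entry point directly and asserts only on acceptance criteria, never on which collaborators were called. This sketch is illustrative, not the answerer's actual code; `put_widget` is a hypothetical handler, with SQLite standing in for the real database:

```python
import sqlite3

def put_widget(db, widget_id, name):
    """Hypothetical controller entry point: updates the database and
    returns (status_code, body) the way an HTTP handler would."""
    db.execute("INSERT OR REPLACE INTO widgets VALUES (?, ?)", (widget_id, name))
    return 200, f"<p>{name}</p>"

def test_put_widget_acceptance():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE widgets (id INTEGER PRIMARY KEY, name TEXT)")
    status, body = put_widget(db, 1, "gadget")
    assert status == 200                                    # correct status code
    row = db.execute("SELECT name FROM widgets WHERE id=1").fetchone()
    assert row == ("gadget",)                               # database updated
    assert "gadget" in body                                 # page carries the info

test_put_widget_acceptance()
```

Because the test touches only the entry point and the observable outcomes, refactoring everything between the controller and the database leaves it untouched.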

We ended up having fewer bugs. Specifically, almost all the integration bugs, and the bugs caused by large refactors, disappeared almost completely.

There were trade-offs. It just turned out that the pros far outweighed the cons for our situation. Cons:

  • The tests are usually more complicated, and almost every one asserts on some side effects.
  • We can tell when something breaks, but it's not as targeted as the unit tests so we do have to do more debugging to track down where the problem is.

各自安好 2024-10-17 17:49:31


I've struggled with this kind of issue myself and don't have an answer that I feel is solid, but here is a tentative way of thinking about it. I observe two kinds of unit tests:

  1. There are tests that exercise the public interface. These are very important if we are to refactor with confidence; they prove that we honour our contract with our clients. These tests are best served by a hand-crafted reusable mock that deals with a small subset of test data.
  2. There are "coverage" tests. These tend to exist to prove that our implementation behaves correctly when dependencies misbehave. These, I think, need on-the-fly mocks to provoke particular implementation paths.
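The distinction can be sketched side by side (Python stand-ins for the question's C#/Moq context; `FakeRepository` and `Controller` are illustrative names):

```python
from unittest.mock import Mock

class FakeRepository:
    """Kind 1: a reusable hand-crafted fake for contract tests."""
    def __init__(self):
        self._store = {}
    def save(self, entity):
        self._store[entity["id"]] = entity

class Controller:
    def __init__(self, repo):
        self._repo = repo
    def save(self, entity):
        try:
            self._repo.save(entity)
            return 200
        except IOError:
            return 503

# Kind 1: contract test through the reusable fake; it survives
# refactors because it only checks the public outcome.
assert Controller(FakeRepository()).save({"id": 1}) == 200

# Kind 2: coverage test with an on-the-fly mock provoking a
# failure path the fake would never take on its own.
failing = Mock()
failing.save.side_effect = IOError("db down")
assert Controller(failing).save({"id": 1}) == 503
```

The contract tests stay stable across refactors; only the narrower coverage tests pay the coupling cost, and deliberately so.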