TDD practice: distinguishing genuine failures from unimplemented features

Posted on 2024-12-13 19:27:16


If you are in the middle of a TDD iteration, how do you know which tests fail because the existing code is genuinely incorrect and which fail because either the test itself or the features haven't been implemented yet? Please don't say, "you just don't care, because you have to fix both." I'm ready to move past that mindset.

My general practice for writing tests is as follows:

  • First, I architect the general structure of the test suite, in whole or in part. That is, I go through and write only the names of tests, reminding me of the features that I intend to implement. I typically (at least in Python) simply start with each test having only one line: self.fail(). This way, I can ride a stream of consciousness through listing every feature I think I will want to test - say, 11 tests at a time.

  • Second, I pick one test and actually write the test logic.

  • Third, I run the test runner and see 11 failures - 10 that simply self.fail() and 1 that is a genuine AssertionError.

  • Fourth, I write the code that causes my test to pass.

  • Fifth, I run the test runner and see 1 pass and 10 failures.

  • Sixth, I go to step 2.

Ideally, instead of seeing tests in terms of passes, failures, and exceptions, I'd like to have a fourth possibility: NotImplemented.

What's the best practice here?


Comments (4)

彼岸花ソ最美的依靠 2024-12-20 19:27:16


A number of TDD tools have the idea of PENDING tests vs FAILING tests. I think unittest2 makes this distinction too.

(I think the way you do this is to write:

def test_this_thing(self):
  pass

... but this is from memory...

[EDIT: In 2.7's unittest, or in unittest2, you can mark a test with the @unittest.skip or @unittest.expectedFailure decorator. See the unittest documentation on skipping tests and expected failures.]
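As a sketch of the decorators mentioned in the edit (available since Python 2.7's unittest): both produce a status distinct from a real failure - 's' for skipped and 'x' for an expected failure - so pending work no longer turns the run red. PendingDemo is a hypothetical test case.

```python
import unittest

class PendingDemo(unittest.TestCase):
    @unittest.skip("not implemented yet")
    def test_future_feature(self):
        self.fail()  # never executed; counted as skipped

    @unittest.expectedFailure
    def test_known_gap(self):
        self.assertEqual(1 + 1, 3)  # fails, but reported as expected

    def test_working_feature(self):
        self.assertEqual(1 + 1, 2)  # an ordinary pass

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(PendingDemo))
# 1 skipped, 1 expected failure, 0 real failures - the run is green.
print(len(result.skipped), len(result.expectedFailures), len(result.failures))
```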

甜宝宝 2024-12-20 19:27:16


I use a piece of paper to create a test list (scratchpad to keep track of tests so that I don't miss out on them). I hope you're not writing all the failing tests at one go (because that can cause some amount of thrashing as new knowledge comes in with each Red-Green-Refactor cycle).

To mark a test as TO-DO or not implemented, you could also tag it with the equivalent of [Ignore("PENDING")] or [Ignore("TODO")]. NUnit, for example, would show such tests as yellow instead of failed. So Red implies test failure, Yellow implies TODO.

拔了角的鹿 2024-12-20 19:27:16


Most projects would have a hierarchy (e.g. project->package->module->class) and if you can selectively run tests for any item on any of the levels or if your report covers these parts in detail you can see the statuses quite clearly. Most of the time, when an entire package or class fails, it's because it hasn't been implemented.

  Also, in many test frameworks you can disable an individual test case by removing its annotation/decorator or by renaming the method/function that performs the test. This has the disadvantage of not showing you the implementation progress, though if you settle on a fixed, specific prefix you can probably grep that info out of your test source tree quite easily.

Having said that, I would welcome a test framework that does make this distinction and has NOT_IMPLEMENTED in addition to the more standard test case status codes like PASS, WARNING and FAILED. I guess some might have it.
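The rename trick above can be sketched with unittest, whose default loader only collects methods whose names begin with "test": renaming a placeholder to a fixed prefix hides it from the runner while keeping it greppable. ParserTests and the "todo_" prefix are hypothetical choices.

```python
import unittest

class ParserTests(unittest.TestCase):
    def test_handles_empty_string(self):   # collected and run as usual
        self.assertEqual("".split(), [])

    def todo_handles_unicode(self):        # renamed: invisible to the loader
        self.fail()

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ParserTests)
# Only the "test"-prefixed method is collected; the pending work is
# recoverable by scanning for the chosen prefix (here, or via grep).
pending = [name for name in dir(ParserTests) if name.startswith("todo_")]
print(suite.countTestCases())  # 1
print(pending)                 # ['todo_handles_unicode']
```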

世界如花海般美丽 2024-12-20 19:27:16


I also now realize that the unittest.expectedFailure decorator accomplishes functionality congruent with my needs. I had always thought that this decorator was more for tests that require certain environmental conditions that might not exist in the production environment where the test is being run, but it actually makes sense in this scenario too.
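A sketch of that usage, where multiply() stands in for a hypothetical feature still under development: decorating its test with expectedFailure keeps the red placeholder out of the failure count, and once the feature is implemented the test surfaces as an "unexpected success", a reminder to remove the decorator.

```python
import unittest

def multiply(a, b):
    raise NotImplementedError  # the feature does not exist yet

class MultiplyTests(unittest.TestCase):
    @unittest.expectedFailure
    def test_multiply(self):
        self.assertEqual(multiply(2, 3), 6)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(MultiplyTests))
# Reported as an expected failure ('x'), not an error - the run stays green.
print(len(result.expectedFailures))  # 1
```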
