If you're doing TDD properly, you'll have a continuous integration server (something like CruiseControl, TeamCity, or TFS) that builds your code and runs all your tests every time you check in. If any test fails, the build fails.
So no, you don't go writing tests in advance. You write tests for what you're working on today, and you check in when they pass.
Failing tests are noise. If you have tests that you know fail, it's much harder to notice when a new, legitimate failure sneaks in. If you strive to keep all your tests passing, then even one failing test is a big warning sign -- it tells you it's time to drop everything and fix that bug. But if you always say "oh, it's fine, we always have a few hundred failing tests", then when real bugs slip in, you don't notice. You're negating the primary benefit of having tests.
Besides, it's silly to write tests now for something you won't work on for years. You're delaying the stuff you should be working on now, and you're wasting work if those future features get cut.
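The check-in gate described above can be sketched as the shell step a CI server runs on every check-in. This is a sketch, not any real server's config; `run_all_tests` is a placeholder for the project's actual test runner (which exits non-zero when any test fails):

```shell
#!/bin/sh
# Minimal sketch of a CI build step: run every test on each check-in,
# and fail the whole build if any test fails.

run_all_tests() {
    # placeholder for a real runner (pytest, NUnit, JUnit...),
    # which exits non-zero on any failing test
    return 0
}

if run_all_tests; then
    echo "BUILD PASSED"
else
    echo "BUILD FAILED"    # one red test fails the check-in
    exit 1
fi
```

The only contract that matters is the exit status: a red test makes the runner exit non-zero, which makes the build step exit non-zero, which fails the build.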
I don't have a lot of experience with TDD (I just started recently), but my understanding is that in TDD, tests and production code grow together. Remember Red-Green-Refactor: write a failing test, write just enough code to make it pass, then clean up. So I would write only enough tests to cover the functionality I'm working on now; writing tests upfront for future requirements might not be a good idea.
Maybe someone with more experience can provide a better perspective.
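A minimal sketch of one Red-Green-Refactor cycle, using Python's unittest and a hypothetical fizzbuzz function (neither comes from the answers above):

```python
import unittest

# Red: the tests below are written first, for today's requirement only,
# and fail until fizzbuzz() exists.
# Green: fizzbuzz() is just enough code to make the current tests pass.
def fizzbuzz(n):
    if n % 15 == 0:
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)

class TestFizzBuzz(unittest.TestCase):
    def test_multiple_of_three(self):
        self.assertEqual(fizzbuzz(3), "Fizz")

    def test_plain_number(self):
        self.assertEqual(fizzbuzz(7), "7")
```

Note what's absent: there are no tests here for requirements that haven't been scheduled yet. Those get written when they become today's work.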
Tests for future functionality can exist (I have BDD specs for things I'll implement later), but should either (a) not be run, or (b) run as non-error "pending" tests.
The system isn't expected to make them pass (yet): they're not valid tests yet, and shouldn't stand as an indication of system functionality.
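One way to get option (b) with Python's unittest (a stand-in for the BDD tooling the answer mentions; the feature and test names are hypothetical): mark future specs as skipped, so the runner reports them as pending rather than failing.

```python
import unittest

class TestExportFeature(unittest.TestCase):
    def test_single_record_export(self):
        # today's functionality: must pass
        self.assertEqual(len(["record"]), 1)

    @unittest.skip("pending: bulk export not implemented yet")
    def test_bulk_export(self):
        # future functionality: reported as skipped, so the suite
        # (and therefore the CI build) stays green
        self.fail("write this when bulk export is scheduled")

# the pending test shows up as skipped, not as a failure
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestExportFeature)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The suite still passes overall, and the skip reason keeps the future spec visible in the test report without polluting it with known failures.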