Test Driven Development initial implementation
A common practice of TDD is that you make tiny steps. But one thing which is bugging me is something I've seen a few people do, whereby they just hardcode values/options and then refactor later to make it work properly. For example…
describe Calculator do
  it "multiplies two numbers" do
    expect(Calculator.multiply(4, 2)).to eq(8)
  end
end
Then you do the least possible to make it pass:
class Calculator
  def self.multiply(a, b)
    return 8
  end
end
And it does!
Why do people do this? Is it to ensure they're actually implementing the method in the right class or something? Because it just seems like a sure-fire way to introduce bugs and give false confidence if you forget something. Is it a good practice?
This practice is known as "Fake it 'til you make it." In other words, put fake implementations in until such time as it becomes simpler to put in a real implementation. You ask why we do this.
I do this for a number of reasons. One is simply to ensure that my test is being run. It's possible for things to be configured wrong, so that when I hit my magic "run tests" key I'm not actually running the tests I think I'm running. If I press the button and it's red, then put in the fake implementation and it's green, I know I'm really running my tests.
Another reason for this practice is to keep a quick red/green/refactor rhythm going. That is the heartbeat that drives TDD, and it's important that it have a quick cycle. Important so you feel the progress, important so you know where you're at. Some problems (not this one, obviously) can't be solved in a quick heartbeat, but we must advance on them in a heartbeat. Fake it 'til you make it is a way to ensure that timely progress. See also flow.
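To make that concrete, here is a minimal Ruby/RSpec sketch of the "am I really running my tests?" check, built from the question's Calculator example (the RSpec wiring and the spec wording are my additions, not something from the original answer):

require "rspec/autorun"

class Calculator
  def self.multiply(a, b)
    8   # fake implementation: the least code that can turn the bar green
  end
end

RSpec.describe Calculator do
  it "multiplies 4 by 2" do
    # Run this before Calculator.multiply exists and it fails (red), which
    # proves the suite is actually executing. Add the hardcoded fake above
    # and it passes (green), confirming the class, method and test wiring.
    expect(Calculator.multiply(4, 2)).to eq(8)
  end
end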
There is a school of thought, which can be useful in training programmers to use TDD, that says you should not have any lines of source code that were not originally part of a unit test. By first coding the algorithm that passes the test into the test, you verify that your core logic works. Then, you refactor it out into something your production code can use, and write integration tests to define the interaction and thus the object structure containing this logic.
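A rough sketch of that progression, assuming the same Ruby/RSpec setup as the question (the in-test lambda is purely my illustration of "the algorithm starts inside the test"):

require "rspec/autorun"

# Stage 1: the algorithm lives entirely inside the unit test.
RSpec.describe "multiplication logic" do
  it "multiplies two numbers with logic coded in the test" do
    multiply = ->(a, b) { a * b }
    expect(multiply.call(4, 2)).to eq(8)
  end
end

# Stage 2: once verified, the logic is refactored out into production
# code and the spec is rewritten to exercise it there instead.
class Calculator
  def self.multiply(a, b)
    a * b
  end
end

RSpec.describe Calculator do
  it "multiplies two numbers via the extracted production code" do
    expect(Calculator.multiply(4, 2)).to eq(8)
  end
end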
Also, religious TDD adherence would tell you that there should be no logic coded that a requirement, verified by an assertion in a unit test, does not specifically call for. Case in point: at this time, the only test for multiplication in the system asserts that the answer must be 8. So, at this time, the answer is ALWAYS 8, because the requirements tell you nothing different.
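In code, the fake survives exactly until a second requirement arrives. The multiply(3, 3) case below is hypothetical, added only to show what finally forces the generalization:

require "rspec/autorun"

class Calculator
  def self.multiply(a, b)
    a * b   # the simplest code that satisfies both assertions below
  end
end

RSpec.describe Calculator do
  it "multiplies 4 by 2" do
    expect(Calculator.multiply(4, 2)).to eq(8)   # the hardcoded 8 also passed this
  end

  it "multiplies 3 by 3" do                      # hypothetical new requirement...
    expect(Calculator.multiply(3, 3)).to eq(9)   # ...which a hardcoded 8 cannot satisfy
  end
end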
This seems very strict, and in the context of a simple case like this, nonsensical: to verify correct functionality in the general case you would need an infinite number of unit tests, when you, as an intelligent human being, "know" how multiplication is supposed to work and could easily set up a test that generates and checks a multiplication table up to some limit that would make you confident it works in all necessary cases. However, in more complex scenarios with more involved algorithms, this becomes a useful study in the benefits of YAGNI. If the requirement states that you need to be able to save record A to the DB, and the ability to save record B is omitted, then you must conclude "you ain't gonna need" the ability to save record B until a requirement comes in that states otherwise. If you implement the ability to save record B before you know you need it, and it turns out you never do, then you have wasted time and effort building it into the system; you have code with no business purpose that can nevertheless still "break" your system and therefore requires maintenance.
Even in the simpler cases, you may end up coding more than you need if you code beyond requirements that you "know" are too light or too specific. Let's say you were implementing some sort of parser for string codes. The requirements state that the string code "AA" = 1 and "AB" = 2, and that's the limit of the requirements. But you know the full library of codes in this system includes 20 others, so you include logic and tests that parse the full library. You go back to the client, expecting payment for your time and materials, and the client says "we didn't ask for that; we only ever use the two codes we specified in the tests, so we're not paying you for the extra work". And they would be exactly right; you've technically tried to bilk them by charging for code they didn't ask for and don't need.
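As a sketch of what the requirements-only version of that parser might look like (CodeParser and the hash lookup are my own names and choices, not anything from the answer):

require "rspec/autorun"

# Only the two codes the requirements name are implemented; the other
# 20 codes you "know" about stay out until a test demands them.
class CodeParser
  CODES = { "AA" => 1, "AB" => 2 }.freeze

  def self.parse(code)
    CODES.fetch(code) { raise ArgumentError, "unknown code: #{code}" }
  end
end

RSpec.describe CodeParser do
  it "parses the two codes the requirements specify" do
    expect(CodeParser.parse("AA")).to eq(1)
    expect(CodeParser.parse("AB")).to eq(2)
  end
end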