Does YAGNI also apply when writing tests?

Posted on 2024-07-22 13:30:43

When I write code I only write the functions I need as I need them.

Does this approach also apply to writing tests?

Should I write a test in advance for every use-case I can think of just to play it safe or should I only write tests for a use-case as I come upon it?

Comments (11)

溺深海 2024-07-29 13:30:43

I think that when you write a method you should test both expected and potential error paths. This doesn't mean that you should expand your design to encompass every potential use -- leave that for when it's needed, but you should make sure that your tests have defined the expected behavior in the face of invalid parameters or other conditions.

YAGNI, as I understand it, means that you shouldn't develop features that are not yet needed. In that sense, you shouldn't write a test that drives you to develop code that's not needed. I suspect, though, that's not what you are asking about.

In this context I'd be more concerned with whether you should write tests that cover unexpected uses -- for example, errors due to passing null or out-of-range parameters -- or repeating tests that only differ with respect to the data, not the functionality. In the former case, as I indicated above, I would say yes. Your tests will document the expected behavior of your method in the face of errors. This is important information for people who use your method.

In the latter case, I'm less able to give you a definitive answer. You certainly want your tests to remain DRY -- don't write a test that simply repeats another test even if it has different data. On the other hand, you may not discover potential design issues unless you exercise the edge cases of your data. A simple example is a method that computes the sum of two integers: what happens if you pass it maxint as both parameters? If you only have one test, then you may miss this behavior. Obviously, this is related to the previous point. Only you can judge whether a test is really needed.
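
To make that maxint example concrete, here is a minimal JUnit 5 sketch. The Adder class and its add method are hypothetical names, and returning a long is only one possible design; the point is that only the second test forces the overflow question to be asked at all.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Hypothetical unit under test: returns the sum as a long so that
    // Integer.MAX_VALUE + Integer.MAX_VALUE cannot silently overflow.
    class Adder {
        long add(int a, int b) {
            return (long) a + b;
        }
    }

    class AdderTest {
        @Test
        void addsSmallValues() {
            assertEquals(5L, new Adder().add(2, 3));
        }

        // The edge case: an implementation that returned "a + b" as an int
        // would wrap around here, and a single happy-path test would miss it.
        @Test
        void addsMaxIntToMaxIntWithoutOverflow() {
            assertEquals(2L * Integer.MAX_VALUE,
                         new Adder().add(Integer.MAX_VALUE, Integer.MAX_VALUE));
        }
    }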

╰つ倒转 2024-07-29 13:30:43

Yes, YAGNI absolutely applies to writing tests.

As an example, I, for one, do not write tests to check any Properties. I assume that properties work a certain way, and until I come to one that does something different from the norm, I won't have tests for them.

You should always consider the validity of writing any test. If there is no clear benefit to you in writing the test, then I would advise that you don't. However, this is clearly very subjective, since what you might think is not worth it someone else could think is very worth the effort.

Also, would I write tests to validate input? Absolutely. However, I would only do it to a point. Say you have a function with 3 parameters that are ints and it returns a double. How many tests are you going to write around that function? I would use YAGNI here to determine which tests are going to get you a good ROI, and which are useless.
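
As a rough sketch of that ROI judgement (the average function below is made up for illustration), a YAGNI-minded test class covers a typical value and one or two deliberate edges rather than every combination of three ints:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class AverageTest {
        // Hypothetical function under test: three ints in, a double out.
        static double average(int a, int b, int c) {
            return ((long) a + b + c) / 3.0;
        }

        @Test
        void averagesTypicalValues() {
            assertEquals(2.0, average(1, 2, 3), 1e-9);
        }

        @Test
        void averagesNegativeValues() {
            assertEquals(-2.0, average(-1, -2, -3), 1e-9);
        }

        // One chosen edge case; testing every combination of three ints
        // would cost far more than it could ever pay back.
        @Test
        void handlesLargeValuesWithoutIntOverflow() {
            assertEquals((double) Integer.MAX_VALUE,
                         average(Integer.MAX_VALUE, Integer.MAX_VALUE, Integer.MAX_VALUE),
                         1e-9);
        }
    }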

孤檠 2024-07-29 13:30:43

Write the test as you need it. Tests are code. Writing a bunch of (initially failing) tests up front breaks the red/fix/green cycle of TDD, and makes it harder to identify valid failures vs. unwritten code.
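
A minimal sketch of what that cycle looks like in practice, using a hypothetical slugify example: one test is written and fails (red), the simplest code makes it pass (green), and only then is the next test added, instead of a page of failing tests up front.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class SlugifyTest {

        // Red: this was the only test written; it failed because
        // slugify() did not exist yet.
        @Test
        void lowercasesAndReplacesSpaces() {
            assertEquals("hello-world", slugify("Hello World"));
        }

        // Green: the simplest implementation that makes the test pass.
        // The next test (punctuation, accents, empty input, ...) is only
        // written once this one is passing.
        static String slugify(String input) {
            return input.trim().toLowerCase().replace(' ', '-');
        }
    }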

表情可笑 2024-07-29 13:30:43

You should write the tests for the use cases you are going to implement during this phase of development.

This gives the following benefits:

  1. Your tests help define the functionality of this phase.
  2. You know when you've completed this phase because all of your tests pass.

三生一梦 2024-07-29 13:30:43

You should write tests that cover all your code, ideally. Otherwise, the rest of your tests lose value, and you will in the end debug that piece of code repeatedly.

So, no. YAGNI does not include tests :)

春风十里 2024-07-29 13:30:43

There is of course no point in writing tests for use cases you're not sure will get implemented at all - that much should be obvious to anyone.

For use cases you know will get implemented, test cases are subject to diminishing returns, i.e. trying to cover each and every possible obscure corner case is not a useful goal when you can cover all important and critical paths with half the work - assuming, of course, that the cost of overlooking a rarely occurring error is endurable; I would certainly not settle for anything less than 100% code and branch coverage when writing avionics software.

魂归处 2024-07-29 13:30:43

You'll probably get some variance here, but generally, the goal of writing tests (to me) is to ensure that all your code is functioning as it should, without side effects, in a predictable fashion and without defects. In my mind, then, the approach you discuss of only writing tests for use cases as they are come upon does you no real good, and may in fact cause harm.

What if the particular use case for the unit under test that you ignore causes a serious defect in the final software? Has the time spent developing tests bought you anything in this scenario beyond a false sense of security?

(For the record, this is one of the issues I have with using code coverage to "measure" test quality -- it's a measurement that, if low, may give an indication that you're not testing enough, but if high, should not be used to assume that you are rock-solid. Get the common cases tested, the edge cases tested, then consider all the ifs, ands and buts of the unit and test them, too.)

Mild Update

I should note that I'm coming from possibly a different perspective than many here. I often find that I'm writing library-style code, that is, code which will be reused in multiple projects, for multiple different clients. As a result, it is generally impossible for me to say with any certainty that certain use cases simply won't happen. The best I can do is either document that they're not expected (and hence may require updating the tests afterward), or -- and this is my preference :) -- just write the tests. I often find option #2 is far more livable on a day-to-day basis, simply because I have much more confidence when I'm reusing component X in new application Y. And confidence, in my mind, is what automated testing is all about.

与风相奔跑 2024-07-29 13:30:43

You should certainly hold off writing test cases for functionality you're not going to implement yet. Tests should only be written for existing functionality or functionality you're about to put in.

However, use cases are not the same as functionality. You only need to test the valid use cases that you've identified, but there's going to be a lot of other things that might happen, and you want to make sure those inputs get a reasonable response (which could well be an error message).

Obviously, you aren't going to get all the possible use cases; if you could, there'd be no need to worry about computer security. You should get at least the more plausible ones, and as problems come up you should add them to the use cases to test.
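
A small sketch of that idea, assuming a hypothetical parseAge helper: the valid use case gets a test, and the most plausible invalid inputs get tests that pin down the error response instead of leaving it undefined.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;
    import org.junit.jupiter.api.Test;

    class ParseAgeTest {
        // Hypothetical function under test: parses an age, rejects nonsense.
        static int parseAge(String text) {
            if (text == null || text.isBlank()) {
                throw new IllegalArgumentException("age must not be empty");
            }
            int age = Integer.parseInt(text.trim());
            if (age < 0 || age > 150) {
                throw new IllegalArgumentException("age out of range: " + age);
            }
            return age;
        }

        @Test
        void parsesAValidAge() {
            assertEquals(42, parseAge("42"));
        }

        // Not every conceivable bad input -- just the plausible ones, each
        // asserting that the caller gets a clear error, not a random failure.
        @Test
        void rejectsEmptyAndOutOfRangeInput() {
            assertThrows(IllegalArgumentException.class, () -> parseAge(null));
            assertThrows(IllegalArgumentException.class, () -> parseAge("-1"));
        }
    }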

乖不如嘢 2024-07-29 13:30:43

I think the answer here is, as it is in so many places, it depends. If the contract that a function presents states that it does X, and I see that it's got associated unit tests, etc., I'm inclined to think it's a well-tested unit and use it as such, even if I don't use it that exact way elsewhere. If that particular usage pattern is untested, then I might get confusing or hard-to-trace errors. For this reason, I think a test should cover all (or most) of the defined, documented behavior of a unit.

If you choose to test more incrementally, I might add to the doc comments that the function is "only tested for [certain kinds of input], results for other inputs are undefined".
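
For example, a hedged note in the doc comment (the splitTags method and its constraints are made up for illustration) makes the tested scope explicit to the next caller:

    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.Collectors;

    class TagUtil {
        /**
         * Splits a comma-separated list of tags into trimmed, non-empty entries.
         *
         * NOTE: only tested for plain ASCII input without embedded quotes or
         * escaped commas; results for other inputs are undefined until tests
         * covering them are added.
         */
        static List<String> splitTags(String csv) {
            return Arrays.stream(csv.split(","))
                         .map(String::trim)
                         .filter(s -> !s.isEmpty())
                         .collect(Collectors.toList());
        }
    }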

不即不离 2024-07-29 13:30:43

I frequently find myself writing tests, TDD, for cases that I don't expect the normal program flow to invoke. The "fake it 'til you make it" approach has me starting, generally, with a null input - just enough to have an idea in mind of what the function call should look like, what types its parameters will have and what type it will return. To be clear, I won't just send null to the function in my test; I'll initialize a typed variable to hold the null value; that way when Eclipse's Quick Fix creates the function for me, it already has the right type. But it's not uncommon that I won't expect the program normally to send a null to the function. So, arguably, I'm writing a test that I AGN. But if I start with values, sometimes it's too big a chunk. I'm both designing the API and pushing its real implementation from the beginning. So, by starting slow and faking it 'til I make it, sometimes I write tests for cases I don't expect to see in production code.
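
Roughly what that first fake-it-'til-you-make-it test looks like (all names are hypothetical): the typed local holding null is what lets the IDE's Quick Fix generate the new method with the signature the test implies, rather than one taking Object.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class ReportBuilderTest {

        // First TDD step: the test pins down the call shape before the
        // method exists. The typed variable holds the null deliberately.
        @Test
        void buildsAnEmptyReportWhenThereIsNoOrder() {
            Order order = null;
            String report = ReportBuilder.buildReport(order);
            assertEquals("", report);
        }
    }

    // Hypothetical production types, sketched just enough to compile.
    class Order { }

    class ReportBuilder {
        static String buildReport(Order order) {
            return order == null ? "" : order.toString();
        }
    }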

愿得七秒忆 2024-07-29 13:30:43

If you're working in a TDD or XP style, you won't be writing anything "in advance" as you say, you'll be working on a very precise bit of functionality at any given moment, so you'll be writing all the necessary tests in order make sure that bit of functionality works as you intend it to.

Test code is similar to the "code" itself: you won't be writing code in advance for every use case your app has, so why would you write test code in advance?
