Are unit tests and acceptance tests enough?

Posted 2024-07-21 00:07:16

If I have unit tests for each class and/or member function and acceptance tests for every user story, do I have enough tests to ensure the project functions as expected?

For instance, if I have unit tests and acceptance tests for a feature, do I still need integration tests, or should the unit and acceptance tests cover the same ground? Is there overlap between test types?

I'm talking about automated tests here. I know manual testing is still needed for things like ease of use, etc.

痴意少年 2024-07-28 00:07:16

If I have unit tests for each class and/or member function and acceptance tests for every user story do I have enough tests to ensure the project functions as expected?

No. Tests can only verify what you have thought of, not what you haven't thought of.

长不大的小祸害 2024-07-28 00:07:16

I'd recommend reading chapters 20 - 22 in the 2nd edition of Code Complete. It covers software quality very well.

Here's a quick breakdown of some of the key points (all credit goes to McConnell, 2004):

Chapter 20 - The Software-Quality Landscape:

  • No single defect-detection technique is completely effective by itself
  • The earlier you find a defect, the less intertwined it will become with the rest of your code and the less damage it will cause

Chapter 21 - Collaborative Construction:

  • Collaborative development practices tend to find a higher percentage of defects than testing and to find them more efficiently
  • Collaborative development practices tend to find different kinds of errors than testing does, implying that you need to use both reviews and testing to ensure the quality of your software
  • Pair programming typically costs about the same as inspections and produces similar-quality code

Chapter 22 - Developer Testing:

  • Automated testing is useful in general and is essential for regression testing
  • The best way to improve your testing process is to make it regular, measure it, and use what you learn to improve it
  • Writing test cases before the code takes the same amount of time and effort as writing the test cases after the code, but it shortens defect-detection-debug-correction-cycles (Test Driven Development)

As for how you formulate your unit tests, you should consider basis testing, data-flow analysis, boundary analysis, etc. All of these are explained in great detail in the book (which also includes many other references for further reading).
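
As a hedged illustration of boundary analysis only (this example is not from the book; `parse_age` is a hypothetical function invented for the sketch, and pytest is assumed), a unit test would deliberately probe the edges of the valid range rather than just typical values:

```python
import pytest


def parse_age(value: str) -> int:
    """Hypothetical function under test: accepts ages 0-150 inclusive."""
    age = int(value)
    if not 0 <= age <= 150:
        raise ValueError(f"age out of range: {age}")
    return age


@pytest.mark.parametrize("raw, expected", [("0", 0), ("150", 150)])
def test_valid_boundaries(raw, expected):
    # The smallest and largest values that are still legal.
    assert parse_age(raw) == expected


@pytest.mark.parametrize("raw", ["-1", "151"])
def test_just_outside_boundaries(raw):
    # One step outside the valid range on either side must be rejected.
    with pytest.raises(ValueError):
        parse_age(raw)
```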

Maybe this isn't exactly what you were asking, but I would say automated testing is definitely not enough of a strategy. You should also consider such things as pair programming, formal reviews (or informal reviews, depending on the size of the project) and test scaffolding along with your automated testing (unit tests, regression testing etc.).

一杯敬自由 2024-07-28 00:07:16

The idea of multiple testing cycles is to catch problems as early as possible when things change.

Unit tests should be done by the developers to ensure the units work in isolation.

Acceptance tests should be done by the client to ensure the system meets the requirements.

However, something has changed between those two points that should also be tested. That's the integration of units into a product before being given to the client.

That's something that should first be tested by the product creator, not the client. The minute you involve the client, things slow down, so the more fixes you can do before they get their grubby little hands on it, the better.

In a big shop (like ours), there are unit tests, integration tests, globalization tests, master-build tests and so on at each point where the deliverable product changes. Only once all high severity bugs are fixed (and a plan for fixing low priority bugs is in place) do we unleash the product to our beta clients.

We do not want to give them a dodgy product simply because fixing a bug at that stage is a lot more expensive (especially in terms of administrivia) than anything we do in-house.

南渊 2024-07-28 00:07:16

It's really impossible to know whether or not you have enough tests based simply on whether you have a test for every method and feature. Typically I will combine testing with coverage analysis to ensure that all of my code paths are exercised in my unit tests. Even this is not really enough, but it can be a guide to where you may have introduced code that isn't exercised by your tests. This should be an indication that more tests need to be written or, if you're doing TDD, you need to slow down and be more disciplined. :-)

Tests should cover both good and bad paths, especially in unit tests. Your acceptance tests may be more or less concerned with the bad path behavior but should at least address common errors that may be made. Depending on how complete your stories are, the acceptance tests may or may not be adequate. Often there is a many-to-one relationship between acceptance tests and stories. If you only have one automated acceptance test for every story, you probably don't have enough unless you have different stories for alternate paths.
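
As a minimal sketch of covering both paths (the `withdraw` function and the amounts are invented for illustration; pytest and a coverage tool such as coverage.py are assumed):

```python
import pytest


def withdraw(balance: float, amount: float) -> float:
    """Hypothetical function under test: returns the new balance."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount


def test_withdraw_good_path():
    # Happy path: the balance is reduced by the requested amount.
    assert withdraw(100.0, 40.0) == 60.0


def test_withdraw_bad_path_overdraw():
    # Bad path: overdrawing must fail rather than return a negative balance.
    with pytest.raises(ValueError):
        withdraw(100.0, 150.0)


def test_withdraw_bad_path_non_positive_amount():
    # Bad path: zero or negative amounts are rejected.
    with pytest.raises(ValueError):
        withdraw(100.0, 0.0)
```

Running these under a coverage tool then confirms that every branch of `withdraw` is actually exercised.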

一瞬间的火花 2024-07-28 00:07:16

Multiple layers of testing can be very useful. Unit tests to make sure the pieces behave; integration to show that clusters of cooperating units cooperate as expected, and "acceptance" tests to show that the program functions as expected. Each can catch problems during development. Overlap per se isn't a bad thing, though too much of it becomes waste.

That said, the sad truth is that you can never ensure that the product behaves "as expected", because expectation is a fickle, human thing that gets translated very poorly onto paper. Good test coverage won't prevent a customer from saying "that's not quite what I had in mind...". Frequent feedback loops help there. Consider frequent demos as a "sanity test" to add to your manual mix.

幻梦 2024-07-28 00:07:16

Probably not, unless your software is really, really simple and has only one component.

Unit tests are very specific, and you should cover everything thoroughly with them. Go for high code-coverage here. However, they only cover one piece of functionality at a time and not how things work together. Acceptance tests should cover only what the customer really cares about at a high level, and while it will catch some bugs in how things work together, it won't catch everything as the person writing such tests will not know about the system in depth.

Most importantly, these tests may not be written by a tester. Unit tests should be written by developers and run frequently (up to every couple minutes, depending on coding style) by the devs (and by the build system too, ideally). Acceptance tests are often written by the customer or someone on behalf of the customer, thinking about what matters to the customer. However, you also need tests written by a tester, thinking like a tester (and not like a dev or customer).

You should also consider the following sorts of tests, which are generally written by testers:

  • Functional tests, which will cover pieces of functionality. This may include API testing and component-level testing. You will generally want good code-coverage here as well.
  • Integration tests, which put two or more components together to make sure that they work together. You don't want one component to put out the position in the array where the object is (0-based) when the other component expects the count of the object ("nth object", which is 1-based), for example; a sketch of that kind of check follows this list. Here, the focus is not on code coverage but on coverage of the interfaces (general interfaces, not code interfaces) between components.
  • System-level testing, where you put everything together and make sure it works end-to-end.
  • Testing for non-functional features, like performance, reliability, scalability, security, and user-friendliness (there are others; not all will relate to every project).
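
As a hedged sketch of the integration-test bullet above (the `Inventory` and `ReportFormatter` classes are invented stand-ins for two components; the point is checking the 0-based/1-based interface where they meet, not either component alone):

```python
class Inventory:
    """First component: reports an item's position as a 0-based index."""

    def __init__(self, items):
        self._items = list(items)

    def index_of(self, item) -> int:
        return self._items.index(item)  # 0-based


class ReportFormatter:
    """Second component: expects a 1-based 'nth item' count for display."""

    @staticmethod
    def describe(item, nth: int) -> str:
        return f"{item} is item #{nth}"


def test_components_agree_on_numbering():
    # Integration test: the glue between the components must convert the
    # 0-based index into the 1-based count the formatter expects.
    inventory = Inventory(["apple", "banana", "cherry"])
    nth = inventory.index_of("banana") + 1  # 0-based -> 1-based
    assert ReportFormatter.describe("banana", nth) == "banana is item #2"
```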

无远思近则忧 2024-07-28 00:07:16

Integration tests are for when your code integrates with other systems, such as 3rd-party applications or other in-house systems such as the environment, database, etc. Use integration tests to ensure that the behavior of the code is still as expected.
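
A minimal sketch of that kind of test, using an in-memory SQLite database as a stand-in for the real backend (the `save_user`/`load_user` helpers are hypothetical; the point is exercising the code against a real database rather than a mock):

```python
import sqlite3


def save_user(conn: sqlite3.Connection, name: str) -> int:
    # Hypothetical data-access code under test.
    cur = conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()
    return cur.lastrowid


def load_user(conn: sqlite3.Connection, user_id: int) -> str:
    row = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
    return row[0]


def test_user_round_trip_through_database():
    # Integration test: confirm the SQL and the calling code still agree,
    # which a pure unit test with mocks would not catch.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    user_id = save_user(conn, "alice")
    assert load_user(conn, user_id) == "alice"
```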

花开柳相依 2024-07-28 00:07:16

In short, no.

To begin with, your story cards should have acceptance criteria. That is, acceptance criteria specified by the product owner, in conjunction with the analyst, describing the required behavior; if the criteria are met, the story card is accepted.

The acceptance criteria should drive the automated unit tests (done via TDD) and the automated regression/functional tests, which should be run daily. Remember, we want to move defects to the left; that is, the sooner we find them, the cheaper and faster they are to fix. Furthermore, continuous testing enables us to refactor with confidence. This is required to maintain a sustainable pace of development.

In addition, you need automated performance tests. Running a profiler daily or overnight provides insight into CPU and memory consumption and whether any memory leaks exist. Furthermore, a tool like LoadRunner will enable you to place a load on the system that reflects actual usage. You will be able to measure response times and CPU and memory consumption on a production-like machine while it is under the LoadRunner load.

The automated performance test should reflect actual usage of the app. You measure the business transactions (e.g., for a web application, a click on a page and the response to the user, or a round trip to the server) and determine the mix of such transactions along with the rate at which they arrive per second. Such information will enable you to properly design the automated LoadRunner test required to performance-test the application. As is often the case, some of the performance issues will trace back to the implementation of the application, while others will be determined by the configuration of the server environment.
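
LoadRunner is a commercial tool; purely as a hedged illustration of the idea (driving a batch of transactions at some concurrency and measuring response times), a minimal standard-library sketch might look like the following, where the URL, transaction count, and concurrency are placeholder assumptions:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8000/checkout"  # placeholder endpoint
TRANSACTIONS = 200                             # placeholder load volume
CONCURRENCY = 20                               # simulated simultaneous users


def one_transaction(_):
    # Time a single round trip to the server.
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start


def main():
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        times = sorted(pool.map(one_transaction, range(TRANSACTIONS)))
    print(f"transactions: {len(times)}")
    print(f"mean response: {sum(times) / len(times):.3f}s")
    print(f"95th percentile: {times[int(0.95 * len(times))]:.3f}s")


if __name__ == "__main__":
    main()
```

A real performance test would also track CPU and memory on the server, as described above.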

Remember, your application will be performance tested. The question is, will the first performance test happen before or after you release the software? Believe me, the worst place to have a performance problem is in production. Performance issues can be the hardest to fix and can cause a deployment to all users to fail, thus cancelling the project.

Finally, there is User Acceptance Testing (UAT). These are tests designed by the product owner/business partner to test the overall system prior to release. In general, because of all the other testing, it is not uncommon for the application to return zero defects during UAT.

可遇━不可求 2024-07-28 00:07:16

It depends on how complex your system is. If your acceptance tests (which satisfy the customer's requirements) exercise your system from front to back, then no, you don't.

However, if your product relies on other tiers (like backend middleware/database) then you do need a test that proves that your product can happily link up end-to-end.

As other people have commented, tests don't necessarily prove the project functions as expected, just how you expect it to work.

Frequent feedback loops to the customer and/or tests that are written (or parsable) in a way the customer understands (say, for example, in a BDD style) can really help.
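
For instance, a test written in a given/when/then shape reads close to the customer's language. This is only a hand-rolled sketch in plain pytest style (tools such as Cucumber or behave let the same steps be expressed as plain text), and the `ShoppingCart` class is invented so the example is self-contained:

```python
class ShoppingCart:
    """Invented domain object for the sake of the example."""

    def __init__(self):
        self._prices = []

    def add_item(self, price: float) -> None:
        self._prices.append(price)

    def total(self) -> float:
        return sum(self._prices)


def test_customer_sees_correct_total_for_two_items():
    # Given a cart containing a book and a pen
    cart = ShoppingCart()
    cart.add_item(12.50)
    cart.add_item(1.25)

    # When the customer looks at the total
    total = cart.total()

    # Then it is the sum of the two prices
    assert total == 13.75
```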

你列表最软的妹 2024-07-28 00:07:16

If I have unit tests for each class and/or member function and acceptance tests for every user story, do I have enough tests to ensure the project functions as expected?

This is enough to show your software is functionally correct, at least to the extent that your test coverage is sufficient. Now, depending on what you're developing, there certainly are non-functional requirements that matter: think about reliability, performance, and scalability.

似梦非梦 2024-07-28 00:07:16

Technically, a full suite of acceptance tests should cover everything. That being said, they're not "enough" for most definitions of enough. By having unit tests and integration tests, you can catch bugs/issues earlier and in a more localized manner, making them much easier to analyze and fix.

Consider that a full suite of manually executed tests, with the directions written on paper, would be enough to validate that everything works as expected. However, if you can automate the tests, you'd be much better off because it makes doing the testing that much easier. The paper version is "complete", but not "enough". In the same way, each layer of tests adds more to the value of "enough".

It's also worth noting that the different sets of tests tend to test the product/code from different "viewpoints". In much the same way that QA may pick up bugs the developers never thought to test for, one set of tests may find things the other set wouldn't.

念三年u 2024-07-28 00:07:16

Acceptance testing can even be done manually by the client if the system in hand is small.

Unit tests and small integration tests (consisting of unit-like tests) are there to help you build a sustainable system.

Don't try to write tests for every part of the system. That is brittle (easy to break) and overwhelming.

Decide on the critical parts of the system that take too much time to test manually, and write acceptance tests only for those parts, to make things easy for everyone.
