The statement "Product released if and only if X% of test cases from the testing team pass" really bothers me. The team may want to consider better release criteria, gated on more than just test pass rates. For example: are the scenarios known, understood, accounted for (and tested)? Certainly not all bugs will be fixed, but have the ones that were postponed or left unfixed been triaged correctly? Have you reached your stress-testing and performance goals? Have you threat-modelled the product and accounted for mitigations to potential threats? Have X customers (internal or external) deployed builds and provided feedback prior to release (i.e. "dogfood")? Do developers understand the bugs coming from the field, and do testers use them to create regression unit tests? Does the requirements team review these incoming bugs to see why the scenarios weren't accounted for? Are there key integration points between features that weren't accounted for in specs, development, or testing?
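Release criteria like these can be thought of as a multi-gate check in which the pass rate is necessary but not sufficient. A minimal sketch, where every metric name and threshold is hypothetical and only illustrates the idea:

```python
# Hypothetical multi-criterion release gate: test pass rate is only one
# of several checks, mirroring the questions raised above.

def release_gate(metrics: dict) -> tuple[bool, list[str]]:
    """Return (ok, failures) for a candidate build."""
    failures = []
    if metrics["test_pass_rate"] < 0.95:        # pass rate is necessary...
        failures.append("test pass rate below 95%")
    if metrics["untriaged_open_bugs"] > 0:      # ...but not sufficient
        failures.append("open bugs not yet triaged")
    if not metrics["stress_goals_met"]:
        failures.append("stress/performance goals not met")
    if not metrics["threat_model_reviewed"]:
        failures.append("threat model not reviewed")
    if metrics["dogfood_deployments"] < 3:      # internal/external early adopters
        failures.append("insufficient dogfood feedback")
    return (not failures, failures)

ok, why = release_gate({
    "test_pass_rate": 0.97,
    "untriaged_open_bugs": 2,
    "stress_goals_met": True,
    "threat_model_reviewed": True,
    "dogfood_deployments": 5,
})
print(ok, why)  # False ['open bugs not yet triaged']
```

The point of structuring it this way is that a build with a 97% pass rate still fails the gate if, say, open bugs have not been triaged.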
A few suggestions for the team: first, do a postmortem on the issues found to understand where the process broke down, and strive to push quality upstream as much as possible. Make sure the requirements team, developers, and testers communicate frequently and well throughout the planning, development, and testing cycle, so everyone is on the same page and knows who is doing what. You would be amazed at how much product quality can be gained when people actually talk to each other during development!
Bugs can enter the system at both the requirements and development steps. The requirements team could make mistakes or over-simplifying assumptions when creating the requirements, and the developers could misinterpret the requirements or make their own assumptions.
To improve things, the customer should sign off on the requirements before development proceeds, and should be involved, at least to some extent, in monitoring development to ensure things are on the right track.
The first question in my mind would be, "how do the defects stack up against the requirements?"
If the requirement reads, "OK button should be blue" and the defect is "OK button is green", I would blame development and test -- clearly, neither read the requirements. On the other hand, if the complaint is, "OK button is not yellow", clearly, there was an issue with requirements gathering or your change-control process.
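The button-colour example also shows how a field defect can be pinned down as a regression test once the requirement is confirmed. A hypothetical sketch, assuming a `get_ok_button_color()` accessor stands in for the real UI under test:

```python
import unittest

# Hypothetical accessor standing in for the real UI under test;
# in practice this would query the actual product.
def get_ok_button_color() -> str:
    return "blue"

class OkButtonRegressionTest(unittest.TestCase):
    # Written after the "OK button is green" defect was fixed, and tied
    # to the agreed requirement so the bug cannot silently reappear.
    def test_ok_button_is_blue_per_requirement(self):
        self.assertEqual(get_ok_button_color(), "blue")
```

If the customer later complains the button "is not yellow", the failing conversation happens in change control, not in this test: the test encodes the requirement as written, which is exactly what makes the blame question answerable.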
There's no easy answer to this question. A system can have a large number of defects with responsibility spread between everyone involved in the process -- after all, a "defect" is just another way of saying "unmet customer expectation". Expectations, in themselves, are not always correct.
"Product released if and only if X% of test cases from the testing team pass" is just one criterion for release. In that case, the coverage of the written test cases is very important: the test cases need a thorough review to check whether any functionality or scenario has been missed. If anything is missing from the test cases, bugs may slip through because some requirement is not covered.
It also takes some ad-hoc testing, as well as exploratory testing, to find the bugs the written test cases don't cover. And the team needs to define exit criteria for testing.
If a customer or client finds a bug or defect, it is necessary to investigate: i) What type of bug is it? ii) Is there a test case covering it? iii) If so, was that test case executed properly? iv) If it is absent from the test cases, why was it missed? And so on.
After the investigation, a decision can be made about who is responsible. If it is a very simple, obvious bug or defect, the testers should certainly take the blame.
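The investigation questions above amount to a small decision flow. A sketch under assumed field names (`covered_by_test_case`, `test_case_executed_correctly` are hypothetical labels for the answers to questions ii and iii):

```python
# Hypothetical triage flow for a customer-reported defect, following
# the investigation questions i)-iv) above.

def triage_field_bug(bug: dict) -> str:
    if bug["covered_by_test_case"]:
        if bug["test_case_executed_correctly"]:
            # The test ran and passed, yet the bug escaped:
            # the test's assertions are too weak.
            return "escalate: review the test's assertions"
        # A test existed but was skipped or run incorrectly.
        return "execution gap: rerun and fix the test process"
    # No test at all: a coverage gap, and a question for requirements
    # about why the scenario was never accounted for.
    return "coverage gap: add a test case and ask why it was missed"
```

Only the last branch points clearly at the testers; the other two implicate the test design or the execution process, which is why the investigation should come before any blame is assigned.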