I would use the requirement numbering scheme already in place rather than creating a new one. I would document the following items for each requirement (a rough sketch of such a record follows the list):
Requirement Status: This can be phrased in many different ways, but you are trying to communicate whether the requirement was completed as listed, completed in a modified form of what was listed, or simply could not be completed at all.
Requirement Comment: Explains the requirement status above. This is the "why" behind any items that were not able to fully meet the requirement.
Date Completed: This is mostly for future product planning but also serves as a historical reference.
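If you keep these records in machine-readable form, a minimal sketch might look like the following; the field names and Status values here are my own illustration, not a standard:

    from dataclasses import dataclass
    from datetime import date
    from enum import Enum
    from typing import Optional

    class Status(Enum):
        COMPLETED = "completed as listed"
        MODIFIED = "completed in modified form"
        NOT_COMPLETED = "not completed"

    @dataclass
    class RequirementRecord:
        req_id: str                    # keep the existing numbering, e.g. "REQ-1"
        status: Status                 # Requirement Status
        comment: str                   # Requirement Comment: the "why"
        date_completed: Optional[date] = None

    # Example entry (invented values):
    rec = RequirementRecord(
        req_id="REQ-1",
        status=Status.MODIFIED,
        comment="Layout differs from the spec but was accepted by the client.",
        date_completed=date(2024, 1, 15),
    )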
A couple of other points to remember:
Requirements may be reviewed by the customer, especially if the customer was the source of the requirements. Hence, this document needs to be as accurate and as informative as possible. (It's also another reason you don't change the requirement numbering scheme unless you have to.)
Your testing department (assuming you have one) should be using these documents for their test planning, and they need to know which requirements were met, which ones weren't, and most importantly which ones changed and how.
Lastly, unless you're putting on a dog and pony show for someone, you shouldn't need screenshots as part of requirement documentation. You also shouldn't need to provide "proof" of completion. The testing department will do that for you.
There are some techniques to convert your requirements into test cases, but those depend on how your requirements are documented. If you have already done a scenario-based requirements analysis, it is very easy: just create a sequence diagram for every path of your scenario, then write and run a test for each path -> done. Besides, the documentation created that way should also impress your lecturer.
If you don't have scenarios, you should create some out of your use cases.
The downside here is that it is very work-intensive and should only be used in cases that justify the effort (a thesis, for example ;))
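To make the one-test-per-scenario-path idea concrete, here is a minimal pytest-style sketch; the scenario, its two paths, and the toy login function are all invented for illustration:

    # Hypothetical system under test: a toy login function.
    def login(username, password):
        valid = {"alice": "secret"}
        return valid.get(username) == password

    # Path 1 of the scenario: correct credentials, login succeeds.
    def test_login_main_path():
        assert login("alice", "secret")

    # Path 2 of the scenario: wrong password, login is refused.
    def test_login_alternate_path():
        assert not login("alice", "wrong")

Each path through the sequence diagram gets exactly one test, so the diagrams and the test suite document each other.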
List, one by one, the requirement numbers with the requirement text, then text and/or screenshots proving the software fulfills each one.
Have the requirement number on the left in bold, then have the requirement text tabbed in and italicized. Align the proof text/screenshots with the requirement text, leaving the left column clear for just the requirement numbers. E.g.:
REQ-1 italicized requirement text
text discussing how the software has
fulfilled the requirements, possibly
with a picture:
-----------------------
| |
| |
| |
| |
| |
-----------------------
REQ-2 italicized requirement text
etc...
You should group into chapters or sections based upon logical program areas, and start the section or chapter with a blurb about how the whole program area meets the requirements (be general).
I would keep it simple and add the following columns (a sample sheet is sketched below):
Delivery Satisfied requirement - with a drop-down list containing Yes, No, Open
Comment - any comment regarding the delivery, such as 'need to define message size', 'Does not fully satisfy the layout of the message, but accepted by the client', etc.
Date completed - when the change was delivered
Date satisfied - when the change was accepted
With the use of requirement IDs, I'm assuming they point back to the docs containing more detailed info, including layouts, screenshots, etc.
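Written out as a sketch, such a tracking sheet could be generated as CSV; the requirement ID and comment below are invented for illustration:

    import csv

    # One invented row; the columns match the list above.
    rows = [{
        "Requirement ID": "REQ-7",
        "Delivery Satisfied requirement": "Open",  # Yes / No / Open
        "Comment": "Need to define message size",
        "Date completed": "2024-01-10",
        "Date satisfied": "",                      # blank until accepted
    }]

    with open("requirement_tracking.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
        writer.writeheader()
        writer.writerows(rows)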
We would normally have a test plan in place in which each item can be ticked off if satisfactory. The plan would be based on the original requirements (functional or non-functional), for example:
Requirement: The user's account should be locked after three attempts to log in with an incorrect password.
Test: Attempt to log in more than three times with an incorrect password. Is the user account now locked?
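Automated, that check might look something like the following sketch; the Account class here is a stand-in for the real system under test, not our actual code:

    import unittest

    class Account:
        """Minimal stand-in for the system under test."""
        MAX_ATTEMPTS = 3

        def __init__(self, password):
            self._password = password
            self._failures = 0
            self.locked = False

        def login(self, password):
            if self.locked:
                return False
            if password != self._password:
                self._failures += 1
                if self._failures >= self.MAX_ATTEMPTS:
                    self.locked = True
                return False
            return True

    class TestAccountLockout(unittest.TestCase):
        def test_account_locks_after_three_bad_passwords(self):
            account = Account(password="correct")
            for _ in range(3):
                self.assertFalse(account.login("wrong"))
            # After three incorrect attempts the account must be locked,
            # even when the correct password is then supplied.
            self.assertTrue(account.locked)
            self.assertFalse(account.login("correct"))

    if __name__ == "__main__":
        unittest.main()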
We would do this for each requirement and re-run the plans for each Release Candidate. Some of the tests are automated, but we do have the luxury of a test team to perform manual testing as well!
Based on the results of running these test plans and User Acceptance Testing, we would sign off the RC as correct and fit for release.
Note that sometimes we will sign off for release even if some items in the test plan do not pass; it all depends on the nature of the items!
The formal way to validate requirements is with testing - usually acceptance testing.
The idea is: for every requirement, there should be one or more tests that validate the requirement. In a formal development situation, the customer would sign off on the acceptance tests at the same time they sign off on the requirements.
Then, when the product is complete, you present the results of the acceptance tests and the customer reviews them before accepting the final product.
If you have requirements that cannot be tested, then they probably are badly written.
e.g. don't say "loading files shall be fast", say "a file of size X shall be loaded in not more than Y milliseconds on hardware Z" or something like that.
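A requirement phrased that way translates directly into an acceptance test. A minimal pytest-style sketch, with placeholder values X = 1 MiB and Y = 200 ms and an invented load_file function standing in for the real loader:

    import time

    def load_file(path):
        # Stand-in for the real loader under test.
        with open(path, "rb") as f:
            return f.read()

    def test_one_mib_file_loads_within_200_ms(tmp_path):
        # X = 1 MiB and Y = 200 ms are placeholders for the agreed values.
        path = tmp_path / "sample.bin"
        path.write_bytes(b"\x00" * (1024 * 1024))

        start = time.perf_counter()
        load_file(path)
        elapsed_ms = (time.perf_counter() - start) * 1000

        assert elapsed_ms <= 200, f"loaded in {elapsed_ms:.1f} ms (limit 200 ms)"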