Regarding an automation approach
I am supposed to automate testing of an application developed in PowerBuilder. To test this application we are using Rational Robot as the functional testing tool. We expect at least 40-50% of the application to change under change control in each release, and releases are scheduled at least 3 times a year.
The product has a different setup for each client, and test scenarios have been derived accordingly. When changes do occur, they tend to affect both functional features and the interface. With that in mind, we need to proceed with automation. We have identified a few areas that are stable (i.e., where no major changes occur) as candidates for automation. Is that a feasible basis for proceeding with automation?
Could you please suggest how to go about this?
2 Answers
I've seen the answer to your last question involve a consultant spending two days interviewing the development team, then three days developing a report. And, in some cases, I'd say the report was preliminary, introductory, and rushed. However, let me throw out a few ideas that may help manage expectations on your team.
Testing automation is great for checking for regressions in functionality that allegedly isn't being touched. Things like framework changes or database changes can cause untouched code to crash. For risk-averse environments (e.g. banking, pharmaceutical prescriptions), the investment in automation is well worth the effort.
However, what I've seen often is an underestimation of the effort. To really test all the functionality in a unit (let's say a window, for example), you need to review the specifications, design tests that cover each functional point, plan your data (what data goes into the window, how you will make sure this data is as expected each time you start the test, what the data is when you've finished your tests, and how you will ensure the data, including non-visible data, is correct), then script and debug all your tests. I'm not sure what professional testers say (I'm a developer by trade, but have taken a course in an automated testing tool), but if you aren't planning to spend the same amount of effort developing the automated tests as you spend on application development for the same functionality, I think you'll quickly become frustrated. Add to that that changing functionality means changing test scripts, and automated testing can become a significant cost. (So, tell your manager that automated testing doesn't mean you push a button and things get tested. < grin > )
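The plan-data / exercise / verify cycle described above can be sketched in outline. Rational Robot scripts are actually written in SQABasic, so this is only a Python sketch of the pattern, with the window and database faked as plain functions; every name here is hypothetical:

```python
# Hypothetical sketch of the setup / exercise / verify cycle for one window.
# The "window" and "database" are faked so the pattern itself is runnable.

def seed_data():
    """Put the data the window expects into a known state before each run."""
    return {"customer": "ACME", "balance": 100}

def open_window_and_edit(db, deposit):
    """Stand-in for driving the real window; applies a deposit."""
    db["balance"] += deposit
    return db

def verify(db):
    """Check visible and non-visible data after the test run."""
    assert db["customer"] == "ACME"   # field the edit should not touch
    assert db["balance"] == 150       # visible result of the edit

def run_test():
    db = seed_data()                  # 1. plan and load the data
    db = open_window_and_edit(db, 50) # 2. exercise the window
    verify(db)                        # 3. verify everything, then reset
    return db

print(run_test()["balance"])  # 150
```

Even in this toy form, most of the code is data preparation and verification rather than driving the window, which is where the effort estimate usually goes wrong.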
That's not to say that you can't expend less effort on testing and get some results, but you get what you pay for. Having a script that opens and closes all the windows in the app provides some value, but it won't tell you that a new behaviour implemented in the framework is being overridden on window X, or that a database change has screwed up the sequencing of items in a drop-down DataWindow, or that a report's completion time has gone from five seconds to five hours. However, again, don't underestimate the effort. This is a new tool, with a new language and idiosyncrasies that need to be figured out and mastered.
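A minimal "open and close every window" smoke script looks something like the following, again as a hypothetical Python sketch rather than a real Robot script (the window names and `open_window` are stand-ins; in Robot you would record the actual navigation):

```python
# Sketch of a smoke script: open every window and record anything that crashes.
WINDOWS = ["w_login", "w_customer", "w_orders", "w_reports"]

def open_window(name):
    """Stand-in for launching a real window; raises to simulate a crash."""
    if name == "w_reports":
        raise RuntimeError("framework change broke this window")
    return True

def smoke_test(windows):
    failures = []
    for w in windows:
        try:
            open_window(w)              # open (and implicitly close) the window
        except Exception as exc:        # record crashes instead of stopping
            failures.append((w, str(exc)))
    return failures

print(smoke_test(WINDOWS))
```

This catches outright crashes cheaply, but, as noted above, it says nothing about wrong behaviour, wrong data ordering, or performance regressions.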
Automated testing can be a great investment. If the cost of failure is significant, like a badly prescribed drug causing a death, then the investment is worth it. However, for cases where there is a high turnover of functionality (like I think you're describing) and the consequences of a failure are less critical, you might want to consider comparing the cost/benefit of test automation against additional manual testing resources.
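That cost/benefit comparison can be roughed out on the back of an envelope; here is a Python sketch with entirely made-up effort figures:

```python
# Back-of-the-envelope break-even for automation vs. repeated manual testing.
def breakeven_releases(automation_cost, maint_per_release, manual_per_release):
    """Releases needed before automation beats repeated manual testing."""
    saved = manual_per_release - maint_per_release  # hours saved per release
    if saved <= 0:
        return None  # maintenance eats the savings; automation never pays off
    return automation_cost / saved

# Assumed figures: 400 h to build the suite, 80 h of script maintenance per
# release (high churn), replacing 120 h of manual testing per release.
print(breakeven_releases(400, 80, 120))  # 10.0
```

With three releases a year, as in the question, a break-even of ten releases means the automation would not pay for itself for over three years under these (invented) assumptions; plugging in your own numbers is the point of the exercise.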
Good luck,
Terry
Adding on to what @Terry said: It sounds like you are both new to automation in general and to Rational Robot in particular.
One thing to keep in mind is that test automation is software development and needs to be treated as such. That means you need personnel dedicated to the automation effort who are solid programmers and have expertise in the tool being used (Robot in this case).
If your team does not have the general programming/automation skills and the specific Robot skills, you will need to hire for them or get existing staff trained in those skill sets.