UI Automation Best Practices
We have developed some UI automation test cases and are currently executing them against an application that is still under development. From what we have observed, the majority of the scripts fail during execution due to application-related performance issues (for example, a window did not load properly, or a window took longer than expected to load).
To avoid this, we are planning to detect which step failed during execution and re-execute that step, checking whether the window has loaded properly, and if so, continue execution. However, I have a feeling that this approach may mask some of the application's performance-related issues, and I am not sure whether we should follow it.
I would like to know whether this can be counted as a best practice.
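For illustration only, here is a minimal sketch of the approach being considered, written in Python with hypothetical run_step() and window_loaded() helpers. Logging the extra wait is one way to keep the retry from hiding the performance problem:

```python
import logging
import time

def execute_step_with_recheck(run_step, window_loaded, recheck_delay_s=5.0):
    """Run one scripted UI step; on failure, re-check the window and retry once.

    run_step() and window_loaded() are hypothetical placeholders for the
    step under test and a window-presence check. The extra wait is logged,
    so the slow load still shows up in the run report instead of being
    silently masked by the retry.
    """
    try:
        run_step()
    except Exception as exc:
        time.sleep(recheck_delay_s)          # give the window more time
        if not window_loaded():
            raise                            # genuine failure, not just slowness
        logging.warning("step needed an extra %.1fs before the window appeared: %s",
                        recheck_delay_s, exc)
        run_step()                           # re-execute the step and carry on
```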
Answers (4)
If you implement some mechanism for retrying the operation that just failed, you'll keep falling into holes, because sometimes a retry is simply not possible, for example when the app is left in an unexpected UI state.
Usually, each application has an expected response time and a worst-case response time. Take that worst-case time and use it as the maximum timeout in your playback configuration.
Always try to predict what should happen when, and script accordingly. Making your scripts tolerate unexpected UI states (like long delays) just turns your testing effort into more of a "passive" automation effort.
As a rather crude measure, you could design a recovery scenario that retries the operation at least once (or for a specific period of time). This could help you get a "stable" playback without having to figure out which timeouts to use.
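A sketch of such a recovery scenario (the names and figures are assumptions, not taken from the original answer):

```python
import time

def recovery_scenario(operation, min_retries=1, retry_period_s=30.0, pause_s=2.0):
    """Retry a failed operation at least `min_retries` times, or for up to
    `retry_period_s` seconds, whichever lasts longer; then give up.

    `operation` is any hypothetical callable that raises on failure.
    """
    failures = 0
    deadline = time.monotonic() + retry_period_s
    while True:
        try:
            return operation()
        except Exception:
            failures += 1
            if failures > min_retries and time.monotonic() > deadline:
                raise                      # out of retries and out of time
            time.sleep(pause_s)
```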
But generally: if a window takes too long to show up, it is a defect. If your timeout is too low, it is a bug, in your test robot's configuration. If it is not defined what "takes too long" means, get the performance requirements.
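A minimal sketch of keeping those two failure modes apart; the figures and the wait_for_window() helper are assumptions for illustration:

```python
import time

PERF_REQUIREMENT_S = 10    # assumed figure from the performance requirements
ROBOT_TIMEOUT_S = 15       # playback timeout configured in the test robot

# A timeout below the requirement is a bug in the robot config, not in the app.
assert ROBOT_TIMEOUT_S >= PERF_REQUIREMENT_S, "robot timeout is below the performance requirement"

def classify_window_load(wait_for_window):
    """wait_for_window() is a hypothetical call that blocks until the window
    shows up, or raises TimeoutError after ROBOT_TIMEOUT_S seconds."""
    start = time.monotonic()
    try:
        wait_for_window()
    except TimeoutError:
        return "defect: window did not show up within the robot timeout"
    elapsed = time.monotonic() - start
    if elapsed > PERF_REQUIREMENT_S:
        return f"defect: window took {elapsed:.1f}s, requirement is {PERF_REQUIREMENT_S}s"
    return "ok"
```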
Thus: Fix accordingly.
That's my 2 (OK -- 3) cents :)
Not the "best", but a working practice.
Scripts must be portable from environment to environment (and we all know that test environments are much slower than UAT/pre-prod or production), with minimal or zero maintenance effort.
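One way to keep that portability, sketched here with assumed environment names and figures, is to look up the synchronization timeout per environment instead of hard-coding it in each script:

```python
import os

# Assumed per-environment synchronization timeouts, in seconds.
SYNC_TIMEOUT_S = {
    "test": 30,      # slowest environment
    "uat": 15,
    "preprod": 12,
    "prod": 10,
}

def sync_timeout() -> int:
    """Pick the timeout for the current run from an environment variable."""
    env = os.environ.get("TEST_ENV", "test")   # hypothetical variable name
    return SYNC_TIMEOUT_S[env]
```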
Therefore:
With regard to this little piece of GUI step automation, here's a general heuristic and acronym to remember: SEED NATALI.
The SEED NATALI acronym stands for the following.
Thank you,
Albert Gareev
http://automation-beyond.com/
If the objective is to perform functional testing, then it would be helpful to define a benchmark for the response times of the application in different environments. For example, for a web application the maximum load time might be defined as 20 seconds, while for another application it is 10 seconds. Once you have a clear benchmark, you are in a position to catch the issues on the spot.
Please note that when defining the benchmark for an application, there are many criteria (like network bandwidth and server type) which need to be taken into consideration.
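As an illustration only (the applications, environments, and figures below are assumptions), such benchmarks could be recorded explicitly and checked against measured load times:

```python
# Assumed load-time benchmarks, in seconds, keyed by (application, environment).
LOAD_TIME_BENCHMARK_S = {
    ("web_app", "test"): 20.0,
    ("web_app", "prod"): 10.0,
    ("desktop_app", "test"): 10.0,
}

def within_benchmark(app: str, env: str, measured_s: float) -> bool:
    """Return True if the measured load time meets the agreed benchmark."""
    return measured_s <= LOAD_TIME_BENCHMARK_S[(app, env)]

# Example: a 23.4 s load of the web app in the test environment is flagged.
assert not within_benchmark("web_app", "test", 23.4)
```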
If you're adding the retries now, during a phase of application development where performance isn't stable yet, you should make sure to remove them once the application stabilizes.
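One way to make that removal easy, sketched here with an assumed flag name, is to keep the retry behaviour behind a single switch:

```python
# Assumed switch: set to False (or delete the retry path entirely) once the
# application's performance has stabilized.
RETRY_UNSTABLE_WINDOWS = True

def maybe_retry(operation):
    """Run a hypothetical operation, retrying once only while the flag is on."""
    try:
        return operation()
    except Exception:
        if not RETRY_UNSTABLE_WINDOWS:
            raise
        return operation()   # single retry during the unstable phase
```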
QTP is sufficient for testing the performance of a desktop or client-server application for a single user; if you want to test performance for many users of a client-server application (e.g. a web application), you should perhaps consider using a load-testing tool like LoadRunner.