Testing two web services for equality
I have a VB.NET web service project that I believe I have successfully converted to a C# web service. (They are all .asmx files - no WCF yet.)
I want to compare the two web services for equality, to make sure that no inadvertent bugs have crept in during the conversion.
What is the best way to compare two web services for equality? I am thinking of writing a client that will send the same requests (for example, via AJAX) to both sets of web methods and compare the results, as in the sketch below. But I am hopeful that there might be existing solutions already used for this purpose. Please let me know what the best approach is.
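For what it's worth, a minimal version of the client I have in mind might look like the following C# sketch. It POSTs one hand-written SOAP envelope to both .asmx endpoints and does a raw string comparison of the responses. The endpoint URLs, the GetStatus method, and the http://tempuri.org/ namespace are all placeholders for illustration, not part of any real service.

    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class ServiceDiff
    {
        // Placeholder endpoints for the old VB.NET and the converted C# service.
        const string OldUrl = "http://localhost/OldService/Service.asmx";
        const string NewUrl = "http://localhost/NewService/Service.asmx";

        static async Task Main()
        {
            // One hand-written SOAP 1.1 envelope for a hypothetical GetStatus method;
            // in practice you would replay a whole set of captured request payloads.
            string soapRequest =
                "<?xml version=\"1.0\" encoding=\"utf-8\"?>" +
                "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
                "<soap:Body><GetStatus xmlns=\"http://tempuri.org/\" /></soap:Body>" +
                "</soap:Envelope>";

            using var client = new HttpClient();
            string oldResponse = await Call(client, OldUrl, soapRequest);
            string newResponse = await Call(client, NewUrl, soapRequest);

            Console.WriteLine(oldResponse == newResponse
                ? "MATCH"
                : $"MISMATCH:\n--- old ---\n{oldResponse}\n--- new ---\n{newResponse}");
        }

        static async Task<string> Call(HttpClient client, string url, string body)
        {
            var request = new HttpRequestMessage(HttpMethod.Post, url)
            {
                Content = new StringContent(body, Encoding.UTF8, "text/xml")
            };
            // .asmx services dispatch on the SOAPAction header as well as the body.
            request.Headers.Add("SOAPAction", "\"http://tempuri.org/GetStatus\"");
            HttpResponseMessage response = await client.SendAsync(request);
            return await response.Content.ReadAsStringAsync();
        }
    }

A raw string diff is deliberately naive: volatile fields such as timestamps or GUIDs will cause false mismatches, so responses usually need some normalization before comparison.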
Comments (1)
I assume that your service is not a trivial one; with that in mind, I don't think you can get a good answer from anyone without going into more detail about the nature of the existing service.
My best bet, though, would be a combinatorial testing approach. I would start from the existing service schema and work my way up to a model that separates negative from positive test cases as much as possible, for better coverage. When I faced this, I used a tool set either to annotate the service schema (from the WSDL) and automatically generate the test model, or to create a combinatorial test model from scratch. This is almost always needed with real-life services, to make sure that the combinatorial engine generates good test cases. Regardless of the approach, all test cases are captured in an Excel file. I then use the tool to generate the request, execute the service call, and capture both the request and the response in another Excel file, for each test case. Running against the old version gives me the baseline's "expected" result set; comparing that with the results from the migrated version, possibly using selective criteria (eliminating GUIDs, timestamps, transaction IDs, etc.), gives the verdict. A sketch of that kind of selective comparison follows.
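To make the selective criteria concrete, here is a minimal C# sketch of the kind of normalization I mean, assuming the captured responses are XML strings. The regex patterns and the {GUID}/{TIMESTAMP} placeholders are illustrative only; the real set of volatile fields depends on your schema.

    using System.Text.RegularExpressions;

    static class ResponseNormalizer
    {
        // Illustrative patterns for two common volatile fields.
        static readonly Regex GuidPattern = new Regex(
            @"\b[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}\b");
        static readonly Regex TimestampPattern = new Regex(
            @"\b\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?(Z|[+-]\d{2}:\d{2})?\b");

        // Replace volatile values with stable placeholders before comparing.
        public static string Normalize(string xml) =>
            TimestampPattern.Replace(GuidPattern.Replace(xml, "{GUID}"), "{TIMESTAMP}");

        public static bool Equivalent(string oldXml, string newXml) =>
            Normalize(oldXml) == Normalize(newXml);
    }

With that in place, the verdict per test case is just ResponseNormalizer.Equivalent(baselineResponse, migratedResponse), and any remaining mismatches can be inspected by hand.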