In integration tests, does it make sense to replace an asynchronous process with a synchronous one for the sake of testing?
In integration tests, asynchronous processes (methods, external services) make for very tough test code. If, instead, I factored out the async part into a dependency and replaced it with a synchronous one for the sake of testing, would that be a "good thing"?
By replacing the async process with a synchronous one, wouldn't I be going against the spirit of integration testing? I guess I'm assuming that integration testing means testing something close to the real thing.
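To make the question concrete, this is roughly the kind of seam I have in mind; the `RemoteGateway` name and its shape are made up for illustration. The production implementation is asynchronous, and the substitute I would use only in tests completes inline:

```java
import java.util.concurrent.CompletableFuture;

// Illustrative seam: the async part is factored out behind an interface.
interface RemoteGateway {
    CompletableFuture<String> submit(String request);
}

// Production implementation: hands the work off asynchronously.
class AsyncRemoteGateway implements RemoteGateway {
    @Override
    public CompletableFuture<String> submit(String request) {
        return CompletableFuture.supplyAsync(() -> "ACCEPTED:" + request);
    }
}

// Test substitute: returns an already-completed future, so the test runs synchronously.
class SynchronousRemoteGateway implements RemoteGateway {
    @Override
    public CompletableFuture<String> submit(String request) {
        return CompletableFuture.completedFuture("ACCEPTED:" + request);
    }
}
```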
3 Answers
Nice question.
In a unit test this approach would make sense, but for integration testing you should be testing the real system as it will behave in real life. This includes any asynchronous operations and any side effects they may have - this is the most likely place for bugs to exist, and it is probably where you should concentrate your testing rather than factor it out.
I often use a "waitFor" approach, where I poll to see whether an answer has been received and time out after a while if not. A good implementation of this pattern is the JUnitConditionRunner; it is Java-specific, but you can get the gist. For example:
We have a number of automated unit tests that send off asynchronous requests and need to test the output/results. The way we handle it is to perform all of the testing as if it were part of the actual application; in other words, asynchronous requests remain asynchronous. But the test harness acts synchronously: it sends off the asynchronous request, sleeps for [up to] a period of time (the maximum in which we would expect a result to be produced), and if no result is available by then, the test has failed. There are callbacks, so in almost all cases the test is awakened and continues running before the timeout expires, but the timeouts mean that a failure (or a change in expected performance) will not stall the entire test suite.
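A minimal sketch of that harness style, assuming JUnit 4 and an invented `AsyncService` standing in for the real asynchronous code; the callback counts down a latch so the test wakes up as soon as the result arrives, and the timeout turns a missing result into a failure rather than a hang:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Consumer;

import org.junit.Test;
import static org.junit.Assert.*;

public class AsyncRequestTest {

    // Stand-in for the real asynchronous service; the request stays asynchronous.
    static class AsyncService {
        void send(String request, Consumer<String> callback) {
            CompletableFuture.runAsync(() -> callback.accept("processed:" + request));
        }
    }

    @Test
    public void resultArrivesBeforeTheTimeout() throws Exception {
        long timeoutSeconds = 10;                        // longest we expect a result to take
        CountDownLatch done = new CountDownLatch(1);
        AtomicReference<String> result = new AtomicReference<>();

        // The callback wakes the test as soon as the answer comes back.
        new AsyncService().send("order-42", response -> {
            result.set(response);
            done.countDown();
        });

        // The timeout fails this test instead of stalling the whole suite.
        assertTrue("no result within " + timeoutSeconds + "s",
                   done.await(timeoutSeconds, TimeUnit.SECONDS));
        assertEquals("processed:order-42", result.get());
    }
}
```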
This has a few advantages:
The last point may need a small amount of explanation. Performance testing is important, and it is often left out of test plans. The way these unit tests are run, they end up taking a lot longer (in running time) than if we had rearranged the code to do everything synchronously. However, this way performance is tested implicitly, and the tests are more faithful to the way the code is actually used in the application. Plus, all of our message queueing infrastructure gets tested "for free" along the way.
Edit: Added note about callbacks
What are you testing? The behaviour of your class in response to certain stimuli? In which case don't suitable mocks do the job?
Your test can do something like
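Sketched here with Mockito and invented names; the `Orchestrator` and `BackendService` below are placeholders for whatever your class and its asynchronous collaborator actually are:

```java
import static org.mockito.Mockito.*;
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class OrchestratorTest {

    // The collaborator that is asynchronous in production; mocked here.
    interface BackendService {
        String process(String request);
    }

    // Minimal orchestrator: forwards the request and returns the reply.
    static class Orchestrator {
        private final BackendService backend;

        Orchestrator(BackendService backend) {
            this.backend = backend;
        }

        String handle(String request) {
            return backend.process(request);
        }
    }

    @Test
    public void forwardsTheCorrectRequestToTheBackend() {
        BackendService backend = mock(BackendService.class);
        when(backend.process("order-42")).thenReturn("ACCEPTED");

        String reply = new Orchestrator(backend).handle("order-42");

        // The mock's only job is to let us verify the correct request was received.
        verify(backend).process("order-42");
        assertEquals("ACCEPTED", reply);
    }
}
```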
Note that there is no true async processing here, and the mock itself need have no logic other than to allow verification that the correct request was received. For a unit test of the Orchestrator this is sufficient.
I used this variation on the idea when testing BPEL processes in WebSphere Process Server.