Stress testing a system based on a distributed SOA architecture
We currently have a system with 20 SOA services, a single master MySQL database, and 2 slave nodes. There are currently 10 GB of data in the database. We have a requirement under which the data in one table is going to grow significantly, and we want to stress test the system before proceeding with the implementation. What kind of stress testing makes sense for this kind of distributed environment?
Also, when performing the stress testing I can look at latency and metrics such as the latency within which 90% of the service requests are served. Are there any other good metrics for the services? What metrics should I look at for the MySQL database?
Thank you
Answers (2)
Here are a few ideas:
JMeter should give you all the latency information you need. Another useful "real world" data point from JMeter that I like to use is the 90% query time: the response time value that is greater than or equal to 90% of the test responses (i.e. the 90th percentile).
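JMeter's Aggregate Report listener shows this as the "90% Line", but if you script your runs you can also compute it from the raw results yourself. A minimal sketch in Python, assuming the results were saved as a CSV JTL file with the header row enabled (so it has an `elapsed` column, in milliseconds); the file name is just a placeholder:

```python
# Minimal sketch: compute the 90th-percentile response time from a JMeter
# CSV results file (JTL). Assumes the default CSV output with a header row,
# which includes an "elapsed" column holding the response time in ms.
# "results.jtl" is a placeholder path.
import csv

def percentile(values, pct):
    """Return the smallest sample that is >= pct% of all samples (nearest-rank)."""
    ordered = sorted(values)
    rank = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[rank]

with open("results.jtl", newline="") as f:
    elapsed = [int(row["elapsed"]) for row in csv.DictReader(f)]

print(f"samples: {len(elapsed)}")
print(f"90% query time: {percentile(elapsed, 90)} ms")
```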
The idea in this scenario is still to model the page requests and POSTs as they are used in production. The difference is to first run a load test against just a copy of the current 10 GB of production data.
Then simulate the extra data and run the same load test again. You will be able to compare the responses of the pages that use the services, or check the service calls directly.
You can then see what effect the extra data will have on your service calls.
The most important metrics are the response times for the calls that you expect (or have measured) to be made most frequently.
Other statistics on the database and servers themselves can be analysed if you discover a performance issue.
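On the MySQL side, the global status counters are a reasonable starting point: slow queries, connection counts, and the InnoDB buffer pool hit ratio, plus replication lag (Seconds_Behind_Master from SHOW SLAVE STATUS) on the slave nodes. A minimal sketch of sampling a few of them during the test with pymysql; connection details are placeholders:

```python
# Minimal sketch: pull a few MySQL-side indicators during the load test.
# Host/user/password/database are placeholders.
import pymysql

conn = pymysql.connect(host="mysql-master", user="monitor",
                       password="secret", database="appdb")
with conn.cursor() as cur:
    cur.execute("SHOW GLOBAL STATUS")
    status = {name: value for name, value in cur.fetchall()}
conn.close()

reads = int(status["Innodb_buffer_pool_reads"])              # reads that hit disk
requests = int(status["Innodb_buffer_pool_read_requests"])   # logical read requests
hit_ratio = 100.0 * (1 - reads / requests) if requests else 0.0

print("Threads_connected:", status["Threads_connected"])
print("Slow_queries:", status["Slow_queries"])
print(f"InnoDB buffer pool hit ratio: {hit_ratio:.2f}%")
```

Sampling these before and during each load-test run makes it easier to tell whether a latency regression comes from the services themselves or from the database once the table has grown.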