Why is there a difference in output between load testing with JMeter and load testing with HP LoadRunner?
Here is the scenario
We are load testing a web application. The application is deployed on two VM servers, with a hardware load balancer distributing the load.
Two tools are used here:
1. HP Load Runner (an expensive tool).
2. JMeter - free
JMeter was used by the development team to test with a huge number of users. It also does not have any licensing limit like LoadRunner.
How are the tests run?
A URL is invoked with some parameters; the web application reads the parameters, processes the results, and generates a PDF file.
When running the test, we found that for a load of 1000 users spread over a period of 60 seconds, our application took 4 minutes to generate 1000 files.
Now, when we pass the same URL through JMeter with 1000 users and a ramp-up time of 60 seconds,
the application takes 1 minute and 15 seconds to generate 1000 files.
I am baffled as to why there is such a huge difference in performance.
LoadRunner has the rstat daemon installed on both servers.
Any clues ?
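For a quick sanity check, the figures quoted above can be converted to throughput — a minimal sketch, using only the file counts and durations reported in the question:

```python
# Rough throughput comparison from the figures quoted above:
# 1000 files in 4 minutes (LoadRunner run) vs 1 min 15 s (JMeter run).

def throughput(files, seconds):
    """Files generated per second."""
    return files / seconds

lr = throughput(1000, 4 * 60)  # LoadRunner run: ~4.17 files/s
jm = throughput(1000, 75)      # JMeter run:     ~13.33 files/s

print(f"LoadRunner: {lr:.2f} files/s")
print(f"JMeter:     {jm:.2f} files/s")
print(f"Ratio:      {jm / lr:.1f}x")  # 240/75 = 3.2x difference
```

So the two tools are reporting roughly a 3.2x difference in effective throughput for what is nominally the same workload, which is large enough that script structure (think times, embedded resources, response validity) is a more likely cause than tool overhead alone.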
3 Answers
You really have four possibilities here:
As to the comment above about JMeter being faster: I have benchmarked both, and for very complex code the C-based solution in LoadRunner is faster in execution from iteration to iteration than the Java-based solution in JMeter. (Method: a complex algorithm for creating data files on the fly for upload for batch mortgage processing. P3, 800 MHz, 2 GB of RAM. LoadRunner: 1.8 million iterations per hour, ungoverned, for a single user. JMeter: 1.2 million.) Once you add in pacing, it is the response time of the server which is determinate for both.
It should be noted that LoadRunner tracks its internal API time to directly address accusations of the tool influencing the test results. If you open the results set database (.mdb, or the Microsoft SQL Server instance, as appropriate) and take a look at the [event meter] table, you will find a reference to "Wasted Time." The definition of wasted time can be found in the LoadRunner documentation.
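The pacing point can be illustrated with a sketch (function names here are illustrative, not LoadRunner or JMeter APIs): once each iteration starts on a fixed pacing interval, the measured rate is capped by pacing plus server response time, and per-iteration tool overhead stops mattering.

```python
import time

def run_iterations(transaction, pacing_s, count):
    """Run `count` iterations, starting one every `pacing_s` seconds.

    If the transaction (the server call) takes longer than the pacing
    interval, server time dominates; otherwise the tool idles for the
    remainder of the interval, so differences in per-iteration tool
    overhead no longer affect the overall iteration rate.
    """
    for _ in range(count):
        start = time.perf_counter()
        transaction()
        elapsed = time.perf_counter() - start
        remaining = pacing_s - elapsed
        if remaining > 0:
            time.sleep(remaining)

# Example: a 5 ms "server call" paced at 20 ms runs at ~50 iterations/s,
# regardless of how fast the tool itself executes the script body.
run_iterations(lambda: time.sleep(0.005), pacing_s=0.020, count=10)
```

This is why an ungoverned single-user benchmark (as above) shows the raw engine-speed difference between the tools, while a realistically paced test mostly measures the server.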
Most likely the culprit is in HOW the scripts are structured.
Things to consider:
- JMeter does not automatically put in waits.
- Is JMeter ONLY requesting/downloading the HTML page, while LoadRunner gets all embedded files?
- Are all 1000 JMeter responses valid? If you have 1000 threads from a single desktop, I would suspect you killed JMeter and not all your responses were valid.
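The last point — are all 1000 responses valid? — can be checked by asserting on every response body instead of trusting the completion count. A minimal sketch (the simulated responses are made up for illustration): since the application generates PDFs, a response is only a success if it actually starts with the PDF magic header, not if it is an HTML error page that came back with HTTP 200.

```python
def is_valid_pdf_response(status, body):
    """Count a response as valid only if it returned HTTP 200 and the
    body actually looks like a PDF (magic header '%PDF-'), rather than
    an HTML error page that happened to come back with status 200."""
    return status == 200 and body[:5] == b"%PDF-"

# Simulated results: two real PDFs and one error page that a naive
# "did we get 1000 responses?" check would wrongly count as a success.
responses = [
    (200, b"%PDF-1.4 ..."),
    (200, b"%PDF-1.7 ..."),
    (200, b"<html>Internal error</html>"),
]

valid = sum(is_valid_pdf_response(s, b) for s, b in responses)
print(f"{valid}/{len(responses)} responses are real PDFs")  # 2/3
```

In JMeter, the equivalent is attaching a Response Assertion to the sampler so that failed or truncated downloads are flagged instead of silently shortening the test.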
Don't forget that the testing application measures itself, since the arrival of each response is timed on the testing machine. So from this perspective, the answer could simply be that JMeter is faster.
The second thing to mention is the wait times mentioned by BlackGaff.
Always check results with the View Results Tree listener in JMeter.
And always put the testing application on separate hardware to see real results, since the testing application itself puts load on whatever machine it runs on.
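The "tool measures itself" point can be made concrete: any time the load generator spends per response (parsing embedded resources, logging, assertions) is folded into the reported response time, because the timer only stops when the tool finishes handling the response. A minimal sketch with illustrative names and made-up delays:

```python
import time

def timed_request(server_call, tool_overhead):
    """The 'response time' measured on the load generator includes both
    the server's work and whatever the tool does per response before
    stopping its timer (parsing, result logging, assertions)."""
    start = time.perf_counter()
    server_call()    # actual server-side work
    tool_overhead()  # tool-side work, still inside the timer
    return time.perf_counter() - start

# A 50 ms server call measured by a tool with 30 ms of per-response
# overhead is reported as ~80 ms, overstating server latency by 60%.
measured = timed_request(lambda: time.sleep(0.05), lambda: time.sleep(0.03))
print(f"measured: {measured * 1000:.0f} ms")
```

This is one reason two tools driving the same server can report very different numbers: they are not measuring exactly the same span of work.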