Why do load-test results differ between JMeter and HP LoadRunner?

Posted 2024-09-29 14:26:18


Here is the scenario

We are load testing a web application. The application is deployed on two VM servers, with a hardware load balancer distributing the load.

Two tools are used here:
1. HP Load Runner (an expensive tool).
2. JMeter - free

JMeter was used by the development team to test with a huge number of users. It also does not have the licensing limits that LoadRunner has.

How are the tests run?
A URL is invoked with some parameters; the web application reads the parameters, processes the results, and generates a PDF file.

When running the test with LoadRunner, we found that for a load of 1000 users spread over a period of 60 seconds, our application took 4 minutes to generate 1000 files.
Now, when we pass the same URL through JMeter, with 1000 users and a ramp-up time of 60 seconds, the application takes 1 minute and 15 seconds to generate 1000 files.
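For a quick comparison, the two runs imply very different end-to-end throughputs. A short sketch of the arithmetic, using only the figures quoted above:

```python
# Throughput implied by the two runs described above:
# LoadRunner: 1000 files in 4 minutes (240 s); JMeter: 1000 files in 75 s.
loadrunner_rate = 1000 / 240   # files per second in the LoadRunner run
jmeter_rate = 1000 / 75        # files per second in the JMeter run
ratio = jmeter_rate / loadrunner_rate

print(round(loadrunner_rate, 2))  # ~4.17 files/s
print(round(jmeter_rate, 2))      # ~13.33 files/s
print(round(ratio, 2))            # the JMeter run is ~3.2x faster end to end
```

That roughly 3.2x gap is the discrepancy that needs explaining.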

I am baffled as to why there is such a huge difference in performance.

LoadRunner has the rstat daemon installed on both servers.

Any clues ?


Comments (3)

江城子 2024-10-06 14:26:18


You really have four possibilities here:

  1. You are measuring two different things. Check your timing record structure.
  2. Your request and response information differs between the two tools. Check with Fiddler or Wireshark.
  3. Your test environment's initial conditions are different, yielding different results. This is Testing 101, but it is quite often overlooked when tracking down issues like this.
  4. You have an overloaded load generator in your LoadRunner environment, which slows down all virtual users. For example, you may be logging everything, so that your file system becomes a bottleneck for the test. Deliberately underload your generators, reduce your logging levels, and watch how you use memory for correlations, so that you don't create a physical-memory oversubscription condition that results in high swap activity.
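One way to act on possibility 2 is to point both tools at a trivial logging endpoint and diff what each actually sends on the wire. A minimal sketch using only the Python standard library; the port and log-file name are arbitrary choices for illustration, not anything from the original setup:

```python
# Stand up a logging HTTP endpoint; point the JMeter and LoadRunner scripts
# at it in turn, then diff the captured request lines and headers.
from http.server import BaseHTTPRequestHandler, HTTPServer

class LoggingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Record the exact request line and headers this tool produced.
        with open("captured_requests.log", "a") as f:
            f.write(self.requestline + "\n")
            f.write(str(self.headers) + "---\n")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), LoggingHandler).serve_forever()
```

Differences in headers (keep-alive, compression, caching) or in the number of requests per user can account for large timing gaps.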

As to the comment above about JMeter being faster: I have benchmarked both, and for very complex code the C-based solution in LoadRunner executes faster from iteration to iteration than the Java-based solution in JMeter. (Method: a complex algorithm for creating data files on the fly for upload for batch mortgage processing. P3, 800 MHz, 2 GB of RAM. LoadRunner: 1.8 million iterations per hour, ungoverned, for a single user. JMeter: 1.2 million.) Once you add in pacing, it is the response time of the server that is the determining factor for both.
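As a sanity check on those figures, the quoted rates translate into per-iteration client-side cost as follows (a back-of-envelope sketch using only the numbers in this comment):

```python
# Convert the quoted benchmark rates into per-iteration client-side cost.
MS_PER_HOUR = 3_600_000
lr_iters_per_hour = 1_800_000   # LoadRunner C vuser, quoted above
jm_iters_per_hour = 1_200_000   # JMeter Java sampler, quoted above

lr_ms_per_iter = MS_PER_HOUR / lr_iters_per_hour  # 2.0 ms per iteration
jm_ms_per_iter = MS_PER_HOUR / jm_iters_per_hour  # 3.0 ms per iteration
print(lr_ms_per_iter, jm_ms_per_iter)
```

At roughly 2 ms versus 3 ms of tool overhead per iteration, both are dwarfed by any realistic server response time once pacing is added, which is the point being made above.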

It should be noted that LoadRunner tracks its internal API time, precisely to address accusations of the tool influencing the test results. If you open the results-set database (the .mdb file, or the Microsoft SQL Server instance as appropriate) and look at the [event meter] table, you will find a reference to "Wasted Time." The definition of wasted time can be found in the LoadRunner documentation.

要走干脆点 2024-10-06 14:26:18


Most likely the culprit is in HOW the scripts are structured.

Things to consider:

  • Think/wait time: when recording, JMeter does not automatically insert waits.
  • Items being requested: is JMeter only requesting/downloading the HTML pages, while LoadRunner fetches all embedded files as well?
  • Invalid responses: are all 1000 JMeter responses valid? If you ran 1000 threads from a single desktop, I would suspect you killed JMeter and not all your responses were valid.
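The second bullet can be checked quantitatively: parse the returned HTML and count the embedded resources a browser-faithful tool would also fetch. A minimal stdlib sketch; the sample HTML here is invented for illustration, not taken from the application under test:

```python
# Count the sub-requests (images, scripts, stylesheets) implied by a page,
# i.e. the extra fetches a tool downloading embedded files would perform.
from html.parser import HTMLParser

class ResourceCounter(HTMLParser):
    def __init__(self):
        super().__init__()
        self.resources = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and "src" in attrs:
            self.resources.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet" and "href" in attrs:
            self.resources.append(attrs["href"])

sample = ('<html><head><link rel="stylesheet" href="a.css">'
          '<script src="b.js"></script></head>'
          '<body><img src="c.png"></body></html>')
counter = ResourceCounter()
counter.feed(sample)
print(len(counter.resources))  # 3 extra requests beyond the HTML itself
```

If each page carries many embedded resources, a tool that fetches them all will naturally report a much longer run than one that requests only the HTML.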

澜川若宁 2024-10-06 14:26:18


Don't forget that the testing application measures itself, since the arrival of the response is timed on the test machine. So from this perspective, the answer could simply be that JMeter is faster.

The second thing to mention is the wait times mentioned by BlackGaff.

Always check the results with the View Results Tree listener in JMeter.

And always run the testing application on separate hardware to see real results, since the testing application itself puts load on the machine it runs on.
