Alternatives to ApacheBench for profiling code speed
I've done some experiments using Apache Bench to profile my code's response times, and it doesn't quite generate the right kind of data for me. I hope the good people here have ideas.
Specifically, I need a tool that
- Does HTTP requests over the network (it doesn't need to do anything very fancy)
- Records response times as accurately as possible (at least to a few milliseconds)
- Writes the response time data to a file without further processing (or provides it to my code, if a library)
I know about ab -e, which prints data to a file. The problem is that this prints only the quantile data, which is useful but not what I need. The ab -g option would work, except that it doesn't print sub-second data, meaning I don't have the resolution I need.
I wrote a few lines of Python to do it, but httplib is horribly inefficient, so the results were useless. In general, I need better precision than pure Python is likely to provide. If anyone has suggestions for a library usable from Python, I'm all ears.
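For illustration, here is a minimal sketch of the kind of measurement loop I have in mind, using pycurl (just an assumption, since I haven't settled on a library) and reading libcurl's own transfer timers instead of timing in Python:

    # Sketch only: time HTTP GETs with pycurl, recording libcurl's internal
    # timers (float seconds, microsecond resolution) so Python overhead
    # stays out of the measurements. URL list and output file are placeholders.
    import pycurl
    from io import BytesIO

    urls = ["http://example.com/"] * 100  # hypothetical request list

    with open("timings.tsv", "w") as out:
        out.write("connect_s\tstarttransfer_s\ttotal_s\n")
        for url in urls:
            buf = BytesIO()
            c = pycurl.Curl()
            c.setopt(pycurl.URL, url)
            c.setopt(pycurl.WRITEDATA, buf)  # discard the body into a buffer
            c.perform()
            out.write("%f\t%f\t%f\n" % (
                c.getinfo(pycurl.CONNECT_TIME),        # TCP connect finished
                c.getinfo(pycurl.STARTTRANSFER_TIME),  # first byte received
                c.getinfo(pycurl.TOTAL_TIME),          # full transfer done
            ))
            c.close()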
I need something that is high performance, repeatable, and reliable.
I know that half my responses are going to be along the lines of "internet latency makes that kind of detailed measurement meaningless." In my particular use case, this is not true. I need high-resolution timing details. Something that actually uses my HPET hardware would be awesome.
Throwing a bounty on here because of the low number of answers and views.
6 Answers
I have done this in two ways.
With "loadrunner" which is a wonderful but pretty expensive product (from I think HP these days).
With a combination of Perl/PHP and the cURL package. I found the cURL API slightly easier to use from PHP. It's pretty easy to roll your own GET and PUT requests. I would also recommend manually running through some sample requests with Firefox and the LiveHttpHeaders add-on to capture the exact format of the HTTP requests you need.
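The same roll-your-own approach also works from Python via pycurl. As a rough sketch of a hand-built PUT request (the URL, payload, and header are placeholders, not anything from this answer):

    # Sketch of rolling your own PUT with pycurl; everything here is a
    # placeholder example, not a real endpoint.
    import pycurl
    from io import BytesIO

    body = BytesIO(b'{"example": true}')  # request body to upload
    c = pycurl.Curl()
    c.setopt(pycurl.URL, "http://example.com/resource")
    c.setopt(pycurl.UPLOAD, 1)       # upload mode: an HTTP PUT for http URLs
    c.setopt(pycurl.READDATA, body)  # libcurl reads the body from here
    c.setopt(pycurl.INFILESIZE, len(body.getvalue()))
    c.setopt(pycurl.HTTPHEADER, ["Content-Type: application/json"])
    c.perform()
    print("status:", c.getinfo(pycurl.RESPONSE_CODE))
    c.close()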
JMeter is pretty handy. It has a GUI from which you can set up your requests and thread pools, and it can also be run from the command line.
If you can code in Java, you can look at the combination of JUnitPerf + HttpUnit.
The downside is that you will have to do more things yourself. But in exchange you get unlimited flexibility and arguably more precision than with GUI tools, not to mention HTML parsing, JavaScript execution, and so on.
There's also another project, called The Grinder, which seems to be intended for a similar task, but I don't have any experience with it.
A good reference for open-source performance testing tools: http://www.opensourcetesting.org/performance.php
You will find descriptions and a "most popular" list there.
httperf is very powerful.
I've used a script to drive 10 boxes on the same switch to generate load by "replaying" requests to 1 server. I had my web app log response times (server only) at the granularity I needed, but I didn't care about the response time to the client. I'm not sure you care to include the trip to and from the client in your calculations, but if you did, it shouldn't be too difficult to code up. I then processed my log with a script which extracted the times per URL and produced scatter-plot graphs, and trend graphs based on load.
This satisfied my requirements, which were:
I did the controller as a shell script that, for each server, started a background process to loop over all the URLs in a file, calling curl on each one. I wrote the log processor in Perl since I was doing more Perl at that time.
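A rough Python equivalent of that driver (the file names, and the use of curl's time_total write-out variable, are my assumptions, not details from the answer) could look like:

    # Rough sketch of the replay driver: read URLs from a file, fetch each
    # with curl, and log the time curl itself measured, for later plotting.
    # urls.txt and timings.log are assumed names.
    import subprocess

    with open("urls.txt") as f:
        urls = [line.strip() for line in f if line.strip()]

    with open("timings.log", "w") as log:
        for url in urls:
            result = subprocess.run(
                ["curl", "-s", "-o", "/dev/null", "-w", "%{time_total}", url],
                capture_output=True, text=True,
                check=False,  # keep replaying even if one request fails
            )
            # %{time_total} is curl's own end-to-end transfer time in seconds
            log.write("%s\t%s\n" % (url, result.stdout.strip()))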