How can I measure the time an HTTP request spends in the accept queue?

Posted 2024-10-11 09:39:28


I am using Apache2 on Ubuntu 9.10, and I am trying to tune my configuration for a web application to reduce latency of responses to HTTP requests.

During a moderately heavy load on my small server, there are 24 apache2 processes handling requests. Additional requests get queued.

Using netstat, I see 24 connections in ESTABLISHED and 125 connections in TIME_WAIT.
I am trying to figure out whether that is considered a reasonable backlog.

Most requests get serviced in a fraction of a second, so I am assuming requests move through the accept-queue fairly quickly, probably within 1 or 2 seconds, but I would like to be more certain.

Can anyone recommend an easy way to measure the time an HTTP request sits in the accept-queue?

The suggestions I have come across so far seem to start the clock after the apache2 worker accepts the connection. I'm trying to quantify the accept-queue delay before that.
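To make the question concrete, the delay in question can be demonstrated outside Apache with a raw socket. This is a minimal sketch (plain sockets, not Apache): the accept-queue delay is the gap between the kernel completing the TCP handshake (the client's connect() returns) and the application finally calling accept(). Both ends run in one process here, so a single clock covers both measurements.

```python
import socket
import threading
import time

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(8)                       # backlog holds not-yet-accepted connections
port = srv.getsockname()[1]

result = {}

def client():
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(("127.0.0.1", port))  # returns once the kernel queues the connection
    result["connected"] = time.monotonic()
    result["sock"] = c

t = threading.Thread(target=client)
t.start()
time.sleep(0.5)                     # simulate workers too busy to call accept()
conn, _ = srv.accept()
accepted = time.monotonic()
t.join()

queue_delay = accepted - result["connected"]
print(f"time in accept queue: {queue_delay:.3f}s")

conn.close()
result["sock"].close()
srv.close()
```

The printed delay should come out close to the 0.5 s the server spent not accepting, which is exactly the interval Apache's own logging never sees.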

thanks in advance,
David Jones


Comments (1)

想你只要分分秒秒 2024-10-18 09:39:28


I don't know if you can specifically measure the time before a connection is accepted, but you can measure the latency and variability of response times (which is the part that really matters) using the ab tool that comes with apache2-utils.

It will generate traffic at whatever concurrency you configure, then break down the response times and give you their standard deviation.
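The run below corresponds to an invocation along these lines (the URL and counts are illustrative, chosen to match the figures in the output: 100 total requests, 3 at a time):

```shell
# -n: total number of requests, -c: how many to run concurrently
ab -n 100 -c 3 https://stackoverflow.com/
```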

Server Hostname:        stackoverflow.com
Document Length:        192529 bytes
Concurrency Level:      3
Time taken for tests:   48.769 seconds
Complete requests:      100
Failed requests:        44
   (Connect: 0, Receive: 0, Length: 44, Exceptions: 0)
Write errors:           0
Total transferred:      19427481 bytes
HTML transferred:       19400608 bytes
Requests per second:    2.05 [#/sec] (mean)
Time per request:       1463.078 [ms] (mean)
Time per request:       487.693 [ms] (mean, across all concurrent requests)
Transfer rate:          389.02 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:      101  109   9.0    105     152
Processing:   829 1336 488.0   1002    2246
Waiting:      103  115  38.9    104     368
Total:        939 1444 485.2   1112    2351

Percentage of the requests served within a certain time (ms)
  50%   1112
  66%   1972
  75%   1985
  80%   1990
  90%   2062
  95%   2162
  98%   2310
  99%   2351
 100%   2351 (longest request)

(SO didn't perform particularly well :)

The other thing you could do is put a timestamp in the request itself and compare it against the current time as soon as the request is handled. If you generate the traffic on the same machine, or have the clocks synchronised, this lets you measure the request processing time end to end.
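The timestamp idea can be sketched with Python's standard-library HTTP server (the `X-Sent-At` header name and handler below are illustrative, not anything Apache provides): the client stamps each request with its send time, and the handler computes now minus sent as the total queueing-plus-handling delay. As noted above, this is only meaningful when client and server share a clock.

```python
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class StampedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Header lookup is case-insensitive, so the client's spelling doesn't matter
        sent = float(self.headers["X-Sent-At"])
        delay_ms = (time.time() - sent) * 1000.0
        body = f"{delay_ms:.1f}".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), StampedHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: stamp the request with the send time, read back the measured delay
req = Request(f"http://127.0.0.1:{server.server_port}/",
              headers={"X-Sent-At": str(time.time())})
delay_ms = float(urlopen(req).read())
print(f"request delay: {delay_ms:.1f} ms")
server.shutdown()
```

Against an Apache backend you would send the same header from your load generator and log the difference in the application, which captures the accept-queue wait that server-side timers started after accept() miss.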
