How do I measure the time an HTTP request spends in the accept queue?
I am using Apache2 on Ubuntu 9.10, and I am trying to tune my configuration for a web application to reduce latency of responses to HTTP requests.
During a moderately heavy load on my small server, there are 24 apache2 processes handling requests. Additional requests get queued.
Using "netstat", I see 24 connections are ESTABLISHED and 125 connections are TIME_WAIT.
I am trying to figure out if that is considered a reasonable backlog.
Most requests get serviced in a fraction of a second, so I am assuming requests move through the accept-queue fairly quickly, probably within 1 or 2 seconds, but I would like to be more certain.
Can anyone recommend an easy way to measure the time an HTTP request sits in the accept-queue?
The suggestions I have come across so far seem to start the clock after the apache2 worker accepts the connection. I'm trying to quantify the accept-queue delay before that.
thanks in advance,
David Jones
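For what it's worth, the delay being asked about can be reproduced with plain sockets: connect() returns as soon as the kernel completes the TCP handshake and queues the connection, so the gap between connect() returning and accept() being called approximates the accept-queue wait. A rough sketch (plain Python sockets, not Apache):

```python
import socket
import threading
import time

# Minimal sketch: delay the server's accept() call and measure how long
# the already-established connection sits in the kernel's accept queue.
def measure_accept_queue_delay(accept_delay):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 0))      # any free port
    srv.listen(5)
    port = srv.getsockname()[1]
    queued_at = {}

    def client():
        c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        c.connect(("127.0.0.1", port))   # kernel queues the connection here
        queued_at["t"] = time.monotonic()
        c.close()

    t = threading.Thread(target=client)
    t.start()
    time.sleep(accept_delay)             # simulate a busy server accepting late
    conn, _ = srv.accept()
    accepted_at = time.monotonic()
    t.join()
    conn.close()
    srv.close()
    return accepted_at - queued_at["t"]

delay = measure_accept_queue_delay(0.5)
print(f"time in accept queue: {delay:.2f}s")
```

This only demonstrates the mechanism; instrumenting a live Apache accept queue this way would need kernel-level tooling instead.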
I don't know if you can specifically measure the time before the connection is accepted, but you can measure the latency and variability of response times (and that's the part that really matters) using the "ab" tool that comes with apache utils. It will generate traffic at whatever concurrency you configure, then break down the response times and give you the standard deviation. (SO didn't perform particularly well :)
The other thing you could do is put a request timestamp in the request itself and compare it immediately when handling the request. If you generate the traffic on the same machine, or have the clocks synchronised, that will let you measure the request processing time.
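A minimal sketch of that timestamp idea, assuming everything runs on one machine with a shared clock (the X-Request-Start header name and the stdlib test server are my own choices, not an Apache feature):

```python
import http.server
import threading
import time
import urllib.request

# The client stamps the request with the send time; the handler computes
# the elapsed time on arrival, which includes any queueing before handling.
class TimingHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        sent = float(self.headers["X-Request-Start"])
        elapsed = time.time() - sent            # delay before handling
        body = f"{elapsed:.6f}".encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):               # keep the demo quiet
        pass

srv = http.server.HTTPServer(("127.0.0.1", 0), TimingHandler)
threading.Thread(target=srv.serve_forever, daemon=True).start()

req = urllib.request.Request(
    f"http://127.0.0.1:{srv.server_port}/",
    headers={"X-Request-Start": str(time.time())},
)
with urllib.request.urlopen(req) as resp:
    measured = float(resp.read().decode())
print(f"delay before handling: {measured:.6f}s")
srv.shutdown()
```

With Apache in front, the same comparison could be done in the application code, since the header travels with the request through the accept queue.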