PHP generates the page but doesn't immediately return it to the user
I'm currently testing the load capacity of a server setup I'm putting together. The apache2 server has PHP 5.X installed on it, and it connects to a master database on a separate machine, and then to 1 of 2 slave servers to do reads from.
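For context, the read/write split looks roughly like the sketch below; the hostnames, credentials, and queries are hypothetical placeholders:

<?php
// Hypothetical sketch of the setup described above: writes go to the
// master on its own machine, reads go to one of the two slaves.
$master = new mysqli('master.db.internal', 'user', 'pass', 'app');

$slaves = array('slave1.db.internal', 'slave2.db.internal');
$slave  = new mysqli($slaves[array_rand($slaves)], 'user', 'pass', 'app');

$slave->query('SELECT id FROM test_table LIMIT 1');      // reads
$master->query('INSERT INTO hits (ts) VALUES (NOW())');  // writes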
My test page takes .2 seconds to generate if I call it by itself. I created a PHP script on a different server that creates 65 simultaneous calls to the test page. The test page takes microtime() benchmarks throughout to let me know how long each section is taking. As expected (at least to me; if anyone has opinions or suggestions on this, feel free to comment), the SQL portion of the page takes a short amount of time for the first couple of requests it receives and then degrades as the rest of the queries stack up and have to wait. I thought that it might be a disk IO issue, but the same behavior occurred when testing on a solid state drive.
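The sectional benchmarking amounts to something like this minimal sketch (the section names and the work inside each block are hypothetical):

<?php
// Record a microtime() mark after each section of the page.
$marks = array('start' => microtime(true));

// ... connect to the master and pick a read slave ...
$marks['db_connect'] = microtime(true);

// ... run the page's queries against the slave ...
$marks['sql'] = microtime(true);

// ... build and output the HTML ...
$marks['render'] = microtime(true);

// Report how long each section took, in milliseconds.
$prev = $marks['start'];
foreach ($marks as $name => $t) {
    if ($name === 'start') continue;
    printf("<!-- %s: %.1f ms -->\n", $name, ($t - $prev) * 1000);
    $prev = $t;
}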
My issue is that about 30 or so of the 65 pages are created and loaded by my test script as I expected. My benchmark said the page was created in 3 seconds, for example, and my test script said it received the page in full in 3.1 seconds. The differential wasn't much. The problem is that for the other requests, my benchmark says the pages were generated in 3 seconds, but the test script didn't receive the page in full until 6 seconds. That's a full 3 seconds between the page being generated by the apache server and it being sent back to the test script that requested it. To make sure it wasn't an issue with the test script, I tried loading the page in a local browser while the test was running, and confirmed the same delay via the timeline window in Chrome.
I have tried all sorts of configurations for Apache, but can't seem to find what is causing this delay. My most recent attempt is below. The machine is a quad core AMD at 2.8GHz with 2 GB of RAM. Any help with the configuration, or other suggestions on what to do, would be appreciated. -- Sorry for the long question.
I should mention that I monitored the resources while the script was running; the CPU hit a max of 9% load and there was always at least 1 GB of RAM free.
I'll also mention that the same type of thing occurs when all I'm requesting is a static HTML page. The first couple take .X seconds, and then it slowly ramps up to 3 seconds.
LockFile ${APACHE_LOCK_DIR}/accept.lock
PidFile ${APACHE_PID_FILE}
Timeout 120
MaxClients 150
KeepAlive On
KeepAliveTimeout 4
MaxKeepAliveRequests 150
Header always append x-frame-options sameorigin
StartServers 50
MinSpareServers 25
MaxSpareServers 50
MaxClients 150
MaxRequestsPerChild 0
User ${APACHE_RUN_USER}
Group ${APACHE_RUN_GROUP}
AccessFileName .httpdoverride
Order allow,deny
DefaultType text/plain
HostnameLookups Off
ErrorLog ${APACHE_LOG_DIR}/error.log
LogLevel warn
Include mods-enabled/*.load
Include mods-enabled/*.conf
Include httpd.conf
Include ports.conf
LogFormat "%v:%p %h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%h %l %u %t \"%r\" %>s %O" common
LogFormat "%{Referer}i -> %U" referer
LogFormat "%{User-agent}i" agent
Include conf.d/
Include sites-enabled/
AddType application/x-httpd-php .php
AddType application/x-httpd-php-source .phps
SecRuleEngine On
SecRequestBodyAccess On
SecResponseBodyAccess Off
SecUploadKeepFiles Off
SecDebugLog /var/log/apache2/modsec_debug.log
SecDebugLogLevel 0
SecAuditEngine RelevantOnly
SecAuditLogRelevantStatus ^5
SecAuditLogParts ABIFHZ
SecAuditLogType Serial
SecAuditLog /var/log/apache2/modsec_audit.log
SecRequestBodyLimit 131072000
SecRequestBodyInMemoryLimit 131072
SecResponseBodyLimit 524288000
ServerTokens Full
SecServerSignature "Microsoft-IIS/5.0"
UPDATE:
It seems a lot of responses are focusing on the idea that the SQL is the culprit, so I'm stating here that the same behavior happens with a static HTML page. The results of the benchmark run are listed below.
Concurrency Level:      10
Time taken for tests:   5.453 seconds
Complete requests:      1000
Failed requests:        899
   (Connect: 0, Receive: 0, Length: 899, Exceptions: 0)
Write errors:           0
Total transferred:      290877 bytes
HTML transferred:       55877 bytes
Requests per second:    183.38 [#/sec] (mean)
Time per request:       54.531 [ms] (mean)
Time per request:       5.453 [ms] (mean, across all concurrent requests)
Transfer rate:          52.09 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   21  250.7      0    3005
Processing:    16   33   17.8     27     138
Waiting:       16   33   17.8     27     138
Total:         16   54  253.0     27    3078

Percentage of the requests served within a certain time (ms)
  50%     27
  66%     36
  75%     42
  80%     46
  90%     58
  95%     71
  98%     90
  99%    130
 100%   3078 (longest request)
I'll also state that I determined, using PHP and microtime(), that the lag is happening before the page is generated. I determined this from the difference in time between the page being generated and my test script receiving it. That difference is consistent: the amount of time from the point the page is generated until the point my test script receives it is the same no matter how long the entire request takes.
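For anyone wanting to reproduce that measurement, it amounts to something like the sketch below; the X-Gen-Time header and the URL are hypothetical stand-ins (the test page would send its own microtime() total in that header):

<?php
// Client side: measure total wall time for the request, then subtract
// the generation time the server reports about itself. No clock sync
// between machines is needed, since each side only times itself.
$start = microtime(true);

$ch = curl_init('http://webserver/testpage.php');  // hypothetical URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HEADER, true);
$response = curl_exec($ch);
curl_close($ch);

$total = microtime(true) - $start;

// Pull the server-reported generation time out of the response headers.
preg_match('/^X-Gen-Time:\s*([0-9.]+)/mi', $response, $m);
$generated = isset($m[1]) ? (float)$m[1] : 0.0;

// Whatever is left was spent outside page generation: queueing in
// Apache, the network, or the server holding the finished response.
printf("total: %.3fs  generated: %.3fs  unaccounted: %.3fs\n",
       $total, $generated, $total - $generated);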
Thank you to all who have responded. All are good points; I just can't say any of them has solved the issue.
What is the exact number of pages that load before the drop-off? You mentioned that you were creating 65 simultaneous requests from a single, external script. You don't have a mod like limitipconn enabled that would be limiting things after N connections from a single IP, or something similar? Is it always exactly 30 (or whatever) connections and then the delay?
There are many other factors, but I'm really guessing that you're spawning 30-40 processes quickly, each using 30M or so, killing your machine's limited memory, then continuing to spawn new ones and thrashing to swap, slowing everything down.
With 2 GB of RAM, MaxClients at 150, and MaxRequestsPerChild at 0, the server resources are probably getting swamped even if your DB isn't on the same physical server.
Basically, for web server performance you never want to swap. Run your tests and then immediately check memory on the web server with:
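free -m   # presumably the command meant here (the original block was lost); it prints memory and swap usage in MB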
This will give you memory usage in MB and swap usage. Ideally you should see swap at 0 or close to 0. If swap usage isn't zilch or very low, the issue is simply memory running out; your server is thrashing and therefore wasting CPU, resulting in slow response times.
You need to get some numbers to be certain, but first run 'top' and press Shift-M while top is running to sort by memory. The next time you run your tests, find a ballpark number for how much %MEM is being reported for each httpd process. It will vary, so it's best to use the higher ones as your guide for a worst-case bound. I've got a wordpress, a drupal, and a custom site on the same server that routinely allocate 20M per httpd process from the start and eventually grow over time, if unchecked, to past 100M each.
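If you'd rather not eyeball top, a standard ps one-liner gives the same ballpark (the process may be named httpd instead of apache2, depending on the distro):

ps -o pid,rss,pmem,cmd -C apache2 --sort=-rss | head   # RSS is in KB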
Pulling some numbers out of my butt as an example: if I had 2G, and linux, core services, and mysql were using 800M, I'd keep my expectation for the memory available for Apache fun under 1G. With this, if my apache processes were using an average of 20M on the high side, I could only have 50 MaxClients. That's a very non-conservative number; in real life I'd drop Max down to 40 or so to be safe. Don't try to pinch memory... if you're serving up enough traffic to have 40 simultaneous connections, pony up the $100 to go to 4G before inching up the max servers. It's one of those things where once you cross the line everything goes down the toilet, so stay safely under your memory limits!
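The arithmetic in that example, spelled out with the same assumed numbers:

<?php
// Back-of-the-envelope MaxClients sizing from the example above.
$total_ram = 2048;  // MB on the box
$reserved  = 1048;  // MB for the OS, core services, and MySQL,
                    // leaving Apache just under 1G to play with
$per_child = 20;    // MB worst-case per Apache child

echo floor(($total_ram - $reserved) / $per_child);  // 50, then drop to ~40 for safety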
Also, with php I like to keep MaxRequestsPerChild at 100 or so... you're not CPU bound serving web pages, so don't worry about saving a few milliseconds spawning new child processes. Setting it to 0 means unlimited requests, and children never get killed off unless the number of idle children exceeds MaxSpareServers. This is generally A VERY BAD THING with php under apache workers, as they just keep growing until badness occurs (like having to hard restart your server because you can't log in, because apache used up all the memory and ssh can't work without timing out).
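In config terms the advice boils down to something like this sketch; the numbers are illustrative for a 2 GB box, not drop-in values:

# mpm_prefork sizing sketch: illustrative numbers, adjust to your measurements
StartServers         10
MinSpareServers      10
MaxSpareServers      20
MaxClients           40     # roughly usable RAM / worst-case per-child RSS
MaxRequestsPerChild 100     # recycle children so PHP memory growth can't accumulate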
Good Luck!