How to measure the efficiency of a PHP script

I want to know the best way to benchmark my PHP scripts. It does not matter whether it is a cron job, a web page, or a web service.

I know I can use microtime(), but does it really give me the real execution time of a PHP script?

I want to test and benchmark different functions in PHP that do the same thing, for example preg_match vs strpos, DOMDocument vs preg_match, or preg_replace vs str_replace.

Example of a webpage:

<?php
// login.php

$start_time = microtime(TRUE);

session_start(); 
// do all my logic etc...

$end_time = microtime(TRUE);

echo $end_time - $start_time;

This will output something like 0.0146126717 (it varies on every run; that is just the last value I got). This means it took roughly 0.015 seconds to execute the PHP script.

Is there a better way?

Comments (7)

熊抱啵儿 2024-12-25 08:00:14

If you actually want to benchmark real-world code, use tools like Xdebug and XHProf.

Xdebug is great when you're working in dev/staging, and XHProf is a great tool for production; it's safe to run it there (as long as you read the instructions). The results of any single page load aren't going to be as relevant as seeing how your code performs while the server is being hammered with a million other things and resources become scarce. This raises another question: are you bottlenecked on CPU? RAM? I/O?

You also need to look beyond just the code you are running in your scripts to how your scripts/pages are being served. What web server are you using? As an example, I can make nginx + PHP-FPM seriously outperform mod_php + Apache, which in turn gets trounced for serving static content by a good CDN.

The next thing to consider is what you are trying to optimise for:

  • Is the speed with which the page renders in the user's browser the number one priority?
  • Is getting each request to the server thrown back out as quickly as possible, with the smallest CPU consumption, the goal?

The former can be helped by doing things like gzipping all resources sent to the browser, yet doing so could (in some circumstances) push you further away from achieving the latter.
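
For instance, compression of the response can be switched on at the PHP level with the built-in ob_gzhandler output callback (a minimal sketch; in practice this is more often configured in the web server or via zlib.output_compression):

    <?php
    // Compress the response body if the client advertises gzip support.
    // ob_gzhandler inspects the Accept-Encoding header itself and falls back
    // to sending the output uncompressed when gzip is not accepted.
    ob_start('ob_gzhandler');

    echo str_repeat('Some highly compressible page content. ', 1000);

    // Flush the (possibly compressed) buffer at the end of the request.
    ob_end_flush();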

Hopefully all of the above can help show that carefully isolated 'lab' testing will not reflect the variables and problems that you will encounter in production, and that you must identify what your high level goal is and then what you can do to get there, before heading off down the micro/premature-optimisation route to hell.

月亮邮递员 2024-12-25 08:00:14

To benchmark how fast your complete script runs on the server, there are plenty of tools you can use. First make sure the scripts you are comparing (preg_match vs strpos, for example) produce the same output, so the comparison is a fair one.
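
As a quick illustration of that first point, here is a minimal sketch (the inputs are made up) that checks both approaches agree before any timing is done:

    <?php
    // Both candidates must give the same answer before any timing matters.
    $haystack = str_repeat('lorem ipsum dolor sit amet ', 1000) . 'needle';

    $viaStrpos = strpos($haystack, 'needle') !== false;
    $viaPreg   = preg_match('/needle/', $haystack) === 1;

    // Expect bool(true); otherwise the benchmark would be meaningless.
    var_dump($viaStrpos === $viaPreg);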

平安喜乐 2024-12-25 08:00:14

You will want to look at Xdebug and, more specifically, Xdebug's profiling capabilities.

Basically, you enable the profiler, and every time you load a webpage it creates a cachegrind file that can be read with WinCacheGrind or KCacheGrind.

Xdebug can be a bit tricky to configure, so here is the relevant section of my php.ini for reference:

[XDebug]
zend_extension = h:\xampp\php\ext\php_xdebug-2.1.1-5.3-vc6.dll
xdebug.remote_enable=true
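; Note: profiler_enable_trigger=1 (an Xdebug 2.x setting) means a cachegrind file is
; only written when the request carries the XDEBUG_PROFILE trigger as a GET/POST
; variable or cookie, e.g. http://localhost/login.php?XDEBUG_PROFILE=1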
xdebug.profiler_enable_trigger=1
xdebug.profiler_output_dir=h:\xampp\cachegrind
xdebug.profiler_output_name=callgrind.%t_%R.out

And here is a screenshot of a .out file in WinCacheGrind:

[screenshot: the generated cachegrind file opened in WinCacheGrind]

That should provide ample detail about how efficient your PHP script is. You want to target the things that take the most time. For example, you could optimize one function to take half the time, but your effort would be better spent optimizing a function that is called dozens if not hundreds of times during a page load.

If you are curious, this is just an old version of a CMS I wrote for my own use.

古镇旧梦 2024-12-25 08:00:14

Try https://github.com/fotuzlab/appgati

It allows you to define steps in the code and reports the time, memory usage, server load, etc. between two steps.

Something like:

    // The AppGati class ships with the repository linked above; the file and
    // class names below are assumptions based on that project.
    require 'AppGati.php';
    $appgati = new AppGati();

    $appgati->Step('1');

    // Do some code ...

    $appgati->Step('2');

    $report = $appgati->Report('1', '2');
    print_r($report);

Sample output array:

Array
(
    [Clock time in seconds] => 1.9502429962158
    [Time taken in User Mode in seconds] => 0.632039
    [Time taken in System Mode in seconds] => 0.024001
    [Total time taken in Kernel in seconds] => 0.65604
    [Memory limit in MB] => 128
    [Memory usage in MB] => 18.237907409668
    [Peak memory usage in MB] => 19.579357147217
    [Average server load in last minute] => 0.47
    [Maximum resident shared size in KB] => 44900
    [Integral shared memory size] => 0
    [Integral unshared data size] => 0
    [Integral unshared stack size] => 
    [Number of page reclaims] => 12102
    [Number of page faults] => 6
    [Number of block input operations] => 192
    [Number of block output operations] => 
    [Number of messages sent] => 0
    [Number of messages received] => 0
    [Number of signals received] => 0
    [Number of voluntary context switches] => 606
    [Number of involuntary context switches] => 99
)
雪花飘飘的天空 2024-12-25 08:00:14

I'd look into xhprof. It doesn't matter whether it's run on the CLI or via another SAPI (like FPM or FastCGI, or even the Apache module).

The best part about xhprof is that it's fit to be run in production, something that doesn't work as well with Xdebug (last time I checked). Xdebug has a noticeable impact on performance, while xhprof (I wouldn't say it has no overhead) manages it a lot better.

We frequently use xhprof to collect samples with real traffic and then analyze the code from there.

It's not really a benchmark in the sense of just giving you a time, though it does that as well. It makes it very easy to analyze production traffic and then drill down to the PHP function level in the collected callgraph.

Once the extension is compiled and loaded, you start profiling in the code with:

xhprof_enable(XHPROF_FLAGS_CPU + XHPROF_FLAGS_MEMORY);

To stop:

$xhprof_data = xhprof_disable();

Then save the data to a file or database - whatever floats your boat and doesn't interrupt the usual runtime. We asynchronously push this to S3 to centralize the data (to be able to see all runs from all of our servers).
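
For reference, a minimal sketch of that saving step using the helper classes bundled in the package's xhprof_lib directory (the include path is a placeholder for wherever you unpacked the source):

    <?php
    $xhprof_data = xhprof_disable();

    // Helper classes shipped with the xhprof source.
    include_once '/path/to/xhprof_lib/utils/xhprof_lib.php';
    include_once '/path/to/xhprof_lib/utils/xhprof_runs.php';

    // Persist the run under a namespace; the returned id is what the
    // xhprof_html viewer uses to locate the run.
    $xhprof_runs = new XHProfRuns_Default();
    $run_id = $xhprof_runs->save_run($xhprof_data, 'myapp');

    error_log("xhprof run saved: {$run_id} (namespace: myapp)");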

The code on GitHub contains an xhprof_html folder which you drop onto the server, and with minimal configuration you can visualize the collected data and start drilling down.

HTH!

︶ ̄淡然 2024-12-25 08:00:14

Put it in a for loop that does each thing 1,000,000 times to get a more realistic number. Only start the timer just before the code you actually want to benchmark, and record the end time just after (i.e. don't start the timer before the session_start() call).

Also make sure the code is identical for each function you want to benchmark, with the exception of the function you are timing.
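
A minimal sketch of both points, assuming we are comparing strpos against preg_match on the same input:

    <?php
    // Setup such as session_start() happens here, outside the timed region.
    $haystack   = str_repeat('abcdefghij', 100) . 'needle';
    $iterations = 1000000;

    // Time strpos: start the clock only around the code under test.
    $start = microtime(true);
    for ($i = 0; $i < $iterations; $i++) {
        strpos($haystack, 'needle');
    }
    $strposTime = microtime(true) - $start;

    // Time preg_match with an otherwise identical loop.
    $start = microtime(true);
    for ($i = 0; $i < $iterations; $i++) {
        preg_match('/needle/', $haystack);
    }
    $pregTime = microtime(true) - $start;

    printf("strpos: %.4fs, preg_match: %.4fs\n", $strposTime, $pregTime);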

How the script is executed (cron job, PHP from the command line, Apache, etc.) should not make a difference, since you are only measuring the relative difference between the speeds of the different functions, so that ratio should remain the same.

If the computer on which you are running the benchmark has many other things going on, the results could be skewed by a spike in CPU or memory usage from another application while your benchmark is running. But as long as the computer has plenty of resources to spare, I don't think this will be a problem.

独留℉清风醉 2024-12-25 08:00:14

Eric,

You are asking yourself the wrong question. If your script executes in ~15 ms, then its time is largely irrelevant. If you run on a shared service, then PHP image activation will take ~100 ms, reading in the script files ~30-50 ms if they are fully cached on the server, and possibly a second or more if they are loaded from a backend NAS farm. Network delays in loading the page furniture can add many seconds.

The main issue here is the user's perception of load time: how long he or she has to wait between clicking the link and getting a fully rendered page. Have a look at Google Page Speed, which you can use as a Firefox or Chrome extension, and the PageSpeed documentation, which discusses in depth how to get good page performance. Follow these guidelines and try to get your page to score better than 90/100. (The Google home page scores 99/100, as does my blog.) This is the best way to get good user-perceived performance.
