Making PHP performance profiling predictable
I'm using xdebug with PHP to do some performance profiling. But when I run the same script more than once, I often get very different times. So it's hard to know how much faith to put in the results.
Obviously there's a lot happening on a machine that can affect PHP's performance. But is there anything I can do to reduce the number of variables, so multiple tests are more consistent?
I'm running PHP under Apache, on Mac OS X.
- Use ab or siege, to make sure all Apache children are hit.
- Request the script with curl or wget from the command line, so that Apache only serves one resource: the script itself.

There may be an argument for getting more "real world" numbers by omitting some of these steps. I look forward to other answers this question may receive.
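A minimal sketch of that routine driven from PHP itself, with placeholders invented for illustration (the URL, the warm-up count, and the sample count are all made up); a load tool like ab or siege does the warm-up more thoroughly, but the shape is the same: warm the server first, then time several requests and take the median:

```php
<?php
// Hypothetical harness for the advice above. Adjust the placeholders
// for your own setup.

$url    = 'http://localhost/profile_me.php'; // the script being profiled
$warmup = 50;   // rough warm-up so Apache children get to serve the script
$sample = 10;   // requests actually timed

$fetch = function ($u) {
    exec('curl -s -o /dev/null ' . escapeshellarg($u));
};

// Warm-up pass: not timed, just served.
for ($i = 0; $i < $warmup; $i++) {
    $fetch($url);
}

// Timed pass: record each request's wall-clock time.
$times = array();
for ($i = 0; $i < $sample; $i++) {
    $start = microtime(true);
    $fetch($url);
    $times[] = microtime(true) - $start;
}

sort($times);
printf("median of %d requests: %.4f s\n", $sample, $times[(int) ($sample / 2)]);
```

Reporting the median rather than a single run (or the mean) makes one slow outlier, say a background process waking up, much less damaging to the comparison.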
There are two different tasks: measuring performance and finding problems.
For measuring the time it takes, you should expect variability, because it depends on what else is going on in the machine. That's normal.
For finding problems, what you need to know is the percent of time used by various activities. Percent doesn't change too much as a function of other things, and the exact value of the percent doesn't matter much anyway.
What matters is that you find activities responsible for a healthy percent, that you can fix, and then that you fix them. When you do, you can expect to save time up to that percent, but the finding is what you need to do. The measuring is secondary.
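That claim is easy to check with a self-contained sketch (the two phases below are invented stand-ins, not anything from the question). Run it a few times, or while the machine is busy: the absolute times drift, but each phase's share of the total barely moves:

```php
<?php
// Run the same two-phase workload three times. Absolute times vary with
// whatever else the machine is doing; the percentages stay roughly stable.

function timed($work) {
    $start = microtime(true);
    $work();
    return microtime(true) - $start;
}

for ($run = 1; $run <= 3; $run++) {
    $t = array(
        'square' => timed(function () {
            $out = array();
            for ($n = 1; $n <= 200000; $n++) { $out[] = $n * $n; }
        }),
        'encode' => timed(function () {
            json_encode(range(1, 200000));
        }),
    );

    $total = array_sum($t);
    printf('run %d:', $run);
    foreach ($t as $name => $secs) {
        printf('  %s %.4fs (%.0f%%)', $name, $secs, 100 * $secs / $total);
    }
    echo "\n";
}
```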
Added: You might want to ask "Don't you have to measure in order to find?"
Consider an example. Suppose you run your program with debugging turned on, you randomly pause it, and you see it in the process of closing a log file. You continue it, then pause it again, and see the same thing. Well, that rough "measurement" says it's spending 100% of its time doing that. Naturally the time spent doing it isn't really 100%, but whatever it is, it's big, and you've found it. So then maybe you don't have to open/close the file so often, or something. Typically, more samples are needed, but not too many.
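A hypothetical reconstruction of that scenario and its fix (none of this code is from the answer; the file name and line count are invented): the same logging done with a reopen per write, and then with the file held open:

```php
<?php
// Write the same number of log lines two ways: reopening the file on
// every write, versus opening it once and keeping it open.

function logReopening($file, $lines) {
    $start = microtime(true);
    for ($i = 0; $i < $lines; $i++) {
        $fh = fopen($file, 'a');  // open and close around every write
        fwrite($fh, "line $i\n");
        fclose($fh);
    }
    return microtime(true) - $start;
}

function logKeptOpen($file, $lines) {
    $start = microtime(true);
    $fh = fopen($file, 'a');      // open once, close once
    for ($i = 0; $i < $lines; $i++) {
        fwrite($fh, "line $i\n");
    }
    fclose($fh);
    return microtime(true) - $start;
}

$file  = tempnam(sys_get_temp_dir(), 'log');
$lines = 20000;

printf("reopen every write: %.4f s\n", logReopening($file, $lines));
printf("keep file open:     %.4f s\n", logKeptOpen($file, $lines));

unlink($file);
```

On most systems the second version is several times faster, which is exactly the kind of "healthy percent" the random pauses would have pointed at.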