How much impact does including PHP files have on performance?

Posted 2024-11-15 14:13:56

Question pretty much states it all: I am working on a large project where most requests include between 100 and 150 files via PHP's include(). On average, PHP takes between 150 and 300 ms. I'm wondering how much of this is due to including PHP scripts. I've been thinking about running a script that checks the most-accessed files for particular calls and merges them into one file to speed things up, but for all I know this has zero impact.

I should note that I use APC. I'm not fully aware of what APC does in the background, but I imagine it might already cache my files somehow, so the number of files doesn't really make a big difference?

Would appreciate any input on the subject.

Of course, 300 ms isn't much, but if I can bring it down to, say, 100 or even 50 ms, that's a significant boost.

Edit:

To clarify I am talking about file loading by php include / require.

Comments (3)

寄居者 2024-11-22 14:13:56

File loading is a tricky thing. As others have said, the only sure-fire way to tell is to do some benchmarks. However, here are some general rules that apply only to PHP loading, not to files read with fopen:

  • APC will store its opcode cache in shared memory so you will take a hit on the first load but not subsequent loads.
  • include and include_once (and their require cousins) are actually quite heavy. Here are some tips to improve their speed:
    • Use absolute paths to your files (avoid relative paths like ../foo.php)
    • Both the _once functions need to check to make sure that the file wasn't also included via a symbolic link since a symbolic link can produce multiple paths to the same file. This is extremely expensive. (see next point)
  • It is much cheaper to load only the files you need than to call include. Make use of auto-loaders to only load classes when they are needed.
  • Local disks will almost always be a better bet than networked storage. When possible, if you have multiple servers, keep copies of the source code on each server. It means you need to update multiple places during a release but it is worth the effort in performance.
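
The autoloader suggestion above can be sketched like this. It's a toy example: the temp-directory setup just stands in for a real class directory, and the class name is illustrative, not from the answer.

```php
<?php
// Stand-in for a real class file on disk (path/name are illustrative).
$dir = sys_get_temp_dir();
file_put_contents($dir . '/DemoWidget.php',
    '<?php class DemoWidget { public $ok = true; }');

// The autoloader runs only when an unknown class is first referenced,
// so the file is include-d on demand instead of up front.
spl_autoload_register(function ($class) use ($dir) {
    $file = $dir . '/' . $class . '.php'; // absolute path, per the tip above
    if (is_file($file)) {
        require $file;
    }
});

$w = new DemoWidget(); // DemoWidget.php is loaded only at this point
var_dump($w->ok);      // bool(true)
```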

Overall it is dependent on your hard disk speed. But compared to not loading a file at all, or loading it from RAM, file loading is incredibly slow.

I hope that helped.

撩人痒 2024-11-22 14:13:56

That is quite a few files, but not unexpected if you're using a framework (Zend, by any chance?). The impact of including that many files depends mainly on your server's hard drive speed. Regardless, file access is extremely slow, so if you can, reduce the number of includes.

APC does/can cache the opcodes for all those files in memory though, meaning no more disk seeks until the cache is invalidated/destroyed.

  • Try turning APC off and see how much of a difference it makes. There should be a noticeable spike in execution time.

  • Try profiling the script with xdebug. You'll most likely find that there are other issues (code issues) that affect performance more than the file access.
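
Both experiments can be run from the command line. The flags below assume APC's standard ini switch and Xdebug 3's profiler settings (Xdebug 2, current when this was written, used different directive names); the script name and output directory are illustrative:

```shell
# 1. Compare timings with the APC opcode cache off vs. on
php -d apc.enabled=0 index.php
php -d apc.enabled=1 index.php

# 2. Profile with Xdebug 3; this writes a cachegrind.out.* file
#    that KCachegrind/QCachegrind or Webgrind can visualize
php -d xdebug.mode=profile -d xdebug.output_dir=/tmp index.php
```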

云仙小弟 2024-11-22 14:13:56

I know this is a super old question, but I was wondering the same thing and Google brought me here. I wound up doing some benchmarks using PHP 7.4 (now end-of-life) on a small, general-purpose Digital Ocean server with an SSD drive.

Methodology:

I created 260 classes, each in its own file. Files are named something like A0.php, A1.php, A2.php, ..., Z8.php, Z9.php. At 608 bytes, they are small, so keep that in mind when reading results.

I also created a single file, LotsOfClasses.php, with all 260 classes in it. The file was approximately 155k in size.

Scripts were run from the command line with no caching by PHP. One should assume disk caching was working as it normally would by the underlying CentOS operating system.

Here is what the test scripts basically look like (microtime was chosen over hrtime as its output is easier for humans to read):

<?php
$start = microtime(true);
foreach (range('A', 'Z') as $letter) {
    foreach (range(0, 9) as $i) {
        include "{$letter}{$i}.php";
    }
}
echo (microtime(true) - $start) . "\n";

and

<?php
$start = microtime(true);
include "LotsOfClasses.php";
echo (microtime(true) - $start) . "\n";

I then ran each script in a batch of 100, 3 times:

for i in {1..100}; do php benchmark.php; done > time.txt

I also wrote a script to calculate the average of times. Nothing fancy, but including here for completeness:

<?php
$file = file_get_contents("time.txt");
$values = array_filter(explode("\n", $file));
echo (array_sum($values) / sizeof($values)) . "\n";

Command Line Results:

The average time it took to include 1 file with 260 classes was 11.79 ms.

The average time it took to include 260 files, each with one class, was 21.42 ms.

Web Server Results:

Astute readers may ask how including a single file of approximately 155k can take 11.79 ms. Does that mean that if your application loaded 100 such files, it would take over 1 second just to include files? If using the command line, the answer is yes. However, PHP is generally run via a web server. When running on the command line, PHP has to parse the file each time. When loading files via the server, PHP uses its OPcache, which is significantly faster.
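
If you want the CLI runs to behave more like the web server, OPcache can be switched on for the CLI too, and with its optional second-level file cache, compiled scripts even persist across separate runs. The directives below are standard php.ini settings; the cache path is illustrative:

```shell
# Directory for OPcache's second-level file cache
mkdir -p /tmp/opcache

# Plain CLI: every run re-parses all 260 files
php benchmark.php

# CLI with OPcache and its file cache enabled
php -d opcache.enable_cli=1 -d opcache.file_cache=/tmp/opcache benchmark.php
```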

The average time it took to include 1 file with 260 classes was 0.056 ms.

The average time it took to include 260 files from the web server was 0.65 ms.

Conclusion:

While one large file was almost instantaneous, there was only a 0.6 ms difference between the 1 file and the 260 smaller files when loading via a web server. The increase is negligible, and combining files would largely be a micro-optimization. Even in a larger app, I can't imagine it making more than a couple of milliseconds' difference. In addition, a single file would likely be more complex to maintain. I'd say the trade-off of loading multiple maintainable files for a negligible increase in time is worth it.

For what it's worth, I also tried include_once with the 260-file version on a web server, and it only increased the time by 0.08 ms.
