Why does my mod_perl script freeze my server?

Posted 2024-08-07 03:33:54

I cannot make my Perl script run stably on the server.
Here is the problem.

When the script is accessed more than 5 times a second, the server freezes.
And some time later the server hangs forever.
SSH does not respond and I have to restart the server.

I'm using Apache with mod_perl.

The script is hosted on a Virtual Dedicated Server under Ubuntu.
I'm operating it through SSH.
These are the server params:
CPU: 400 MHz
RAM: 256 MB

The maximal execution time of the script is 200 milliseconds.

I have monitored server load with the "top" utility.
It does not show any problems; these are the CPU statistics during a load of 5 scripts per second:

Cpu(s): 12.1%us,  0.6%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si, 87.2%st

What options do I have to make the script work without problems?

This is the result of ps aux | fgrep perl while the server is under load:

ps aux | fgrep perl
www-data  2925  0.3  6.5  45520 17064 ?        R    17:00   0:01 /var/www/perl/loa -k start
www-data  2926  0.2  6.5  45520 17068 ?        R    17:00   0:01 /var/www/perl/loa -k start
www-data  2927  0.4  6.5  45676 17060 ?        R    17:00   0:01 /var/www/perl/loa -k start
www-data  2928  0.3  6.5  45676 17060 ?        R    17:00   0:01 /var/www/perl/loa -k start
www-data  2929  0.2  6.5  45676 17060 ?        R    17:00   0:01 /var/www/perl/loa -k start
www-data  2931  0.4  6.5  45740 17076 ?        R    17:00   0:01 /var/www/perl/loa -k start
root      2968  0.0  0.2   3196   656 pts/0    R+   17:06   0:00 fgrep perl

UPDATE

I have found the bottleneck.
I've been using the DateTime module many times throughout the code.
The following DateTime module methods appear to be very slow.

  • new()
  • now()
  • set(...)
  • delta_ms(...)

I'm going to substitute them with faster alternatives.
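For illustration, here is a minimal sketch of such a replacement, assuming the DateTime calls are only used to measure elapsed milliseconds; the core Time::HiRes module avoids constructing DateTime objects on every request (do_work below is a hypothetical placeholder):

#!/usr/bin/perl
use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

# Take a high-resolution start timestamp ([seconds, microseconds]).
my $t0 = [gettimeofday];

do_work();    # hypothetical placeholder for the real request handling

# tv_interval() returns the time elapsed since $t0 in floating-point seconds.
printf "handled request in %.1f ms\n", tv_interval($t0) * 1000;

# Simulate ~50 ms of work without pulling in anything else.
sub do_work { select(undef, undef, undef, 0.05) }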

Another concern: a mod_perl instance takes a lot of memory,
and I have no idea why.
I have tried to run a simple perl script that does not import any modules.
I ran it just after an Apache restart.
The script takes 37M of memory.
Why does this happen?
Do you know how to force mod_perl not to use the extra memory?

A regular perl script, without mod_perl support, takes 3-5M of memory.
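For comparison, here is a minimal, Linux-specific sketch of how the footprint can be checked from inside the process itself by reading /proc/self/status (VmSize is the virtual size, VmRSS the resident portion); running it once as a plain CGI script and once under mod_perl should show the difference directly:

#!/usr/bin/perl
use strict;
use warnings;

# Read this process's memory counters from the Linux proc filesystem.
open my $fh, '<', '/proc/self/status' or die "cannot read /proc/self/status: $!";
my %mem;
while (my $line = <$fh>) {
    # Lines look like "VmRSS:     17064 kB"
    $mem{$1} = $2 if $line =~ /^(VmSize|VmRSS):\s+(\d+)\s+kB/;
}
close $fh;

print "Content-type: text/plain\n\n";
printf "pid %d: VmSize %s kB, VmRSS %s kB\n", $$, $mem{VmSize}, $mem{VmRSS};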

Guys, thank you for so much help; I wasn't expecting such a wonderful response!

UPDATE 2

I have found one more fact.
I've created a simple perl script that just waits for 5 seconds.

#!/usr/bin/perl
use CGI;

my $query= new CGI;
my $content = "5 second delay...\n";

$query->header(
    '-Content-type' => "text/plain",
    '-Content-Length' => length($content)
);

print $content;

sleep(5);

Then I spawn many of these scripts at the same time.
Steal time (st) in the top utility jumps from 0% to 80% and stays high until the scripts are done.

Where does this load come from?

Also, as I've already mentioned, each perl instance takes 36M of memory.

Comments (4)

柠檬色的秋千 2024-08-14 03:33:54

Your numbers from top seem to indicate that other processes outside your VM are throttling your CPU. Note the last number, 87.2%st, which indicates that about 87% of your CPU time is being allocated by your hypervisor to tasks outside your VM even though your VM has things it would like to run. Whether this is related to your problem or not is hard to say.

Beyond upgrading your server as suggested by unwind, or using a persistent process environment as suggested by zoul, it's possible that your process isn't CPU-bound at all but is instead IO-bound (for example on network or disk access) or memory-bound. It's hard to say without more details on what your script is actually doing when it's invoked.

EDIT: Your updated question with info on your memory usage is revealing, as each of your processes wants 45M of RAM all to itself and is sharing 17M more. With just 5 or 6 processes running, you're exceeding the amount of RAM available. That's a good amount of memory for a vanilla Perl script to use; what's it doing with it?
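One common mitigation, if the per-child footprint can't be reduced much, is to have mod_perl recycle children that grow too large. A hedged sketch, assuming the Apache2::SizeLimit module is installed; the thresholds are made-up numbers for a 256 MB box, and the check is hooked in with a PerlCleanupHandler directive:

# In startup.pl or a <Perl> section of the Apache configuration; sizes are in KB.
# Also requires "PerlCleanupHandler Apache2::SizeLimit" so the check runs
# after each request.
use Apache2::SizeLimit;

# Recycle a child once its total size exceeds ~40 MB ...
Apache2::SizeLimit->set_max_process_size(40_000);
# ... or once its unshared portion exceeds ~20 MB ...
Apache2::SizeLimit->set_max_unshared_size(20_000);
# ... or once the memory it shares with the parent drops below ~10 MB.
Apache2::SizeLimit->set_min_shared_size(10_000);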

乖乖公主 2024-08-14 03:33:54

That's not a very large server. Could it simply be spawning the Perl interpreter that brings it to its knees? Loading perl (which I happily assume is more than 1 MB) five times a second might be asking too much.

Of course, it should be cached, but it will still need initialization before being able to execute.

空气里的味道 2024-08-14 03:33:54

While, by today's standards, the server's specs are not impressive, I have run fairly complicated stuff concurrently on similar hardware. However, I used a very barebones, run-only-what-is-necessary FreeBSD configuration (similar to what you can achieve using ArchLinux). I suspect you did not do a lot of custom configuration and accepted Ubuntu defaults, which may be too heavy for those specs.

Currently, I am playing around with a Linode 360 and performance is fine.

Now, all this is meant to state the obvious: we need information that you have but have not shared with us, such as the web server configuration, the memory footprint of the script + interpreter, how many files are open, and so on. Either try to provide the smallest script that still exhibits the problem, or provide more information.

Update: Now that I see you are using mod_perl: 1) Have you made sure all libraries needed by the script were preloaded at server start? 2) Are you getting any "variable won't stay shared" messages in the log? 3) Have you read mod_perl Performance? (Chapter 10: Sharing Memory might be especially relevant).

In general, you should preload common libraries at the start of the Apache server. As a very simplified rule of thumb, the more stuff stays shared, the more you can get out of your server. See Startup File in Practical mod_perl.
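As a sketch of what that could look like here (the path and module list are assumptions based on the question; the file is pulled in with a PerlRequire directive in the Apache configuration):

# /etc/apache2/startup.pl -- loaded once in the parent via
# "PerlRequire /etc/apache2/startup.pl" before the children are forked,
# so the compiled module code is shared copy-on-write.
use strict;
use warnings;

use CGI ();
use DateTime ();

1;    # startup.pl must return a true value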

Plus, I think 35MB per server is a little much. I think you could cut that down if you eliminated unneeded modules from the Apache configuration. However, even if you could not: say all of that 35MB is shared and the maximum child process is 50MB, then you should be able to accommodate about 20 clients at a time.

I just noticed the script you are testing. Really, try preloading CGI at server startup by adding the following lines to your startup.pl:

use strict;
use warnings;

use CGI();

Second, change that script to:

#!/usr/bin/perl

use strict;
use warnings;
use CGI ();

$| = 1;

handle_request();

sub handle_request {
    my $cgi = CGI->new;

    my $content = "5 second delay...\n";

    print $cgi->header('text/plain'), $content;

    sleep(5);
}

Note that you were never sending the header in the original script (I also hate calling a CGI instance $query so I took the liberty of changing that as well). See also Perl Reference.

Report back the memory usage after that.

Finally, why are you sleeping 5 seconds? AFAIK, Apache's default time out for a script is 3 seconds.

春花秋月 2024-08-14 03:33:54

What kind of interface does the script use? You would certainly get a better performance if you could avoid running the perl executable again and again, for example by using FastCGI.
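As an illustration, here is a minimal sketch of the 5-second test script rewritten as a persistent FastCGI loop, assuming the CGI::Fast module and a FastCGI-capable Apache setup (e.g. mod_fcgid); the interpreter stays resident and serves one request per loop iteration instead of being re-spawned every time:

#!/usr/bin/perl
use strict;
use warnings;
use CGI::Fast;

# One long-lived process; each loop iteration handles one incoming request.
while (my $q = CGI::Fast->new) {
    my $content = "5 second delay...\n";
    print $q->header('text/plain'), $content;
    sleep 5;
}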
