How can I spawn a long-running process in a Perl CGI script?
I'm writing a Perl CGI script right now but it's becoming a resource hog and it keeps getting killed by my web host because I keep hitting my process memory limit. I was wondering if there is a way I can split the script I have into multiple scripts and then have the first script call the next script then exit so the entire script isn't in memory at once. I saw there is an exporter module but I don't know how to use it yet as I'm just learning Perl, and I don't think that will solve my memory problem but I might be wrong.
3 Answers
See Watching long processes through CGI.
On the other hand, just managing memory better might also solve your problem. For example, if you are reading entire files into memory at once, try to write the script so that it handles data line by line or in fixed-size chunks. Declare your variables in the smallest possible scope.
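For instance, here is a minimal sketch of the line-by-line pattern; the file name access.log and the /ERROR/ match are placeholders for illustration:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Bad: my @lines = <$fh>;  -- holds the entire file in memory at once.
# Good: read one line per iteration; only the current line is in memory.
my $count = 0;
open my $fh, '<', 'access.log' or die "Cannot open access.log: $!";
while (my $line = <$fh>) {
    chomp $line;
    $count++ if $line =~ /ERROR/;   # do the per-line work here
}
close $fh;

print "Found $count error lines\n";
```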
Try to identify what part of your script is creating the largest memory footprint and post the relevant excerpt in a separate question for more memory management suggestions.
If applicable, move the computation/generation offline:
Create a daemon or a scheduled job that builds a static version of the results; the daemon can regenerate the results on events (e.g. when files are modified) or at set intervals.
If you generate the page depending on client input, look into separating the logic so that at least parts of the application can be cached.
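As a rough sketch of the scheduled-job idea, something like the following could run from cron and publish a static page; the output path and the report contents are assumptions for illustration:

```perl
#!/usr/bin/perl
# Run from cron, e.g.:  */15 * * * * /path/to/build_report.pl
use strict;
use warnings;

my $output = '/var/www/html/report.html';   # hypothetical docroot path
my $tmp    = "$output.tmp";

open my $out, '>', $tmp or die "Cannot write $tmp: $!";
print $out "<html><body><h1>Report</h1><p>Generated at ",
           scalar localtime, "</p></body></html>\n";
close $out;

# rename() is atomic, so the web server never serves a half-written file.
rename $tmp, $output or die "Cannot rename $tmp: $!";
```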
Side note: unless CGI suits your needs, I'd move away from it altogether and look into mod_perl or fastcgi, where you have persistent Perl processes handling requests, which saves the overhead of forking a new Perl interpreter, loading modules, and so on.
Yes, you can start another Perl script from a Perl script and then exit the calling script; see fork:
http://perldoc.perl.org/functions/fork.html
Example Code:
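A minimal sketch, assuming a hypothetical long-running worker at /path/to/long_job.pl; the parent returns a response and exits while the detached child keeps working:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use POSIX 'setsid';

# Send a response to the user right away.
print "Content-type: text/html\n\n";
print "<p>Job started; results will appear when ready.</p>\n";

defined(my $pid = fork()) or die "fork failed: $!";

if ($pid == 0) {
    # Child: detach from the web server so finishing the request
    # does not kill the long-running work.
    setsid();
    close STDIN;
    close STDOUT;
    close STDERR;
    exec('/path/to/long_job.pl') or exit 1;   # hypothetical worker script
}

# Parent: exit immediately so the CGI request completes.
exit 0;
```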
But this won't work if you want the second script to be able to use CGI to communicate with your web server/user. If you are running the Perl script as a CGI, it has to return the result to the user.
So you have two ways of dealing with this problem:
Try to find out why you are using so much memory, and improve the script.
If there is really no way to reduce memory consumption, you can use a daemonized Perl script as a worker process that does the calculations and returns the results to your CGI script, which has to wait for the result before terminating. One way to wire this up is sketched below.
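A minimal sketch of the CGI side of that hand-off, assuming a shared spool directory (/tmp/jobs is a hypothetical path): the CGI script drops a request file and polls for the result while the daemon does the heavy work. The daemon side (not shown) would loop over the *.req files and write a matching .out file for each.

```perl
#!/usr/bin/perl
# CGI side: submit a job to the worker daemon and wait for the result.
use strict;
use warnings;

my $spool = '/tmp/jobs';          # hypothetical shared directory
my $id    = "$$-" . time;

# Hand the input to the worker.
open my $req, '>', "$spool/$id.req" or die "Cannot write request: $!";
print $req "some input\n";
close $req;

# Poll (with a timeout) until the worker writes the result file.
my $result_file = "$spool/$id.out";
for (1 .. 30) {
    last if -e $result_file;
    sleep 1;
}

print "Content-type: text/plain\n\n";
if (open my $res, '<', $result_file) {
    print while <$res>;
    close $res;
    unlink $result_file;
} else {
    print "Worker did not respond in time.\n";
}
```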