What is the preferred way to write Linux daemons?

Posted on 2024-10-10 17:07:17

Hello to All

I have a PHP website that should use some cached data (stored in Memcache, for example).
The data should be stored in the cache by daemons that fetch it from web services, and some of it should also be stored in a MySQL server.

The daemons should do the following:

  1. Fetch foreign exchange rates, parse them and store them in the database as well as in two separate Memcached instances on separate machines.
  2. Fetch financial indices and store them in separate Memcached instances.
  3. Fetch large XML data and store it in two separate Memcached instances.

I am capable of writing these daemons in C/C++/Perl/PHP/Python.

I have to decide which language I should choose to implement these daemons.
The advantage of using PHP for this is that I can reuse the API already used by the website application itself. Another advantage is that PHP is easy and everyone knows it, so I won't be the only one tied to maintaining these daemons. On the other hand, PHP is slower and consumes more resources.

The main disadvantage of using a language other than PHP is that code written in C/C++/Perl is harder to maintain. Nowadays, I guess it's not common to do these kinds of tasks in C/C++/Perl. Am I wrong in saying that?

What would you recommend in this case?
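
For concreteness, here is a minimal sketch of what task 1 could look like in PHP, assuming the Memcached extension and PDO are available. The feed URL, cache hosts, DSN, credentials and table layout are placeholders, not an existing API.

    <?php
    // Hypothetical sketch: fetch rates, store them in two Memcached
    // instances on separate machines and persist them to MySQL.
    $raw = file_get_contents('https://example.com/fx-rates.json'); // assumes allow_url_fopen
    $rates = json_decode($raw, true);
    if (!is_array($rates)) {
        exit(1); // fetch or parse failed; a real daemon would log and retry
    }

    // Two independent Memcached instances on separate machines.
    $cacheA = new Memcached();
    $cacheA->addServer('cache-a.internal', 11211);
    $cacheB = new Memcached();
    $cacheB->addServer('cache-b.internal', 11211);
    $cacheA->set('fx_rates', $rates, 300); // 5-minute TTL
    $cacheB->set('fx_rates', $rates, 300);

    // Persist to MySQL as well.
    $db = new PDO('mysql:host=db.internal;dbname=finance', 'user', 'pass');
    $stmt = $db->prepare('REPLACE INTO fx_rates (currency, rate) VALUES (?, ?)');
    foreach ($rates as $currency => $rate) {
        $stmt->execute(array($currency, $rate));
    }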

Comments (5)

秋意浓 2024-10-17 17:07:17

Perl and Python are the default answers for writing such scripts, but it doesn't matter (much) which language you use if you write good code. The more important thing is how you handle your script on failure.

In the long run you may find that your scripts fail only occasionally, for arbitrary reasons, and that it's not worth debugging them, because they usually do a fair job and it would be difficult to find where they went wrong.

I have a few Perl scripts doing the same kind of thing you are doing. For me the tricky part was making sure that my scripts don't stay down for long, because I didn't want to miss a chunk of live streamed data.

For that I used monit. A great tool.
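
To illustrate, a minimal monit stanza along these lines keeps the fetcher running; the process name, pid file and init script paths below are placeholders.

    check process rates-fetcher with pidfile /var/run/rates-fetcher.pid
        start program = "/etc/init.d/rates-fetcher start"
        stop program = "/etc/init.d/rates-fetcher stop"
        if 5 restarts within 5 cycles then timeout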

吃不饱 2024-10-17 17:07:17

The best choice would probably be PHP for simplicity/code reuse.

PEAR System Daemon
Create daemons in php
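
As a rough illustration of the plain-PHP route those links describe (not the PEAR System_Daemon API), a minimal daemonization sketch could look like this; it assumes the pcntl and posix extensions are enabled, and the paths are placeholders.

    <?php
    // Fork and let the parent exit so the child runs in the background.
    $pid = pcntl_fork();
    if ($pid < 0) {
        exit(1);                      // fork failed
    } elseif ($pid > 0) {
        exit(0);                      // parent exits, child keeps running
    }
    posix_setsid();                   // detach from the controlling terminal
    file_put_contents('/var/run/fetcher.pid', getmypid());

    while (true) {
        // ... fetch data, refresh Memcached, update MySQL ...
        error_log(date('c') . " cycle done\n", 3, '/var/log/fetcher.log');
        sleep(60);                    // pause before the next fetch cycle
    }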

EDIT
From what I can tell it's just passing data around, so there is no performance issue to worry about. As for resource usage, just make sure you don't run out of memory_limit (perhaps by streaming, or by configuring plenty of memory). Abort and log operations that take too long. Reconnect to the database in a loop when SQL operations fail, and so on.
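
A sketch of the "reconnect in a loop" idea, assuming PDO/MySQL; the DSN, credentials and retry limits are placeholders.

    <?php
    // Retry the connection with exponential backoff before giving up.
    function connectWithRetry($maxAttempts = 5)
    {
        for ($attempt = 1; $attempt <= $maxAttempts; $attempt++) {
            try {
                return new PDO(
                    'mysql:host=db.internal;dbname=finance', 'user', 'pass',
                    array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION)
                );
            } catch (PDOException $e) {
                error_log("DB connect failed (attempt $attempt): " . $e->getMessage());
                sleep(min(60, pow(2, $attempt))); // back off before retrying
            }
        }
        throw new RuntimeException('could not reconnect to MySQL');
    }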

NOTE OF CAUTION
Daemon programming is tricky and a lot of things can go wrong. Take into consideration all the points of failure.

Also, note that Perl is much better versed in daemons than PHP. I left out C/C++ because performance (just passing data around) is not an issue, and daemon programming is hard enough as it is; why add worries about memory leaks, segfaults, etc.?

阳光①夏 2024-10-17 17:07:17

The best practice is to use whatever technology you know the best. You will:

  • implement the solution faster
  • be better able to debug problems you run into
  • more easily evaluate libs (or even know about them) that can offload some of the work for you
  • have an easier time maintaining and extending the code

Realistically, speed and resource usage are going to be relatively unimportant unless you actually have real performance requirements.

猛虎独行 2024-10-17 17:07:17

Short:
I would use Python.

Longer:
I've tried PHP in CLI mode and experienced a lot of memory leaks, certainly because of bad PHP libraries, or libraries that were never designed for anything other than dying quickly in web-request mode (I'm suspicious of PDO, for example).

In the Python world I've recently seen portions of the code of Shinken, a nice rewrite of Nagios as Python daemons, very clever. See http://www.shinken-monitoring.org/the-global-architecture/ and http://www.shinken-monitoring.org/wiki/official/development-hackingcode . As it's a monitoring tool, you can certainly find some very good ideas there for daemons with repeating tasks.

Now, can I make a suggestion? Why not use Shinken or Centreon as the scheduler for the data-fetching tasks (and maybe soon Centreon with a Shinken engine instead of the Nagios engine, I hope)? This could be useful for detecting changes in external data, problems with fetches, etc.

Then, the tasks themselves (fetch data, transform data, store data, etc.) are the job of an ETL. One nice open-source tool is Talend ETL (Java). There are some scheduling and monitoring tools for Talend, but they are not open source (sort-of-open-source-where-you-must-pay-a-license). But adding an external scheduler like Nagios for the tasks should be easy (I hope). You'll need to check that Memcached is available as a storage engine for Talend ETL, or write your own plugin.

So, all this is to say that instead of the language, you should maybe think about the tools.
Or not, depending on how much complexity you can take on; each tool adds its own complexity. However, if you want to build everything from scratch, Python is fast and efficient.

凉世弥音 2024-10-17 17:07:17

You should use the same language that the rest of your application is written in. That way you can reuse code and developer skills more easily.

However, as others have noted, PHP is bad for long-running daemons because it handles memory in a way which is liable to leak.

So I would run these tasks as "cron" jobs which are periodically (re)started, but make sure you don't run more copies of a task than you intend.
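
For example, a crontab entry along these lines (the schedule and paths are placeholders) uses flock so that a run is simply skipped if the previous one is still holding the lock:

    */5 * * * * flock -n /var/lock/fetch_rates.lock /usr/bin/php /opt/daemons/fetch_rates.php >> /var/log/fetch_rates.log 2>&1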

Cron jobs are more robust than daemons.

  • A cron job which fails and quits will start again next time it is scheduled
  • A cron job which contains memory leaks will release its memory when it ends its run anyway
  • A cron job which has its software updated (libraries etc.) automatically picks up the new versions on the subsequent run, without any special effort.
  • "cron" already provides startup/shutdown scripts which your Ops team can use to control it, so you don't need to rewrite these. Your Ops team already know how to operate "cron", and know how to comment out crontab entries if they want to temporarily disable it.