C++ application crashes after a few hours

Posted 2024-11-06 13:37:35

I have an application written in C++ that uses OpenCV 2.0, curl, and the OpenSURF library. First, a PHP script (cron.php) calls proc_open to launch the C++ application (called icomparer). When it finishes processing N images it returns groups saying which images are the same; after that, the script uses:

shell_exec('php cron.php > /dev/null 2>&1 &');  
die;

And starts again. Well, after 800 or 900 iterations my icomparer starts breaking. The system won't let me create any more files, either in icomparer or in the PHP script:

proc_open(): unable to create pipe Too many open files (2)
shell_exec(): Unable to execute 'php cron.php > /dev/null 2>&1 &'

And curl fails too:

couldn't resolve host name (6)

Everything crashes. I think I'm doing something wrong; for example, I don't know whether starting another PHP process from a PHP process releases resources.

In "icomparer" I'm closing all opened files. Maybe I'm not releasing every mutex with mutex_destroy... but on each iteration the C++ application exits, so I'd think everything gets released, right?

What do I have to watch for? I have tried monitoring opened files with lsof.


  • PHP 5.2
  • CentOS 5.x
  • 1 GB RAM
  • 120 GB hard disk (4% used)
  • 4 x Intel Xeon
  • It's a VPS (the host machine has 16 GB RAM)
  • The process opens 10 threads and joins them.


4 Answers

懷念過去 2024-11-13 13:37:35

Sounds like you're leaking file descriptors.
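One way to confirm a descriptor leak (assuming the Linux /proc filesystem available on the CentOS box described above) is to count the parent process's open descriptors between iterations, for example from PHP itself:

```php
<?php
// Count this process's open file descriptors via /proc (Linux only).
// Run this at the top of each cron.php iteration: if the number
// grows steadily, the parent process is leaking descriptors.
$entries = scandir('/proc/' . getmypid() . '/fd');
$open = count($entries) - 2;  // scandir() includes "." and ".."
echo "open fds: ", $open, "\n";
```

The same check can be done from a shell with `ls /proc/<pid>/fd | wc -l` or `lsof -p <pid>`.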

静赏你的温柔 2024-11-13 13:37:35

On Unix-like systems, child processes inherit the open file descriptors of the parent. However, when the child process exits, it closes only its own copies of the open file descriptors, not the parent's copies.

So you are opening file descriptors in the parent and not closing them. My bet is that you are not closing the pipes returned by the proc_open() call.

You'll also need to call proc_close().
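A minimal sketch of that fix, with a placeholder command standing in for the real icomparer invocation: every pipe returned by proc_open() is fclose()d, and the process handle is reaped with proc_close().

```php
<?php
// Spawn a child with explicit stdin/stdout/stderr pipes.
$spec = [
    0 => ['pipe', 'r'],  // child's stdin
    1 => ['pipe', 'w'],  // child's stdout
    2 => ['pipe', 'w'],  // child's stderr
];
// 'echo done' stands in for the real ./icomparer call.
$proc = proc_open('echo done', $spec, $pipes);

$output = '';
$status = -1;
if (is_resource($proc)) {
    $output = stream_get_contents($pipes[1]);

    // Close every pipe the parent holds; without this, each
    // iteration leaks descriptors until the parent hits
    // "Too many open files".
    foreach ($pipes as $p) {
        fclose($p);
    }

    // Reap the child and free the process resource.
    $status = proc_close($proc);
}
```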

云朵有点甜 2024-11-13 13:37:35

Yeah, it looks like you're opening processes but not closing them after use, and, as it seems, they are not closed automatically (which may work in some circumstances).

Make sure you close/terminate your process with proc_close($res) if you don't use the resource anymore.

长梦不多时 2024-11-13 13:37:35

Your application doesn't close its files/sockets. You can try using ulimit; with it you can adjust the limit on the number of open files allowed per process. Have a look: man ulimit
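For reference, the limit in question can be read from PHP via the shell (ulimit is a shell builtin, so shell_exec() runs it through sh -c). Raising it, e.g. `ulimit -n 4096` before launching cron.php, only buys headroom until the leak itself is fixed:

```php
<?php
// Read the current per-process open-file limit. ulimit is a shell
// builtin, so it has to run through a shell rather than exec().
$limit = (int) trim(shell_exec('ulimit -n'));
echo "open file limit: ", $limit, "\n";
```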
