Linux open files limit
We are facing a situation where a process gets stuck after running into the open files limit. The global setting file-max is set extremely high (in sysctl.conf), and the per-user value is also set high in /etc/security/limits.conf. Even ulimit -n reflects the per-user value when run as that headless user (the process owner). So the question is: does this change require a system reboot (my understanding is that it doesn't)? Has anyone faced a similar problem? I am running Ubuntu Lucid and the application is a Java process. The ephemeral port range is also high enough, and when checked during the issue, the process had 1024 files open (note that this is the default limit), as reported by lsof.
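For reference, a minimal sketch of how to compare the limits the running process actually sees with the configured values (12345 stands in for the Java process's PID):

    # Global maximum configured via fs.file-max in sysctl.conf
    cat /proc/sys/fs/file-max

    # Limits actually in effect for the already-running process
    # (12345 is a placeholder PID); check the "Max open files" row
    cat /proc/12345/limits

    # Number of file descriptors the process currently has open
    ls /proc/12345/fd | wc -l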
One problem you might run into is that the fd_set used by select is limited to FD_SETSIZE, which is fixed at compile time (in this case, when the JRE was compiled), and that limit is 1024. Luckily, both the C library and the kernel can handle arbitrarily sized fd_sets, so for a compiled C program it is possible to raise that limit.
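If you want to see where that 1024 comes from on a glibc system, one way (a sketch assuming gcc is installed) is to expand the macro through the preprocessor:

    # Print the FD_SETSIZE baked into <sys/select.h>; on glibc this is
    # normally 1024, the same constant a JRE built against it would use
    echo FD_SETSIZE | gcc -E -P -x c -include sys/select.h - | tail -n 1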
Considering you have edited the file-max value in sysctl.conf and the per-user limits in /etc/security/limits.conf correctly, the change then needs to be applied and verified (see the sketch below). Note that you may need to log out and back in again before the changes take effect.
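A minimal sketch of applying and checking the new limits, assuming the standard sysctl and pam_limits mechanisms on Ubuntu:

    # Reload fs.file-max from /etc/sysctl.conf (no reboot needed)
    sudo sysctl -p

    # Log out and back in as the process owner so pam_limits re-reads
    # /etc/security/limits.conf, then confirm the soft and hard limits
    ulimit -Sn
    ulimit -Hn

Because a process inherits its limits at startup, the Java process also has to be restarted from that fresh session before the raised limit applies to it.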