C: Too many open files using opendir and open
I am reading about 6000 text files into memory with the following code in a loop:
void readDocs(const char *dir, char **array){
    DIR *dp = opendir(dir);
    struct dirent *ep;
    struct stat st;
    static uint count = 0;
    if (dp != NULL){
        while ((ep = readdir(dp))){ // crawl through directory
            char name[strlen(dir) + strlen(ep->d_name) + 2];
            sprintf(name, "%s/%s", dir, ep->d_name);
            if(ep->d_type == DT_REG){ // regular file
                stat(name, &st);
                array[count] = (char*) malloc(st.st_size);
                int f;
                if((f = open(name, O_RDONLY)) < 0) perror("open: ");
                read(f, array[count], st.st_size);
                if(close(f) < 0) perror("close: ");
                ++count;
            }
            else if(ep->d_type == DT_DIR && strcmp(ep->d_name, "..") && strcmp(ep->d_name, "."))
                // go recursive through sub directories
                readDocs(name, array);
        }
    }
}
In iteration 2826 I get a "Too many open files" error when opening the 2826th file.
No error occurred in the close operations up to this point.
Since it always fails in the 2826th iteration, I do not believe I should have to wait for a file to really be closed after calling close().
I had the same issue using fopen, fread and fclose.
I don't think it has to do with the context of this snippet, but if you do, I will provide it.
Thanks for your time!
EDIT:
I put the program to sleep and checked /proc/&lt;pid&gt;/fd/ (thanks to nos). As you suspected, there were exactly 1024 open file descriptors, which I found is a common default limit.
+ I added the whole function, which reads documents out of a directory and all of its subdirectories
+ the program runs on Linux! Sorry for forgetting that!
Comments (4)
You need to call closedir() after the loop. Opening a directory also consumes a file descriptor.
You may be hitting the OS limit on the number of open files allowed. Not knowing which OS you are using, you should search for your OS + "too many open files" to find out how to fix this. Here is one result for Linux: http://lj4newbies.blogspot.com/2007/04/too-many-open-files.html
I solved the problem by adding the following to /etc/security/limits.conf:
* soft nofile 40960
* hard nofile 102400
The remaining problem was that when logging in to Debian it showed ulimit -n 40960, but after su to a user it was 1024 again. You need to uncomment one line in /etc/pam.d/su:
session required pam_limits.so
Then the limits always apply.
You should call closedir(), since opendir() also returns a descriptor.
On a Linux system, /proc/sys/fs/file-max holds the maximum number of files that can be open system-wide, although you can increase/decrease this number.