Reducing the number of open files in Java code
Hi, I have some code that uses this block:
RandomAccessFile file = new RandomAccessFile("some file", "rw");
FileChannel channel = file.getChannel();
// some code
String line = "some data";
ByteBuffer buf = ByteBuffer.wrap(line.getBytes());
channel.write(buf);
channel.close();
file.close();
but the specifics of the application are that I have to generate a large number of temporary files, more than 4000 on average (used for Hive inserts into a partitioned table).
The problem is that sometimes I catch the exception
Failed with exception Too many open files
while the app is running.
I wonder if there is any way to tell the OS that a file is already closed and no longer used, and why
channel.close();
file.close();
do not reduce the number of open files. Is there any way to do this in Java code?
I have already increased the maximum number of open files in
#/etc/sysctl.conf:
kern.maxfiles=204800
kern.maxfilesperproc=200000
kern.ipc.somaxconn=8096
Update:
I tried to narrow down the problem, so I split the code into parts to investigate each of them (creating files, uploading to Hive, deleting files).
Using the class 'File' or 'RandomAccessFile' fails with the "Too many open files" exception.
Finally I used this code:
FileOutputStream s = null;
FileChannel c = null;
try {
    s = new FileOutputStream(filePath);
    c = s.getChannel();
    // do writes
    c.write(ByteBuffer.wrap("some data".getBytes()));
    c.force(true);
    s.getFD().sync();
} catch (IOException e) {
    // handle exception
} finally {
    if (c != null)
        c.close();
    if (s != null)
        s.close();
}
And this works with a large number of files (tested on 20K files of 5KB each). The code itself does not throw the exception, unlike the previous two classes.
But the production code (with Hive) still hit the exception, and it appears that the Hive connection through JDBC is the reason for it.
I will investigate further.
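Since the remaining leak seems to come from the Hive JDBC path, one thing worth checking is that every JDBC Statement and Connection is closed deterministically: each open connection holds a socket, which counts against the same per-process descriptor limit as the files. A minimal sketch of the pattern, assuming Java 7+ try-with-resources (the URL, SQL, and class name here are placeholders, not the actual production code):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

public class HiveInsertSketch {
    // Hypothetical helper: runs one insert and releases the socket-backed
    // Connection and Statement even if execute() throws.
    static void runInsert(String jdbcUrl, String sql) throws SQLException {
        try (Connection conn = DriverManager.getConnection(jdbcUrl);
             Statement stmt = conn.createStatement()) {
            stmt.execute(sql);
        }
    }
}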
The number of open file handles allowed by the OS as a whole is not the same thing as the number of file handles that can be opened by a single process. Most Unix systems restrict the number of file handles per process; most likely it is something like 1024 file handles for your JVM.
a) You need to set the ulimit in the shell that launches the JVM to a higher value (something like 'ulimit -n 4000').
b) You should verify that you don't have any resource leaks that are preventing your files from being 'finalized'.
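One way to check (b) on a Unix JVM is to sample the process's open-descriptor count while the app runs. This sketch relies on the com.sun.management extension being available (true on HotSpot-style JVMs, but an assumption nonetheless):

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdMonitor {
    // Prints how many descriptors the JVM process holds open and the
    // per-process limit. Call it periodically (e.g. every few hundred
    // files) to see whether the count keeps growing despite close().
    public static void printFdUsage() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
            System.out.println("open fds: " + unix.getOpenFileDescriptorCount()
                    + " / max: " + unix.getMaxFileDescriptorCount());
        }
    }
}

If the count climbs steadily even though every file write is followed by close(), the leak is elsewhere (the JDBC connections mentioned in the update being a likely candidate).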
Make sure to use a finally {} block. If there is an exception for some reason, the close will never happen in the code as written.
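On Java 7 and later the same guarantee is available with try-with-resources, which closes both resources (in reverse order) even when the write throws. A sketch along the lines of the question's first snippet, with a placeholder path and data:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class SafeWrite {
    static void writeLine(String path, String line) throws IOException {
        // Both resources are closed automatically when the block exits,
        // whether normally or via an exception.
        try (RandomAccessFile file = new RandomAccessFile(path, "rw");
             FileChannel channel = file.getChannel()) {
            channel.write(ByteBuffer.wrap(line.getBytes()));
        }
    }
}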
Is this the exact code? Because I can think of one scenario where you might be opening all the files in a loop and have written the code to close all of them at the end, which is causing this problem. Please post the full code.
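For illustration only (this loop is hypothetical, since the full code was not posted): the fix for that scenario is to open and close each file within a single loop iteration, so at most one descriptor is held at a time instead of all 4000+:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.List;

public class PerIterationClose {
    // Hypothetical loop: tempFilePaths stands in for however the app
    // enumerates its temporary files. Each file is opened, written, and
    // closed before the next one is touched.
    static void writeAll(List<String> tempFilePaths) throws IOException {
        for (String path : tempFilePaths) {
            try (RandomAccessFile file = new RandomAccessFile(path, "rw");
                 FileChannel channel = file.getChannel()) {
                channel.write(ByteBuffer.wrap("some data".getBytes()));
            }
        }
    }
}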