Java NIO and FileInputStream

Posted 2024-11-07 14:01:12


Basically I have this code to decompress some strings stored in a file:

import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.zip.GZIPInputStream;

// STRING_SIZE, BUFFER_SIZE and cbuf are fields of the enclosing class (not shown).
public static String decompressRawText(File inFile) {
    InputStream in = null;
    InputStreamReader isr = null;
    StringBuilder sb = new StringBuilder(STRING_SIZE);
    try {
        in = new FileInputStream(inFile);
        in = new BufferedInputStream(in, BUFFER_SIZE);
        in = new GZIPInputStream(in, BUFFER_SIZE);
        isr = new InputStreamReader(in);
        int length = 0;
        while ((length = isr.read(cbuf)) != -1) {
            sb.append(cbuf, 0, length);
        }
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        try {
            in.close();
        } catch (Exception e1) {
            e1.printStackTrace();
        }
    }
    return sb.toString();
}

Since physical I/O is quite time-consuming, and since my compressed versions of the files are all quite small (around 2K from 2M of text), is it possible for me to still do the above, but on a file that is already mapped into memory, possibly using Java NIO? Thanks
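For reference, a minimal sketch of what the question is describing: mapping the compressed file with java.nio and decompressing from the mapped buffer. The method name and the copy of the mapped bytes into a byte[] are illustrative choices, not from the original code, and error handling is reduced to keep the sketch short.

import java.io.ByteArrayInputStream;
import java.io.File;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.zip.GZIPInputStream;

public static String decompressMapped(File inFile) throws Exception {
    RandomAccessFile raf = new RandomAccessFile(inFile, "r");
    try {
        FileChannel channel = raf.getChannel();
        // Map the whole (small, ~2K) compressed file into memory.
        MappedByteBuffer mapped = channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());

        // GZIPInputStream works on InputStreams, so expose the mapped bytes as one.
        // For files this small, copying into a byte[] is the simplest bridge.
        byte[] compressed = new byte[mapped.remaining()];
        mapped.get(compressed);

        InputStream in = new GZIPInputStream(new ByteArrayInputStream(compressed));
        InputStreamReader isr = new InputStreamReader(in);
        StringBuilder sb = new StringBuilder();
        char[] cbuf = new char[8192];
        int length;
        while ((length = isr.read(cbuf)) != -1) {
            sb.append(cbuf, 0, length);
        }
        isr.close();
        return sb.toString();
    } finally {
        raf.close(); // also closes the channel; the mapping itself is released by the GC
    }
}

As the answer below points out, mapping mainly saves some data copying; the disk I/O still has to happen.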


Comments (1)

画尸师 2024-11-14 14:01:12


It won't make any difference, at least not much. Mapped files are about 20% faster in I/O last time I looked. You still have to actually do the I/O: mapping just saves some data copying. I would look at increasing BUFFER_SIZE to at least 32k. Also the size of cbuf, which should be a local variable in this method, not a member variable, so it will be thread-safe. It might be worth not compressing the files under a certain size threshold, say 10k.

Also you should be closing isr here, not in.

It might be worth trying another BufferedInputStream on top of the GZIPInputStream, in addition to the one underneath it, to get it to do more at once.
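For illustration, here is a rough rework of the question's method with those suggestions applied: a 32k buffer size, cbuf as a local variable, isr closed instead of in, and an extra BufferedInputStream above the GZIPInputStream. It keeps the names from the question where they exist; everything else is a sketch, not the answerer's exact code.

import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.zip.GZIPInputStream;

public static String decompressRawText(File inFile) throws IOException {
    final int BUFFER_SIZE = 32 * 1024;                 // suggested: at least 32k
    InputStreamReader isr = null;
    StringBuilder sb = new StringBuilder();
    try {
        InputStream in = new FileInputStream(inFile);
        in = new BufferedInputStream(in, BUFFER_SIZE); // buffer below the GZIPInputStream
        in = new GZIPInputStream(in, BUFFER_SIZE);
        in = new BufferedInputStream(in, BUFFER_SIZE); // and another one on top of it
        isr = new InputStreamReader(in);
        char[] cbuf = new char[BUFFER_SIZE];           // local, so the method is thread-safe
        int length;
        while ((length = isr.read(cbuf)) != -1) {
            sb.append(cbuf, 0, length);
        }
    } finally {
        if (isr != null) {
            isr.close();                               // closing isr also closes the wrapped streams
        }
    }
    return sb.toString();
}

Closing the outermost Reader closes the whole stream chain underneath it, which is why only isr needs to be closed here.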
