FileInputStream, unexpected behavior in Java

I am in the process of writing an application that processes a huge number of integers from a binary file (up to 50 MB). I need to do this as quickly as possible, and the main performance issue is disk access time: since I make a large number of reads from the disk, optimizing read time improves the performance of the app in general.
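
For context, once a chunk of bytes is in memory, one cheap way to decode the integers is an IntBuffer view over the array. A minimal sketch, assuming big-endian 4-byte ints; the class and variable names are illustrative, not from the original app:

    import java.nio.ByteBuffer;
    import java.nio.IntBuffer;

    public class IntDecodeSketch
    {
        public static void main(String[] args)
        {
            // Pretend these 8 bytes were read from the binary file
            // (big-endian encodings of 1 and 2).
            byte[] raw = {0, 0, 0, 1, 0, 0, 0, 2};

            // asIntBuffer() decodes 4 bytes per int without another copy.
            IntBuffer ints = ByteBuffer.wrap(raw).asIntBuffer();
            while (ints.hasRemaining())
            {
                System.out.println(ints.get()); // prints 1, then 2
            }
        }
    }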

Up until now I thought that the fewer blocks I split my file into (i.e. the fewer reads I make / the larger each read is), the faster my app would work. This is because an HDD is very slow at seeking (i.e. locating the beginning of a block) due to its mechanical nature. However, once it has located the beginning of the block you asked it to read, it should perform the actual read fairly quickly.

Well, that was up until I ran this test:

Old test removed; it had issues due to HDD caching.

NEW TEST (the HDD cache doesn't help here, since the file is too big (1 GB) and I access random locations within it):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.Random;

    public class BlockReadTest
    {
        public static void main(String[] args) throws IOException
        {
            int mega = 1024 * 1024;
            int giga = 1024 * 1024 * 1024;
            byte[] bigBlock = new byte[mega];
            // mega / 10 truncates to 104,857 bytes, so ten small reads cover
            // 1,048,570 bytes, 6 bytes less than the single big read.
            int hundredKilo = mega / 10;
            byte[][] smallBlocks = new byte[10][hundredKilo];
            String location = "C:\\Users\\Vladimir\\Downloads\\boom.avi";
            Random rand = new Random();
            long bigBufferTotalReadTime = 0;
            long smallBufferTotalReadTime = 0;

            for (int j = 0; j < 100; j++)
            {
                // Keep a full block readable; assumes the file is at least giga bytes long.
                int position = rand.nextInt(giga - mega);
                RandomAccessFile raf = new RandomAccessFile(location, "r");
                raf.seek(position);
                FileInputStream f = new FileInputStream(raf.getFD());
                long start = System.currentTimeMillis();
                f.read(bigBlock); // one 1 MB read; return value ignored, so a short read would go unnoticed
                long end = System.currentTimeMillis();
                bigBufferTotalReadTime += end - start;
                f.close();
                raf.close();
            }

            for (int j = 0; j < 100; j++)
            {
                int position = rand.nextInt(giga - mega);
                RandomAccessFile raf = new RandomAccessFile(location, "r");
                raf.seek(position);
                FileInputStream f = new FileInputStream(raf.getFD());
                long start = System.currentTimeMillis();
                for (int i = 0; i < 10; i++)
                {
                    f.read(smallBlocks[i]); // ten ~100 KB reads
                }
                long end = System.currentTimeMillis();
                smallBufferTotalReadTime += end - start;
                f.close();
                raf.close();
            }

            System.out.println("Average performance of small buffer: " + (smallBufferTotalReadTime / 100));
            System.out.println("Average performance of big buffer: " + (bigBufferTotalReadTime / 100));
        }
    }

RESULTS:
Average for small buffer - 35ms
Average for large buffer - 40ms ?!
(Tried on Linux and Windows; in both cases the larger block size results in a longer read time. Why?)

After running this test many times, I realised that for some magical reason reading one big block takes longer on average than reading 10 smaller blocks sequentially. I thought that it might be a result of Windows being too smart and trying to optimize something in its file system, so I ran the same code on Linux, and to my surprise I got the same result.
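
One way to reduce ordering and caching bias in a test like this is to reuse one offset for both cases and alternate which case runs first, so any caching of the first read biases both sides equally. A minimal sketch, assuming the same large test file; the class and method names are made up:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.Random;

    public class InterleavedReadTest
    {
        static final int MEGA = 1024 * 1024;

        // Time reading the given buffers back to back, starting at position.
        static long timeRead(String location, long position, byte[][] blocks) throws IOException
        {
            try (RandomAccessFile raf = new RandomAccessFile(location, "r");
                 FileInputStream in = new FileInputStream(raf.getFD()))
            {
                raf.seek(position);
                long start = System.nanoTime();
                for (byte[] block : blocks)
                {
                    int off = 0;
                    while (off < block.length)
                    {
                        // read() may return fewer bytes than asked for; loop until full.
                        int n = in.read(block, off, block.length - off);
                        if (n < 0) break; // EOF
                        off += n;
                    }
                }
                return System.nanoTime() - start;
            }
        }

        public static void main(String[] args) throws IOException
        {
            String location = args[0]; // path to a large test file
            long fileSize;
            try (RandomAccessFile raf = new RandomAccessFile(location, "r"))
            {
                fileSize = raf.length();
            }
            byte[][] one = { new byte[MEGA] };
            byte[][] ten = new byte[10][MEGA / 10];
            Random rand = new Random();
            long bigTotal = 0, smallTotal = 0;

            for (int j = 0; j < 100; j++)
            {
                // Pick an offset that leaves a full MEGA readable.
                long position = (long) (rand.nextDouble() * (fileSize - MEGA));
                // Alternate the order so caching of the first read biases both cases equally.
                if (j % 2 == 0)
                {
                    bigTotal += timeRead(location, position, one);
                    smallTotal += timeRead(location, position, ten);
                }
                else
                {
                    smallTotal += timeRead(location, position, ten);
                    bigTotal += timeRead(location, position, one);
                }
            }
            System.out.println("big avg ms:   " + bigTotal / 100 / 1_000_000);
            System.out.println("small avg ms: " + smallTotal / 100 / 1_000_000);
        }
    }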

I have no clue as to why this is happening. Could anyone please give me a hint? Also, what would be the best block size in this case?

Kind Regards

Comments (2)

微凉徒眸意 2024-12-01 22:36:43

After you read the data the first time, the data will be in disk cache. The second read should be much faster. You need to run the test you think is faster first. ;)

If you have 50 MB of memory, you should be able to read the entire file at once.
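
A minimal sketch of that suggestion, using Files.readAllBytes (available since Java 7); the path below is a placeholder:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;

    public class ReadWholeFile {
        public static void main(String[] args) throws IOException {
            Path path = Paths.get("data.bin"); // placeholder for the 50 MB input file
            byte[] all = Files.readAllBytes(path); // one allocation, file read in full
            System.out.println("Read " + all.length + " bytes");
        }
    }

The benchmark below then compares block sizes for plain stream IO and NIO: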


package com.google.code.java.core.files;

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class FileReadingMain {
    public static void main(String... args) throws IOException {
        File temp = File.createTempFile("deleteme", "zeros");
        FileOutputStream fos = new FileOutputStream(temp);
        fos.write(new byte[50 * 1024 * 1024]);
        fos.close();

        for (int i = 0; i < 3; i++)
            for (int blockSize = 1024 * 1024; blockSize >= 512; blockSize /= 2) {
                readFileNIO(temp, blockSize);
                readFile(temp, blockSize);
            }
    }

    private static void readFile(File temp, int blockSize) throws IOException {
        long start = System.nanoTime();
        byte[] bytes = new byte[blockSize];
        int r;
        for (r = 0; System.nanoTime() - start < 2e9; r++) {
            FileInputStream fis = new FileInputStream(temp);
            while (fis.read(bytes) > 0) ;
            fis.close();
        }
        long time = System.nanoTime() - start;
        System.out.printf("IO: Reading took %.3f ms using %,d byte blocks%n", time / r / 1e6, blockSize);
    }

    private static void readFileNIO(File temp, int blockSize) throws IOException {
        long start = System.nanoTime();
        ByteBuffer bytes = ByteBuffer.allocateDirect(blockSize);
        int r;
        for (r = 0; System.nanoTime() - start < 2e9; r++) {
            FileChannel fc = new FileInputStream(temp).getChannel();
            while (fc.read(bytes) > 0) {
                bytes.clear();
            }
            fc.close();
        }
        long time = System.nanoTime() - start;
        System.out.printf("NIO: Reading took %.3f ms using %,d byte blocks%n", time / r / 1e6, blockSize);
    }
}

On my laptop this prints:

NIO: Reading took 57.255 ms using 1,048,576 byte blocks
IO: Reading took 112.943 ms using 1,048,576 byte blocks
NIO: Reading took 48.860 ms using 524,288 byte blocks
IO: Reading took 78.002 ms using 524,288 byte blocks
NIO: Reading took 41.474 ms using 262,144 byte blocks
IO: Reading took 61.744 ms using 262,144 byte blocks
NIO: Reading took 41.336 ms using 131,072 byte blocks
IO: Reading took 56.264 ms using 131,072 byte blocks
NIO: Reading took 42.184 ms using 65,536 byte blocks
IO: Reading took 64.700 ms using 65,536 byte blocks
NIO: Reading took 41.595 ms using 32,768 byte blocks <= fastest for NIO
IO: Reading took 49.385 ms using 32,768 byte blocks <= fastest for IO
NIO: Reading took 49.676 ms using 16,384 byte blocks
IO: Reading took 59.731 ms using 16,384 byte blocks
NIO: Reading took 55.596 ms using 8,192 byte blocks
IO: Reading took 74.191 ms using 8,192 byte blocks
NIO: Reading took 77.148 ms using 4,096 byte blocks
IO: Reading took 84.943 ms using 4,096 byte blocks
NIO: Reading took 104.242 ms using 2,048 byte blocks
IO: Reading took 112.768 ms using 2,048 byte blocks
NIO: Reading took 177.214 ms using 1,024 byte blocks
IO: Reading took 185.006 ms using 1,024 byte blocks
NIO: Reading took 303.164 ms using 512 byte blocks
IO: Reading took 316.487 ms using 512 byte blocks

It appears that the optimal read size may be 32 KB. Note: since the file is entirely in the disk cache, this may not be the optimal size for a file that is actually read from disk.
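
If the roughly 32 KB sweet spot holds on the target machine (worth re-measuring rather than trusting one laptop), one low-effort way to apply it is a BufferedInputStream with an explicit buffer size. A sketch, with a placeholder file name:

    import java.io.BufferedInputStream;
    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.FileInputStream;
    import java.io.IOException;

    public class BufferedIntReader {
        public static void main(String[] args) throws IOException {
            long sum = 0;
            // 32 KB buffer matching the measurement above; re-tune per machine.
            try (DataInputStream in = new DataInputStream(
                    new BufferedInputStream(new FileInputStream("data.bin"), 32 * 1024))) {
                while (true) {
                    sum += in.readInt(); // served from the buffer, not a syscall per int
                }
            } catch (EOFException endOfFile) {
                // readInt() signals end of stream with EOFException
            }
            System.out.println("sum = " + sum);
        }
    }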

老旧海报 2024-12-01 22:36:43

As noted, your test is hopelessly compromised by reading the same data for each run.

I could spew on, but you'll probably get more out of reading this article, then looking at this example of how to use FileChannel.
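
For reference, a minimal FileChannel read loop looks roughly like this (the file name and buffer size are illustrative, not taken from the linked example):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class FileChannelSketch {
        public static void main(String[] args) throws IOException {
            try (FileChannel fc = FileChannel.open(Paths.get("data.bin"), StandardOpenOption.READ)) {
                ByteBuffer buf = ByteBuffer.allocateDirect(32 * 1024); // direct buffer skips one copy
                long total = 0;
                while (fc.read(buf) > 0) {
                    buf.flip();               // switch to draining mode
                    total += buf.remaining(); // a real app would consume the bytes here
                    buf.clear();              // ready for the next read
                }
                System.out.println("Read " + total + " bytes");
            }
        }
    }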
