Multidimensional byte arrays and LinkedHashMap... is there a better way?

Posted 2024-10-08 16:43:25


I'm VERY new to Java Programming, so please forgive my newbish questions :).

I am using a LinkedHashMap as a file cache for an application I am modifying for the sake of assisting development. I am doing this to reduce I/O overhead, and hence improve performance. The problem with this is that overhead is introduced in many other ways.

The relevant source looks like this.


// Retrieve Data From LinkedHashMap  
byte grid[][][] = null;  
if(file.exists())
{  
    if (!cache.containsKey(file))  
    {
        FileInputStream fis = new FileInputStream(file);
        BufferedInputStream bis = new BufferedInputStream(fis, 16384);
        ObjectInputStream ois = new ObjectInputStream(bis);
        cache.put(file, ois.readObject());
        ois.close();
    }
    grid = (byte[][][]) cache.get(file);
} else {
    grid = new byte[8][8][];
}

The following is what I use to save data. The method to load the data is the exact opposite.


ByteArrayOutputStream baos = new ByteArrayOutputStream();
GZIPOutputStream gos = new GZIPOutputStream(baos){{    def.setLevel(2);}};
BufferedOutputStream bos = new BufferedOutputStream(gos, 16384);
DataOutputStream dos = new DataOutputStream(bos);
// Some code writes to dos
dos.close();
grid[cx][cz] = baos.toByteArray();
baos.close();
cache.put(file, grid);

And here is the declaration for the cache.


private static LinkedHashMap<File, Object> cache = new LinkedHashMap<File, Object>(64, 1.1f, true)
{
    protected boolean removeEldestEntry(Map.Entry<File, Object> eldest)
    {
        return size() > 64;
    }
};
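(As an aside, not part of the original post: the three-argument constructor with accessOrder = true, combined with removeEldestEntry, is exactly what turns a LinkedHashMap into an LRU cache. A tiny standalone demo with a capacity of 2 and made-up string keys shows the eviction behavior:)

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruDemo {
    // Build a 2-entry LRU cache and exercise it.
    public static Map<String, Integer> demo() {
        Map<String, Integer> cache = new LinkedHashMap<String, Integer>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Integer> eldest) {
                return size() > 2; // evict once we exceed two entries
            }
        };
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");    // touch "a", so "b" becomes the eldest entry
        cache.put("c", 3); // exceeds capacity: "b" is evicted
        return cache;
    }

    public static void main(String[] args) {
        System.out.println(demo().keySet()); // prints [a, c]
    }
}
```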

Since I'm very unfamiliar with Java stream etiquette, it's very likely that the above code looks sloppy. I am also sure that there are more efficient ways to do the above, such as where to put the buffer.
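(On the stream-etiquette point, one hedged suggestion not from the original post: try-with-resources, available since Java 7, closes the whole stream chain even when an exception is thrown mid-read, and the buffer sits between the file stream and the object stream just as in the code above. A sketch of the load path; `GridLoader` and `loadGrid` are hypothetical names:)

```java
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.util.Map;

public class GridLoader {
    // Hypothetical helper: return the cached grid for a file,
    // deserializing and caching it on a miss.
    public static byte[][][] loadGrid(File file, Map<File, Object> cache)
            throws IOException, ClassNotFoundException {
        if (!file.exists()) {
            return new byte[8][8][];
        }
        if (!cache.containsKey(file)) {
            try (ObjectInputStream ois = new ObjectInputStream(
                    new BufferedInputStream(new FileInputStream(file), 16384))) {
                cache.put(file, ois.readObject());
            } // ois (and the streams it wraps) are closed here, even on error
        }
        return (byte[][][]) cache.get(file);
    }
}
```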

Anyway, my primary problem is this: whenever I need to do anything to a single chunk, I have to convert all the grid data to an object, send it to the cache, and write the file. This is a very inefficient way of doing things. I would like to know if there is a better way, so that I don't have to get() the entire byte[8][8][] array when I only need access to that one chunk. I'd love to do something like chunk = cache.get[cx][cz], but I'm sure it isn't that simple.
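(Purely as an illustration of one possible direction, none of which is in the original post: keying the cache on (file, cx, cz) instead of on the whole file would give per-chunk get/put. `ChunkCache` and `ChunkKey` are made-up names:)

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Objects;

public class ChunkCache {
    // Immutable composite key: one cache entry per chunk instead of per file.
    static final class ChunkKey {
        final String file;
        final int cx, cz;
        ChunkKey(String file, int cx, int cz) {
            this.file = file; this.cx = cx; this.cz = cz;
        }
        @Override public boolean equals(Object o) {
            if (!(o instanceof ChunkKey)) return false;
            ChunkKey k = (ChunkKey) o;
            return cx == k.cx && cz == k.cz && file.equals(k.file);
        }
        @Override public int hashCode() { return Objects.hash(file, cx, cz); }
    }

    // LRU-evicting map of individual chunks, capped at 64 entries.
    private final Map<ChunkKey, byte[]> chunks =
        new LinkedHashMap<ChunkKey, byte[]>(64, 0.75f, true) {
            @Override protected boolean removeEldestEntry(Map.Entry<ChunkKey, byte[]> eldest) {
                return size() > 64;
            }
        };

    public void put(String file, int cx, int cz, byte[] data) {
        chunks.put(new ChunkKey(file, cx, cz), data);
    }

    public byte[] get(String file, int cx, int cz) {
        return chunks.get(new ChunkKey(file, cx, cz));
    }
}
```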

Anyway, as I said earlier, please excuse the question if the answer is obvious, I am but a lowly newb :D. I greatly appreciate any input :).

Thanks.


Comments (2)

晚风撩人 2024-10-15 16:43:25

If your aim is to reduce I/O overhead, how about placing the byte[][][] object in a wrapper object which adds the concept of a dirty flag?

That way you can reduce the number of times a file is written upon modification, only writing dirty objects to disk when you are either done using the cache or about to remove an eldest object while inserting into a full cache.
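(A minimal sketch of such a wrapper, with hypothetical names like `CachedGrid` and `markClean`: the flag is set on mutation and cleared after the caller flushes, so only modified grids ever need a disk write:)

```java
public class CachedGrid {
    private final byte[][][] grid;
    private boolean dirty; // true once the grid has changed since the last flush

    public CachedGrid(byte[][][] grid) { this.grid = grid; }

    public byte[] getChunk(int cx, int cz) { return grid[cx][cz]; }

    public void setChunk(int cx, int cz, byte[] data) {
        grid[cx][cz] = data;
        dirty = true;          // mark for write-back
    }

    public boolean isDirty() { return dirty; }

    // The caller serializes the grid to disk, then clears the flag here.
    public void markClean() { dirty = false; }
}
```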

來不及說愛妳 2024-10-15 16:43:25

I would start by creating a new class -- call it ByteMatrix3D -- to hold the data. And rather than using byte[][][], I'd use a single-dimension array with calculated offsets (e.g., in an 8x8x8 array, the offset of [1][2][3] could be calculated as 1 * 64 + 2 * 8 + 3). This change will eliminate quite a bit of object-management overhead, and also let you make additional changes without affecting the higher-level code.

And the first change that I'd make would be to use a MappedByteBuffer to access the files. This would let the operating system manage the actual data, and make reads and writes transparent to the program.
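(A sketch of the flat-array idea, hard-coded to 8×8×8 for illustration; the index arithmetic matches the 1 * 64 + 2 * 8 + 3 example above:)

```java
public class ByteMatrix3D {
    private static final int DIM = 8;                // 8 x 8 x 8, as in the example
    private final byte[] data = new byte[DIM * DIM * DIM];

    // Row-major offset: [1][2][3] -> 1*64 + 2*8 + 3 = 83
    private static int offset(int x, int y, int z) {
        return x * DIM * DIM + y * DIM + z;
    }

    public byte get(int x, int y, int z) { return data[offset(x, y, z)]; }

    public void set(int x, int y, int z, byte v) { data[offset(x, y, z)] = v; }
}
```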
