How to avoid OutOfMemoryError when using ByteBuffers and NIO?

Posted 2024-07-05 02:45:15

I'm using ByteBuffers and FileChannels to write binary data to a file. When doing that for big files or successively for multiple files, I get an OutOfMemoryError.
I've read elsewhere that using ByteBuffers with NIO is broken and should be avoided. Have any of you already faced this kind of problem and found a solution to efficiently save large amounts of binary data to a file in Java?

Is the JVM option -XX:MaxDirectMemorySize the way to go?
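
For illustration, here is a minimal sketch of this kind of write. It is hypothetical, not the actual code in question: the file name, the payload size, and the choice to allocate one direct buffer sized to the whole payload are all assumptions, but that last pattern is exactly the one that tends to exhaust direct memory (the pool capped by -XX:MaxDirectMemorySize).

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class BigWrite {
    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[512 * 1024 * 1024]; // stand-in for the binary data

        // One direct buffer as large as the whole payload: repeated for several
        // big files, this is the kind of allocation that can run out of direct memory.
        ByteBuffer buf = ByteBuffer.allocateDirect(payload.length);
        buf.put(payload);
        buf.flip();

        try (FileChannel ch = FileChannel.open(Path.of("out.bin"),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            while (buf.hasRemaining()) {
                ch.write(buf);
            }
        }
    }
}
```
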


Comments (6)

寻找一个思念的角度 2024-07-12 02:45:15


The previous two responses seem pretty reasonable. As for whether the command line switch will work, it depends on how quickly your memory usage hits the limit. If you don't have enough RAM and virtual memory available to at least triple the available memory, then you will need to use one of the alternative suggestions given.

怼怹恏 2024-07-12 02:45:15


This can depend on the particular JDK vendor and version.

There is a bug in the GC of some Sun JVMs. A shortage of direct memory will not trigger a GC in the main heap, but the direct memory is pinned down by garbage direct ByteBuffers in the main heap. If the main heap is mostly empty, they may not be collected for a long time.

This can burn you even if you aren't using direct buffers on your own, because the JVM may be creating direct buffers on your behalf. For instance, writing a non-direct ByteBuffer to a SocketChannel creates a direct buffer under the covers to use for the actual I/O operation.

The workaround is to use a small number of direct buffers yourself, and keep them around for reuse.
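
A rough sketch of that workaround, assuming the data arrives as byte arrays (the buffer size and class name are illustrative): one modest direct buffer is allocated once and reused for every write, so no new direct memory is requested per operation.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class ReusableDirectBufferWriter {
    // One long-lived direct buffer, reused for every write.
    private final ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024);

    public void write(FileChannel channel, byte[] data) throws IOException {
        int offset = 0;
        while (offset < data.length) {
            direct.clear();
            int chunk = Math.min(direct.remaining(), data.length - offset);
            direct.put(data, offset, chunk);
            direct.flip();
            while (direct.hasRemaining()) {
                channel.write(direct);
            }
            offset += chunk;
        }
    }
}
```

Because the channel is handed a buffer that is already direct, the JVM has no reason to create a hidden temporary direct buffer behind the scenes.
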

计㈡愣 2024-07-12 02:45:15


Using the transferFrom method should help with this, assuming you write to the channel incrementally and not all at once, as previous answers also point out.
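
A minimal sketch of that idea (the chunk size and names are placeholders, and it assumes a blocking source channel): the destination FileChannel pulls the data in bounded chunks via transferFrom, so no application-level buffer ever has to hold the whole payload.

```java
import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.channels.ReadableByteChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChannelCopy {
    // Copies everything from src into the target file, one bounded chunk at a time.
    public static void copy(ReadableByteChannel src, Path target) throws IOException {
        try (FileChannel out = FileChannel.open(target,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            long position = 0;
            long transferred;
            // 8 MB per call keeps memory use bounded regardless of total size.
            // A blocking source returns 0 only at end-of-stream, ending the loop.
            while ((transferred = out.transferFrom(src, position, 8 * 1024 * 1024)) > 0) {
                position += transferred;
            }
        }
    }
}
```
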

夜巴黎 2024-07-12 02:45:15


If you access files in a random fashion (read here, skip, write there, move back) then you have a problem ;-)

But if you only write big files, you should seriously consider using streams. java.io.FileOutputStream can be used directly to write a file byte after byte, or wrapped in any other stream (e.g. DataOutputStream, ObjectOutputStream) for the convenience of writing floats, ints, Strings or even serializable objects. Similar classes exist for reading files.

Streams offer you the convenience of manipulating arbitrarily large files in (almost) arbitrarily little memory. They are the preferred way of accessing the file system in the vast majority of cases.
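
A small sketch of the stream approach (the file name, record count and values are made up):

```java
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class StreamWriteExample {
    public static void main(String[] args) throws IOException {
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream("data.bin")))) {
            // Memory use stays bounded by BufferedOutputStream's internal buffer
            // (8 KB by default), no matter how many records are written.
            for (int i = 0; i < 1_000_000; i++) {
                out.writeInt(i);
                out.writeDouble(Math.sqrt(i));
            }
        }
    }
}
```
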

殤城〤 2024-07-12 02:45:15


I would say don't create a huge ByteBuffer that contains ALL of the data at once. Create a much smaller ByteBuffer, fill it with data, then write this data to the FileChannel. Then reset the ByteBuffer and continue until all the data is written.
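
One way that loop might look (the buffer size and data are illustrative); clear() plays the role of the "reset" mentioned above:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ChunkedWriter {
    public static void main(String[] args) throws IOException {
        ByteBuffer buf = ByteBuffer.allocate(64 * 1024); // small, fixed-size buffer

        try (FileChannel ch = FileChannel.open(Path.of("values.bin"),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            for (long i = 0; i < 10_000_000L; i++) {
                if (buf.remaining() < Long.BYTES) {
                    flush(buf, ch); // buffer full: write it out and start over
                }
                buf.putLong(i);
            }
            flush(buf, ch); // write whatever is left
        }
    }

    private static void flush(ByteBuffer buf, FileChannel ch) throws IOException {
        buf.flip();
        while (buf.hasRemaining()) {
            ch.write(buf);
        }
        buf.clear(); // "reset" the buffer for the next round
    }
}
```

The same cycle works with allocateDirect, which also avoids the temporary direct copy the JVM otherwise makes when writing a heap buffer to a channel.
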

双手揣兜 2024-07-12 02:45:15


Check out Java's mapped byte buffers, also known as 'direct buffers'. Basically, this mechanism uses the OS's virtual memory paging system to 'map' your buffer directly to disk. The OS will manage moving the bytes to/from disk and memory auto-magically, very quickly, and you won't have to worry about changing your virtual machine options. This will also allow you to take advantage of NIO's improved performance over traditional Java stream-based I/O, without any weird hacks.
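
A minimal sketch of writing through a mapped buffer (the region size, file name and values are placeholders):

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MappedWriteExample {
    public static void main(String[] args) throws IOException {
        long size = 256L * 1024 * 1024; // region to map; must be large enough for the data

        try (FileChannel ch = FileChannel.open(Path.of("mapped.bin"),
                StandardOpenOption.CREATE, StandardOpenOption.READ,
                StandardOpenOption.WRITE)) {
            // The OS pages this region in and out; the Java heap is not involved.
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, size);
            for (long i = 0; i < size; i += Long.BYTES) {
                map.putLong(i / Long.BYTES);
            }
            map.force(); // flush dirty pages to disk (optional)
        }
    }
}
```

Note that unmapping is not under your control: the mapping is released only when the MappedByteBuffer itself is garbage collected, which ties back to the GC caveat in an earlier answer.
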

The only two catches that I can think of are:

  1. On a 32-bit system, you are limited to just under 4GB total for all mapped byte buffers. (That is actually a limit for my application, and I now run on 64-bit architectures.)
  2. The implementation is JVM-specific and not a requirement. I use Sun's JVM and there are no problems, but YMMV.

Kirk Pepperdine (a somewhat famous Java performance guru) is involved with a website, www.JavaPerformanceTuning.com, that has some more MBB details: NIO Performance Tips
