What is the maximum possible heap size for a 64-bit JVM?

Published 2024-12-08 03:37:11

The theoretical maximum heap value that can be set with -Xmx in a 32-bit system is of course 2^32 bytes, but typically (see: Understanding max JVM heap size - 32bit vs 64bit) one cannot use all 4GB.

For a 64-bit JVM running in a 64-bit OS on a 64-bit machine, is there any limit besides the theoretical limit of 2^64 bytes or 16 exabytes?

I know that for various reasons (mostly garbage collection), excessively large heaps might not be wise, but in light of reading about servers with terabytes of RAM, I'm wondering what is possible.

6 Answers

极度宠爱 2024-12-15 03:37:11

If you want to use 32-bit references, your heap is limited to 32 GB.

However, if you are willing to use 64-bit references, the size is likely to be limited by your OS, just as it is with a 32-bit JVM; e.g. on 32-bit Windows this is 1.2 to 1.5 GB.

Note: you will want your JVM heap to fit into main memory, ideally inside one NUMA region. That's about 1 TB on the bigger machines. If your JVM spans NUMA regions, memory access, and the GC in particular, will take much longer. If your JVM heap starts swapping, a GC might take hours, or even make your machine unusable as it thrashes the swap drive.
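
(As an aside, HotSpot also has a -XX:+UseNUMA flag that makes heap allocation NUMA-aware; whether it helps depends on the collector in use, so treat it as something to measure rather than a given.)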

Note: You can access large direct-memory and memory-mapped sizes even if you use 32-bit references in your heap, i.e. well above 32 GB.
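
As a rough illustration of that note, here is a minimal sketch (the class name DirectDemo and the flag values are illustrative, not recommendations): direct buffers live outside the -Xmx heap and are capped separately by -XX:MaxDirectMemorySize.

    import java.nio.ByteBuffer;

    // Sketch: allocate 1 GB of native (off-heap) memory; this does not count against -Xmx.
    // Launch with e.g.:  java -Xmx2g -XX:MaxDirectMemorySize=64g DirectDemo
    public class DirectDemo {
        public static void main(String[] args) {
            ByteBuffer buf = ByteBuffer.allocateDirect(1 << 30); // 1 GB outside the Java heap
            System.out.println("Direct buffer capacity: " + buf.capacity() + " bytes");
        }
    }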

Compressed oops in the Hotspot JVM

Compressed oops represent managed pointers (in many but not all places in the JVM) as 32-bit values which must be scaled by a factor of 8 and added to a 64-bit base address to find the object they refer to. This allows applications to address up to four billion objects (not bytes), or a heap size of up to about 32Gb. At the same time, data structure compactness is competitive with ILP32 mode.
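
In other words: 2^32 possible oop values × 8-byte object alignment = 2^35 bytes = 32 GiB of addressable heap, which is where the roughly 32 GB figure comes from.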

相思碎 2024-12-15 03:37:11

The answer clearly depends on the JVM implementation. Azul claim that their JVM

can scale ... to more than a 1/2 Terabyte of memory

By "can scale" they appear to mean "runs wells", as opposed to "runs at all".

风吹过旳痕迹 2024-12-15 03:37:11

Windows imposes a memory limit per process; you can see what it is for each version here.

See:

User-mode virtual address space for each 64-bit process:
With IMAGE_FILE_LARGE_ADDRESS_AWARE set (the default):
x64: 8 TB
Intel IPF: 7 TB
With IMAGE_FILE_LARGE_ADDRESS_AWARE cleared: 2 GB

秋日私语 2024-12-15 03:37:11

I tried it: -Xmx32255M is accepted by the VM args and still allows compressed oops.
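
If you want to confirm that compressed oops are actually still in effect at a given -Xmx, one way is to ask the HotSpot diagnostic MXBean; a minimal sketch (the class name OopsCheck is illustrative, and it assumes a HotSpot JVM):

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    // Sketch: report whether compressed oops are enabled for the heap size this
    // JVM was launched with, e.g.:  java -Xmx32255M OopsCheck
    public class OopsCheck {
        public static void main(String[] args) {
            HotSpotDiagnosticMXBean hs =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            System.out.println("UseCompressedOops = " + hs.getVMOption("UseCompressedOops").getValue());
            System.out.println("Max heap (bytes)  = " + Runtime.getRuntime().maxMemory());
        }
    }

Alternatively, launching with -XX:+PrintFlagsFinal and searching the output for UseCompressedOops shows the same information without writing any code.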

月下伊人醉 2024-12-15 03:37:11

For a 64-bit JVM running in a 64-bit OS on a 64-bit machine, is there any limit besides the theoretical limit of 2^64 bytes or 16 exabytes?

You also have to take hardware limits into account. While pointers may be 64-bit, current CPUs can only address less than 2^64 bytes' worth of virtual memory; for example, x86-64 has traditionally implemented 48-bit virtual addresses, i.e. 256 TiB of addressable virtual memory.

With uncompressed pointers the HotSpot JVM needs a contiguous chunk of virtual address space for its heap. So the second hurdle after hardware is the operating system providing such a large chunk; not all OSes support this.

And the third one is practicality. Even if you can have that much virtual memory, it does not mean the CPU supports that much physical memory, and without enough physical memory you will end up swapping, which will adversely affect the performance of the JVM because the GCs generally have to touch a large fraction of the heap.

As other answers mention compressed oops: by bumping the object alignment above 8 bytes, the limit with compressed oops can be increased beyond 32 GB.
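
For example, a sketch of the relevant launch flags (the heap size and the application name MyApp are placeholders, and the numbers are not benchmarks):

    java -Xmx48g -XX:+UseCompressedOops -XX:ObjectAlignmentInBytes=16 MyApp

With -XX:ObjectAlignmentInBytes=16, a 32-bit oop can address 2^32 × 16 bytes ≈ 64 GB of heap, at the cost of extra padding in every object, so whether this is a net win depends on your object sizes.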

烟─花易冷 2024-12-15 03:37:11

In theory everything is possible, but in reality you will find the numbers much lower than you might expect.
I have often tried to use huge heaps on servers, and found that even though a server can have huge amounts of memory, most software can never actually make use of it in real scenarios, simply because the CPUs are not fast enough to really work through it.
Why is that? Timing: that is the endless downfall of every enormous machine I have worked on.
So I would advise not to go overboard and allocate huge amounts just because you can, but to allocate what you think will actually be used.
Actual values are often much lower than you expected.
Of course none of us really runs HP 9000 systems at home, and most of you will never come near the capacity of your home system.
For instance, most users do not have more than 16 GB of memory in their system. Of course some casual gamers use workstations for a game once a month, but I bet that is a very small percentage.
So, coming down to earth, on an 8 GB 64-bit system I would allocate not much more than 512 MB of heap space, or, if you want to go overboard, try 1 GB. I am pretty sure even these numbers are pure overkill.
I have constantly monitored memory usage during gaming to see if the setting would make any difference, but noticed no difference at all whether I set much lower or much larger values. Even on servers and workstations there was no visible change in performance, no matter how large I set the values.
That is not to say some Java users might not be able to make use of a larger heap, but so far I have not seen any application that needed so much.
Of course I assume there would be a small difference in performance if Java instances ran out of heap space to work with.
So far I have not seen that at all; however, a lack of actually installed memory caused instant drops in performance when too much heap space was configured.
When you have a 4 GB system you quickly run out of memory, and then you will see errors and slowdowns, because more space was allocated than is actually free in the system, so the OS starts using drive space to make up for the shortage and begins to swap.
