Virtual memory usage from Java under Linux, too much memory used

I have a problem with a Java application running under Linux.

When I launch the application, using the default maximum heap size (64 MB), I see using the top utility that 240 MB of virtual memory is allocated to the application. This creates some issues with some other software on the computer, which is relatively resource-limited.

The reserved virtual memory will not be used anyway, as far as I understand, because once we reach the heap limit an OutOfMemoryError is thrown. I ran the same application under Windows and I see that the virtual memory size and the heap size are similar.

Is there any way that I can configure the virtual memory in use for a Java process under Linux?

Edit 1: The problem is not the heap. The problem is that if I set a heap of 128 MB, for example, Linux still allocates 210 MB of virtual memory, which is never needed.

Edit 2: Using ulimit -v allows limiting the amount of virtual memory. If the size set is below 204 MB, then the application won't run even though it doesn't need 204 MB, only 64 MB. So I want to understand why Java requires so much virtual memory. Can this be changed?

Edit 3: There are several other applications running in the system, which is embedded. And the system does have a virtual memory limit (from comments, important detail).

Comments (8)

梦罢 2024-07-21 00:17:13

This has been a long-standing complaint with Java, but it's largely meaningless, and usually based on looking at the wrong information. The usual phrasing is something like "Hello World on Java takes 10 megabytes! Why does it need that?" Well, here's a way to make Hello World on a 64-bit JVM claim to take over 4 gigabytes ... at least by one form of measurement.

java -Xms1024m -Xmx4096m com.example.Hello

Different Ways to Measure Memory

On Linux, the top command gives you several different numbers for memory. Here's what it says about the Hello World example:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 2120 kgregory  20   0 4373m  15m 7152 S    0  0.2   0:00.10 java
  • VIRT is the virtual memory space: the sum of everything in the virtual memory map (see below). It is largely meaningless, except when it isn't (see below).
  • RES is the resident set size: the number of pages that are currently resident in RAM. In almost all cases, this is the only number that you should use when saying "too big." But it's still not a very good number, especially when talking about Java.
  • SHR is the amount of resident memory that is shared with other processes. For a Java process, this is typically limited to shared libraries and memory-mapped JAR files. In this example, I only had one Java process running, so I suspect that the 7M is a result of libraries used by the OS.
  • SWAP isn't turned on by default, and isn't shown here. It indicates the amount of virtual memory that is currently resident on disk, whether or not it's actually in the swap space. The OS is very good about keeping active pages in RAM, and the only cures for swapping are (1) buy more memory, or (2) reduce the number of processes, so it's best to ignore this number.

The situation for Windows Task Manager is a bit more complicated. Under Windows XP, there are "Memory Usage" and "Virtual Memory Size" columns, but the official documentation is silent on what they mean. Windows Vista and Windows 7 add more columns, and they're actually documented. Of these, the "Working Set" measurement is the most useful; it roughly corresponds to the sum of RES and SHR on Linux.
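If you want to capture the same numbers from a script rather than from top's interactive display, ps and /proc expose them as VSZ and RSS; a minimal sketch, reusing the PID from the top output above (any running Java PID will do):

# VSZ (top's VIRT) and RSS (top's RES) in kilobytes, headers suppressed
ps -o vsz=,rss= -p 2120

# the same figures straight from the kernel
grep -E 'VmSize|VmRSS' /proc/2120/status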

Understanding the Virtual Memory Map

The virtual memory consumed by a process is the total of everything that's in the process memory map. This includes data (eg, the Java heap), but also all of the shared libraries and memory-mapped files used by the program. On Linux, you can use the pmap command to see all of the things mapped into the process space (from here on out I'm only going to refer to Linux, because it's what I use; I'm sure there are equivalent tools for Windows). Here's an excerpt from the memory map of the "Hello World" program; the entire memory map is over 100 lines long, and it's not unusual to have a thousand-line list.

0000000040000000     36K r-x--  /usr/local/java/jdk-1.6-x64/bin/java
0000000040108000      8K rwx--  /usr/local/java/jdk-1.6-x64/bin/java
0000000040eba000    676K rwx--    [ anon ]
00000006fae00000  21248K rwx--    [ anon ]
00000006fc2c0000  62720K rwx--    [ anon ]
0000000700000000 699072K rwx--    [ anon ]
000000072aab0000 2097152K rwx--    [ anon ]
00000007aaab0000 349504K rwx--    [ anon ]
00000007c0000000 1048576K rwx--    [ anon ]
...
00007fa1ed00d000   1652K r-xs-  /usr/local/java/jdk-1.6-x64/jre/lib/rt.jar
...
00007fa1ed1d3000   1024K rwx--    [ anon ]
00007fa1ed2d3000      4K -----    [ anon ]
00007fa1ed2d4000   1024K rwx--    [ anon ]
00007fa1ed3d4000      4K -----    [ anon ]
...
00007fa1f20d3000    164K r-x--  /usr/local/java/jdk-1.6-x64/jre/lib/amd64/libjava.so
00007fa1f20fc000   1020K -----  /usr/local/java/jdk-1.6-x64/jre/lib/amd64/libjava.so
00007fa1f21fb000     28K rwx--  /usr/local/java/jdk-1.6-x64/jre/lib/amd64/libjava.so
...
00007fa1f34aa000   1576K r-x--  /lib/x86_64-linux-gnu/libc-2.13.so
00007fa1f3634000   2044K -----  /lib/x86_64-linux-gnu/libc-2.13.so
00007fa1f3833000     16K r-x--  /lib/x86_64-linux-gnu/libc-2.13.so
00007fa1f3837000      4K rwx--  /lib/x86_64-linux-gnu/libc-2.13.so
...

A quick explanation of the format: each row starts with the virtual memory address of the segment. This is followed by the segment size, permissions, and the source of the segment. This last item is either a file or "anon", which indicates a block of memory allocated via mmap.
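To see how those per-segment sizes add up to top's VIRT figure, the procps version of pmap can print extended output with a total line; a rough sketch, again using the PID from the top output (column layout varies a little between pmap versions):

# per-mapping Kbytes and RSS, ending with a "total" line that matches VIRT/RES
pmap -x 2120 | tail -n 5

# largest mappings first: the -Xmx reservation and thread stacks stand out
pmap 2120 | sort -k 2 -n -r | head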

Starting from the top, we have

  • The JVM loader (ie, the program that gets run when you type java). This is very small; all it does is load in the shared libraries where the real JVM code is stored.
  • A bunch of anon blocks holding the Java heap and internal data. This is a Sun JVM, so the heap is broken into multiple generations, each of which is its own memory block. Note that the JVM allocates virtual memory space based on the -Xmx value; this allows it to have a contiguous heap. The -Xms value is used internally to say how much of the heap is "in use" when the program starts, and to trigger garbage collection as that limit is approached.
  • A memory-mapped JARfile, in this case the file that holds the "JDK classes." When you memory-map a JAR, you can access the files within it very efficiently (versus reading it from the start each time). The Sun JVM will memory-map all JARs on the classpath; if your application code needs to access a JAR, you can also memory-map it.
  • Per-thread data for two threads. The 1M block is the thread stack. I didn't have a good explanation for the 4k block, but @ericsoe identified it as a "guard block": it has no read/write permissions, so any access causes a segmentation fault, which the JVM catches and translates into a StackOverflowError. For a real app, you will see dozens if not hundreds of these entries repeated through the memory map.
  • One of the shared libraries that holds the actual JVM code. There are several of these.
  • The shared library for the C standard library. This is just one of many things that the JVM loads that are not strictly part of Java.

The shared libraries are particularly interesting: each shared library has at least two segments: a read-only segment containing the library code, and a read-write segment that contains global per-process data for the library (I don't know what the segment with no permissions is; I've only seen it on x64 Linux). The read-only portion of the library can be shared between all processes that use the library; for example, libc has 1.5M of virtual memory space that can be shared.
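A quick way to convince yourself that most of VIRT is just the up-front -Xmx reservation described above is to start the same class with two different -Xmx values and compare the sizes; a sketch, assuming the program stays alive long enough to inspect (the Hello World class would need a sleep at the end):

java -Xmx64m  com.example.Hello &
java -Xmx512m com.example.Hello &
# VSZ grows roughly with -Xmx, while RSS barely moves
ps -o pid,vsz,rss,args -C java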

When is Virtual Memory Size Important?

The virtual memory map contains a lot of stuff. Some of it is read-only, some of it is shared, and some of it is allocated but never touched (eg, almost all of the 4Gb of heap in this example). But the operating system is smart enough to only load what it needs, so the virtual memory size is largely irrelevant.

Where virtual memory size is important is if you're running on a 32-bit operating system, where you can only allocate 2Gb (or, in some cases, 3Gb) of process address space. In that case you're dealing with a scarce resource, and might have to make tradeoffs, such as reducing your heap size in order to memory-map a large file or create lots of threads.
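If the squeeze comes mainly from thread stacks (the 1M-per-thread blocks seen in the map above), the per-thread reservation can be reduced with -Xss; a hedged example, since the safe minimum depends on how deep your call stacks get:

# reserve 256 KB per thread stack instead of the roughly 1 MB default seen above
java -Xss256k -Xmx64m com.example.Hello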

But, given that 64-bit machines are ubiquitous, I don't think it will be long before Virtual Memory Size is a completely irrelevant statistic.

When is Resident Set Size Important?

Resident Set size is that portion of the virtual memory space that is actually in RAM. If your RSS grows to be a significant portion of your total physical memory, it might be time to start worrying. If your RSS grows to take up all your physical memory, and your system starts swapping, it's well past time to start worrying.

But RSS is also misleading, especially on a lightly loaded machine. The operating system doesn't expend a lot of effort reclaiming the pages used by a process; there's little benefit to be gained by doing so, and the potential for an expensive page fault if the process touches the page in the future. As a result, the RSS statistic may include lots of pages that aren't in active use.

Bottom Line

Unless you're swapping, don't get overly concerned about what the various memory statistics are telling you. With the caveat that an ever-growing RSS may indicate some sort of memory leak.

With a Java program, it's far more important to pay attention to what's happening in the heap. The total amount of space consumed is important, and there are some steps that you can take to reduce that. More important is the amount of time that you spend in garbage collection, and which parts of the heap are getting collected.
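Two long-standing ways to watch the heap itself, rather than the process-level numbers, are the JVM's GC logging and the JDK's jstat tool; a sketch, with the PID standing in for your running JVM:

# log every collection: heap occupancy before/after and pause time
java -verbose:gc com.example.Hello

# sample heap-region occupancy and GC counts once per second
jstat -gcutil 2120 1000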

Accessing the disk (ie, a database) is expensive, and memory is cheap. If you can trade one for the other, do so.

ぃ弥猫深巷。 2024-07-21 00:17:13

There is a known problem with Java and glibc >= 2.10 (includes Ubuntu >= 10.04, RHEL >= 6).
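To check whether the glibc on your system is in the affected range (>= 2.10), either of these should work on a typical GNU/Linux box:

getconf GNU_LIBC_VERSION
ldd --version | head -n 1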

The cure is to set this env. variable:

export MALLOC_ARENA_MAX=4

If you are running Tomcat, you can add this to the TOMCAT_HOME/bin/setenv.sh file.

For Docker, add this to your Dockerfile:

ENV MALLOC_ARENA_MAX=4
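If the JVM is launched from a plain shell script rather than Tomcat or Docker, the variable can also be set inline so that it only applies to that one process (the jar name below is a placeholder):

MALLOC_ARENA_MAX=4 java -jar yourapp.jar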

There is an IBM article about setting MALLOC_ARENA_MAX
https://www.ibm.com/developerworks/community/blogs/kevgrig/entry/linux_glibc_2_10_rhel_6_malloc_may_show_excessive_virtual_memory_usage?lang=en

This blog post says

resident memory has been known to creep in a manner similar to a
memory leak or memory fragmentation.

There is also an open JDK bug JDK-8193521 "glibc wastes memory with default configuration"

Search for MALLOC_ARENA_MAX on Google or SO for more references.

You might also want to tune other malloc options to optimize for low fragmentation of allocated memory:

# tune glibc memory allocation, optimize for low fragmentation
# limit the number of arenas
export MALLOC_ARENA_MAX=2
# disable dynamic mmap threshold, see M_MMAP_THRESHOLD in "man mallopt"
export MALLOC_MMAP_THRESHOLD_=131072
export MALLOC_TRIM_THRESHOLD_=131072
export MALLOC_TOP_PAD_=131072
export MALLOC_MMAP_MAX_=65536
窗影残 2024-07-21 00:17:13

The amount of memory allocated for the Java process is pretty much on par with what I would expect. I've had similar problems running Java on embedded/memory-limited systems. Running any application with arbitrary VM limits, or on systems that don't have adequate amounts of swap, tends to break. It seems to be the nature of many modern apps that aren't designed for use on resource-limited systems.

You have a few more options you can try to limit your JVM's memory footprint. These might reduce the virtual memory footprint:

-XX:ReservedCodeCacheSize=32m Reserved code cache size (in bytes) - maximum
code cache size. [Solaris 64-bit,
amd64, and -server x86: 48m; in
1.5.0_06 and earlier, Solaris 64-bit and amd64: 1024m.]

-XX:MaxPermSize=64m Size of the Permanent Generation. [5.0 and newer:
64 bit VMs are scaled 30% larger; 1.4
amd64: 96m; 1.3.1 -client: 32m.]

Also, you should set your -Xmx (max heap size) to a value as close as possible to the actual peak memory usage of your application. I believe the default behavior of the JVM is still to double the heap size each time it expands it, up to the max. If you start with a 32M heap and your app peaks at 65M, then the heap would end up growing 32M -> 64M -> 128M.
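If you would rather avoid that growth behaviour altogether, a common tactic is to pin the heap by setting -Xms equal to -Xmx, which makes the heap size predictable from the start; a sketch with placeholder sizes:

java -Xms64m -Xmx64m com.example.Hello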

You might also try this to make the VM less aggressive about growing the heap:

-XX:MinHeapFreeRatio=40 Minimum percentage of heap free after GC to
avoid expansion.
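Putting this answer's suggestions on one command line might look like the sketch below; the values are placeholders to tune for your application, com.example.Hello stands in for your main class, and -XX:MaxPermSize only exists on the older (pre-Java-8) JVMs this answer was written against:

java -Xmx64m -XX:MaxPermSize=64m -XX:ReservedCodeCacheSize=32m com.example.Hello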

Also, from what I recall from experimenting with this a few years ago, the number of native libraries loaded had a huge impact on the minimum footprint. Loading java.net.Socket added more than 15M if I recall correctly (and I probably don't).

无人问我粥可暖 2024-07-21 00:17:13

The Sun JVM requires a lot of memory for HotSpot and it maps in the runtime libraries in shared memory.

If memory is an issue consider using another JVM suitable for embedding. IBM has J9, and there is the open-source "jamvm" which uses GNU Classpath libraries. Also, Sun has the Squawk JVM running on SunSPOTs, so there are alternatives.

脱离于你 2024-07-21 00:17:13

One way of reducing the heap size of a system with limited resources may be to play around with the -XX:MaxHeapFreeRatio variable. This is usually set to 70, and is the maximum percentage of the heap that is free before the GC shrinks it. Set it to a lower value and you will see, e.g. in the jvisualvm profiler, that a smaller heap size is usually used for your program.

EDIT: To set small values for -XX:MaxHeapFreeRatio you must also set -XX:MinHeapFreeRatio, e.g.

java -XX:MinHeapFreeRatio=10 -XX:MaxHeapFreeRatio=25 HelloWorld

EDIT2: Added an example for a real application that starts and does the same task, one with default parameters and one with 10 and 25 as parameters. I didn't notice any real speed difference, although java in theory should use more time to increase the heap in the latter example.

Default parameters

At the end, max heap is 905, used heap is 378

MinHeap 10, MaxHeap 25

At the end, max heap is 722, used heap is 378

This actually has some impact, as our application runs on a remote desktop server, and many users may run it at once.

别想她 2024-07-21 00:17:13

Just a thought, but you may check the influence of the ulimit -v option.

That is not an actual solution, since it would limit the address space available for all processes, but it would allow you to check the behavior of your application with a limited amount of virtual memory.
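One way to try that without capping your whole shell session is to apply the limit in a subshell so it only affects that single JVM launch; the 245760 KB value (240 MB) simply echoes the figure discussed in the question:

# cap the virtual address space at 240 MB for this one launch only
( ulimit -v 245760; java -Xmx64m com.example.Hello )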

一袭水袖舞倾城 2024-07-21 00:17:13

No, you can't configure the amount of memory needed by the VM. However, note that this is virtual memory, not resident memory, so it just stays there without harm if not actually used.

Alternatively, you can try some other JVM than the Sun one, with a smaller memory footprint, but I can't advise here.

妥活 2024-07-21 00:17:13

Sun's Java 1.4 has the following arguments to control memory size:

-Xmsn
Specify the initial size, in bytes, of the memory allocation pool.
This value must be a multiple of 1024
greater than 1MB. Append the letter k
or K to indicate kilobytes, or m or M
to indicate megabytes. The default
value is 2MB. Examples:

           -Xms6291456
           -Xms6144k
           -Xms6m

-Xmxn
Specify the maximum size, in bytes, of the memory allocation pool.
This value must be a multiple of 1024
greater than 2MB. Append the letter k
or K to indicate kilobytes, or m or M
to indicate megabytes. The default
value is 64MB. Examples:

           -Xmx83886080
           -Xmx81920k
           -Xmx80m

http://java.sun.com/j2se/1.4.2/docs/tooldocs/windows/java.html

Java 5 and 6 have some more. See http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp
