Maven build uses/allocates an enormous amount of memory
I have a decent-sized GWT (Google Web Toolkit) project that is built using Apache Maven. The build process involves generating 8 RPMs and 2 WARs.
I'm trying to build the project on a remote virtual server running CentOS 5.2 as a guest OS. Since the guest OS can't use swap space, I have to allocate a huge amount of memory to the box for the build to succeed; otherwise I get a Java "could not allocate memory" error (error=12). The build fails if there is less than 7 GB free. I suspect that most of this 7 GB is never used, but is allocated for some reason.
At the end of the build the output reads: [INFO] Final Memory: 178M/553M
I have MAVEN_OPTS set to -Xms256m -Xmx1024M.
I'm not sure how to make the Maven build use less memory. Any suggestions are much appreciated.
Note that forking plugins like the Maven GWT plugin (and Maven Surefire) use memory that is "outside" the total reported by the Maven execution. I would recommend correlating OS-level process sizes with the output of "jps -lv" to find out which fork is stealing all your memory.
If, for instance, a forked process fails to terminate for some reason, memory gets very crowded, very quickly.
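A minimal sketch of that correlation, assuming a Linux guest with procps installed and a JDK on the PATH (`jps` ships with the JDK; the exact ps column names are the common procps ones):

```shell
#!/bin/sh
# Show each forked JVM's main class and launch arguments...
jps -lv
# ...then the OS-level resident set size (RSS, in KB) of every
# Java process, largest first, so the two views can be matched by PID.
ps -eo pid,rss,args --sort=-rss | awk '/[j]ava/ {printf "%8s KB  pid %s\n", $2, $1}'
```

A forked JVM whose RSS keeps growing between builds, or that is still listed after the build finishes, is the likely culprit.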
That output indicates the build only ever needed a maximum of 553M, so the setting in MAVEN_OPTS is already above what you need. Are you saying you want to use less than that, or are you currently getting an error?
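Worth noting alongside this: MAVEN_OPTS only sizes the main Maven JVM, not the JVMs forked by plugins. If a forked GWT compiler or test JVM turns out to be the one holding the memory, its heap is capped in the plugin configuration instead. A hedged pom.xml sketch (the heap sizes and version-less plugin declarations are illustrative, not from the original post):

```xml
<!-- pom.xml fragment: cap heap for JVMs forked by the build -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>gwt-maven-plugin</artifactId>
  <configuration>
    <!-- passed to the forked GWT compiler JVM -->
    <extraJvmArgs>-Xmx512m</extraJvmArgs>
  </configuration>
</plugin>
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- passed to each forked test JVM -->
    <argLine>-Xmx256m</argLine>
  </configuration>
</plugin>
```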