What are the best Java settings for running Berkeley DB JE?

Posted on 2024-12-23 08:47:26

I have a machine that inserts around 502,000,000 rows into a BDB JE. An example of a key and a value is:

juhnegferseS0004-47-19332   39694.290336

All of the keys and values are roughly of the same length. The JVM is started with the following parameters:

-Xmx9G -Xms9G -XX:+UseConcMarkSweepGC -XX:NewSize=1024m -server

But still, when it reaches ~50,000,000 rows, the JVM is "Killed" (I just get the message "Killed" and don't know how or by whom it gets killed). My guess is that it tries to run garbage collection and then cannot free up enough memory, or something similar. But with that much -Xmx, I would expect it not to have any problems.

I use deferredWrites and the size of log files is set to 100MB. Switching to Base API from DPL did not make any difference.
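For reference, the setup described above (deferred writes, 100 MB log files) maps onto the JE Base API roughly as follows. This is a minimal sketch: the environment directory `bdb-env`, the database name `rows`, and the single put are illustrative, not taken from the original code.

```java
import java.io.File;

import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;

public class BulkInsertSketch {
    public static void main(String[] args) {
        File home = new File("bdb-env");
        home.mkdirs();  // the environment directory must exist

        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        // 100 MB log files, as described above (je.log.fileMax)
        envConfig.setConfigParam(EnvironmentConfig.LOG_FILE_MAX,
                                 String.valueOf(100 * 1024 * 1024));

        Environment env = new Environment(home, envConfig);

        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setAllowCreate(true);
        dbConfig.setDeferredWrite(true);  // buffer writes in the JE cache

        Database db = env.openDatabase(null, "rows", dbConfig);
        try {
            DatabaseEntry key =
                new DatabaseEntry("juhnegferseS0004-47-19332".getBytes());
            DatabaseEntry value =
                new DatabaseEntry("39694.290336".getBytes());
            db.put(null, key, value);
            db.sync();  // flush deferred writes to disk
        } finally {
            db.close();
            env.close();
        }
    }
}
```

With deferred writes, nothing is durable until `Database.sync()` (or a close) runs, which is why it is typically called at checkpoints during a long bulk load.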

I am using JDK 6.0 and SUSE x86_64 with 12GB of RAM. There are other processes that need the rest of the RAM, hence can't really allocate more than 9GB for this insertion task.

JVM:

java version "1.6.0_26"
Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)

Any tips for fixing this issue are appreciated.

Comments (3)

め可乐爱微笑 2024-12-30 08:47:26

I wouldn't expect the JVM to die, and would recommend trying a later (or perhaps even an earlier) JVM release (I'm talking about a minor version, e.g. JDK1.6.0_21 vs JDK1.6.0_22) to see if you can avoid triggering what is possibly a bug.

My other thought is that perhaps you're running into the Linux OOM killer issue (relating to memory overcommitting). See this Serverfault question for more info.
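If the OOM killer is responsible, the kernel log will say so. A quick way to check (log file paths vary by distribution; SUSE typically logs to /var/log/messages):

```shell
# Look for OOM-killer activity in the kernel ring buffer
dmesg | grep -i -E 'killed process|out of memory'

# Check the persistent kernel log as well (path may differ on your system)
grep -i 'out of memory' /var/log/messages

# Inspect the current overcommit policy (0 = heuristic, 2 = strict accounting)
cat /proc/sys/vm/overcommit_memory
```

A matching "Out of memory: Killed process ... (java)" line would confirm that the kernel, not the JVM itself, terminated the process.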

感悟人生的甜 2024-12-30 08:47:26

There is no single solution that is right for all situations. You will have to try different GC collectors to see which one performs best in your particular situation.

够运 2024-12-30 08:47:26

Though it's an old question, I recently ran into the same problem. What I did to solve it was to use a GC log analyzer (I found GCeasy to be excellent) together with Eclipse Memory Analyzer to get a deeper look at the problem.

I then found that the class com.sleepycat.je.tree.BIN was consuming almost all of the JVM's memory. In my case, the JE cache is not that important (my app is a migration app), so I set CacheMode.EVICT_BIN for my databases.

What I mean is that the solution may not lie in the JVM options, but in the app itself.
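Setting that cache mode per database looks roughly like this. This is a sketch under stated assumptions: `CacheMode.EVICT_BIN` requires a reasonably recent JE release (4.0 or later), and the environment directory and database name here are placeholders.

```java
import java.io.File;

import com.sleepycat.je.CacheMode;
import com.sleepycat.je.Database;
import com.sleepycat.je.DatabaseConfig;
import com.sleepycat.je.Environment;
import com.sleepycat.je.EnvironmentConfig;

public class EvictBinSketch {
    public static void main(String[] args) {
        File home = new File("bdb-env");
        home.mkdirs();  // the environment directory must exist

        EnvironmentConfig envConfig = new EnvironmentConfig();
        envConfig.setAllowCreate(true);
        Environment env = new Environment(home, envConfig);

        DatabaseConfig dbConfig = new DatabaseConfig();
        dbConfig.setAllowCreate(true);
        // Evict bottom internal nodes (BINs) as soon as each operation
        // finishes with them, so com.sleepycat.je.tree.BIN objects do not
        // accumulate in the cache during a write-heavy migration.
        dbConfig.setCacheMode(CacheMode.EVICT_BIN);

        Database db = env.openDatabase(null, "rows", dbConfig);
        // ... perform the migration inserts ...
        db.close();
        env.close();
    }
}
```

The trade-off is that reads lose the benefit of cached BINs, which is acceptable for a one-pass migration but would hurt a read-heavy workload.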
