HBase java.lang.OutOfMemoryError

Posted 2024-10-14 04:57:45

I'm having the following issue with HBase.

I have a script which starts the HBase shell and inserts many rows into a table with a single column. I have tried inserting 10,000 rows, but after about 1,700 I get the dreaded "java.lang.OutOfMemoryError: unable to create new native thread" error. I have tried changing the Java heap size from the default 1000 MB to 1800 MB, but this doesn't let me insert any more than the 1,700 or so rows.

However, I've noticed that I can insert 1,000 rows, exit the shell, restart the shell, insert 1,000 more into the same table, exit again, and so on. I don't understand enough about the JVM to figure out why it allows this across several sessions but won't allow a batch insert within a single session.

Can someone please explain to me what is going on here, and what I might do about it?

EDIT:

I am now using a 64-bit machine running Red Hat Linux 5, with Java 1.6. I'm giving HBase a heap size of 20 GB (I have ~32 GB of memory total).
For the stack size, I'm giving 8 MB. The default on 64-bit is 2 MB, I believe; with 2 MB I got this same error, and increasing it to 8 MB did not help at all (I was only able to insert the same number of rows, ~1,700, regardless of stack size).

I have read that decreasing the heap size can make this error go away, but that did not help either. Below are the JVM options I'm setting (everything is default except the stack size).

HBASE_OPTS="$HBASE_OPTS -ea -Xss8M -XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
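
As a point of reference, "unable to create new native thread" is raised when the JVM cannot get another thread stack from the operating system. Thread stacks live outside the Java heap, which is why changing -Xmx has no effect on it. A minimal standalone sketch (class name illustrative, not from the original post) that reproduces the same error by exhausting native threads:

public class ThreadExhaustion {
    public static void main(String[] args) {
        int count = 0;
        try {
            while (true) {
                Thread t = new Thread(new Runnable() {
                    public void run() {
                        try {
                            Thread.sleep(Long.MAX_VALUE); // park forever so the stack stays allocated
                        } catch (InterruptedException ignored) {
                            // exit quietly
                        }
                    }
                });
                t.setDaemon(true); // let the JVM exit after the error is printed
                t.start();
                count++;
            }
        } catch (OutOfMemoryError e) {
            // Typically "unable to create new native thread" after a few
            // thousand threads, no matter how large -Xmx is.
            System.out.println("Failed after " + count + " threads: " + e);
        }
    }
}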

Comments (3)

深海蓝天 2024-10-21 04:57:45

I encountered this error yesterday. What was happening in my case is that I was creating a lot of HTable instances, which spawned far too many threads when I called put on a record. (I was using a mapper and creating the HTable inside the map function.)

I'd check whether your connection to HBase is being created repeatedly (inside a loop or a map function). If that is happening, restructuring the code to instantiate fewer connections to HBase (I used HTable) may solve the problem, as in the sketch below.
It solved mine.

HTH
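
A minimal sketch of that pattern, assuming the 0.90-era HBase client API and the org.apache.hadoop.mapreduce.Mapper base class; the table name, column family/qualifier, and key/value types are placeholders, not taken from the original answer:

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class PutMapper extends Mapper<LongWritable, Text, NullWritable, NullWritable> {

    private HTable table;

    @Override
    protected void setup(Context context) throws IOException {
        // One HTable per task, created once, instead of one per map() call.
        Configuration conf = HBaseConfiguration.create(context.getConfiguration());
        table = new HTable(conf, "mytable"); // "mytable" is a placeholder
    }

    @Override
    protected void map(LongWritable key, Text value, Context context) throws IOException {
        Put put = new Put(Bytes.toBytes(key.get()));
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes(value.toString()));
        table.put(put); // reuses the task-wide table and its threads
    }

    @Override
    protected void cleanup(Context context) throws IOException {
        table.close(); // flushes buffered puts and releases client resources
    }
}

With the HTable created once in setup() and closed in cleanup(), the client's threads and connections stay bounded per task instead of growing with the number of records processed.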

巨坚强 2024-10-21 04:57:45

I encountered this error when I was using an HTablePool instance to get my HTableInterface instances: after using them, I forgot to call the close() method on each one.

若水微香 2024-10-21 04:57:45

I also encountered the same issue, and as explained by kosii above, the root cause was not closing the HTableInterface instances obtained from the HTablePool after use.

HTableInterface table = tablePool.getTable(tableName);
try {
    // Do the work
    ....
} finally {
    table.close(); // returns the table to the pool even if the work throws
}