Hadoop Java mapper: copyFromLocal heap size error
As part of my Java mapper, I have a command that executes some code on the local node and copies a local output file to the Hadoop fs. Unfortunately I'm getting the following output:
Error occurred during initialization of VM
Could not reserve enough space for object heap
I've tried adjusting mapred.map.child.java.opts to -Xmx512M, but unfortunately no luck.
When I ssh into the node, I can run the -copyFromLocal command without any issues. The output files are also quite small, around 100 KB.
Any help would be greatly appreciated!
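For reference, here is where the heap option mentioned above would normally be set. This is a minimal sketch, assuming a classic MR1-era configuration; the property name mirrors the question, and newer Hadoop releases use `mapreduce.map.java.opts` instead:

```xml
<!-- mapred-site.xml: illustrative value only -->
<property>
  <name>mapred.map.child.java.opts</name>
  <value>-Xmx512M</value>
</property>
```

The same option can usually be passed per job on the command line with `-Dmapred.map.child.java.opts=-Xmx512M` when the job implements `Tool`.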
1 Answer
An infinite loop in the mapper or reducer can cause out-of-memory errors.
I once ran into an OOM when I had a while-loop over the reducer values with iterator.hasNext() as the condition, but was not calling iterator.next() inside the loop.
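The bug described above can be sketched in plain Java. The class and method names here are hypothetical, but the pattern is the same one a Hadoop reducer sees when iterating over its values: `hasNext()` alone never advances the iterator, so the loop spins forever, while calling `next()` on each pass consumes the values and terminates:

```java
import java.util.Arrays;
import java.util.Iterator;

public class ReducerLoop {
    // Buggy pattern (do NOT do this): hasNext() never advances the
    // iterator, so this loop spins forever and any accumulation inside
    // it can exhaust the heap:
    //
    //   while (values.hasNext()) {
    //       sum += 1;            // never calls values.next()!
    //   }

    // Correct pattern: consume the iterator by calling next() each pass.
    static int sum(Iterator<Integer> values) {
        int sum = 0;
        while (values.hasNext()) {
            sum += values.next(); // advances the iterator
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sum(Arrays.asList(1, 2, 3).iterator()));
    }
}
```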