HBase data loss? Missing HDFS append support? Running HMaster without HDFS append support enabled?
I am using HBase. I have installed and have the distributed environment running now.
However, it shows a warning in HMaster's interface page:
"You are currently running the HMaster without HDFS append support enabled. This may result in data loss"
How can I solve this? What if I don't use CDH3's Hadoop? Can someone give me very detailed instructions, please?
Thanks!!!!
1 Answer
As you just found out, you cannot (and should not) use the standard Apache release of Hadoop 0.20.* with HBase, as it is missing append support (HDFS-200). There is no official ASF Hadoop release that has append support. Cloudera's release is the easiest route; can you elaborate on why you cannot use it? It is distributed under the same license as Apache, and if you use the tarball release it is similar to the Apache release, so you don't need the special permissions that installing RPMs would require.
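For reference, once you are on an append-capable Hadoop (for example CDH3, or a build from the branch-0.20-append branch), the append feature itself is switched on in hdfs-site.xml. This is only a minimal sketch, assuming that kind of Hadoop and the default config locations:

    <!-- hdfs-site.xml on every HDFS node, and mirrored in HBase's conf/ -->
    <property>
      <name>dfs.support.append</name>
      <value>true</value>
    </property>

Restart HDFS and then HBase afterwards so the daemons pick up the change; the HMaster warning should go away once append support is detected.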
The other choices I am aware of are rolling your own Hadoop from the hadoop-append branch (not fun) and using MapR, which I have no first-hand experience with.
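If you do decide to build from the append branch, the rough shape of it is below. This is only a sketch: the exact ant targets and the revision you end up with depend on the state of the branch, and it assumes svn, ant and a JDK are already installed.

    # check out the 0.20 append branch and build the core jar
    svn checkout http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-append hadoop-append
    cd hadoop-append
    ant jar    # target names may vary; 'tar' builds a full distribution tarball

You then deploy the resulting build (or just its core jar) to every node in place of the stock 0.20 release.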
For a while on the HBase mailing lists, some people have had luck replacing the hadoop jar in their Hadoop install with the hadoop jar that ships with HBase. That approach does seem fraught with risk, and not everyone is happy with it.
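If someone does want to try that jar swap anyway, the mechanics reported on the lists are roughly the following. The paths and jar names here are examples only (they differ between versions), it has to be done on every node in the cluster, and the daemons need a restart afterwards:

    # back up the stock Hadoop core jar and drop in the one shipped under HBase's lib/
    cd $HADOOP_HOME
    mv hadoop-*core*.jar /tmp/                     # keep the original around
    cp $HBASE_HOME/lib/hadoop-*core*.jar .
    # restart HDFS first, then HBase, so every daemon runs the same jar

The point of the exercise is that HBase and HDFS end up running the exact same hadoop jar; a version mismatch between the two is where most of the risk comes from.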