HBase error: Failed setting up proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface


I am currently working with HDFS and HBase. Hadoop and HBase are properly installed on one machine, and my application runs perfectly when it is hosted on that same machine.

But when it is hosted on another machine, the first request to HBase fails with:

org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet [sitepulsewebsite] in context with path [/SitePulseWeb] threw exception [Request processing failed; nested exception is javax.jdo.JDODataStoreException
NestedThrowables:org.apache.hadoop.hbase.MasterNotRunningException: localhost:60000] with root cause
org.apache.hadoop.hbase.MasterNotRunningException: localhost:60000

And on the second request I get this exception:

org.apache.catalina.core.StandardWrapperValve invoke
SEVERE: Servlet.service() for servlet [sitepulsewebsite] in context with path [/SitePulseWeb] threw exception [Request processing failed; nested exception is javax.jdo.JDODataStoreException: Failed setting up proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to localhost/127.0.0.1:60020 after attempts=1
NestedThrowables: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed setting up proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to localhost/127.0.0.1:60020 after attempts=1] with root cause
java.net.ConnectException: Connection refused

My hbase-site.xml reads as follows:

<configuration>
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:54310/hbase</value>
    <description>
        The directory shared by region servers. Should be
        fully-qualified to
        include the filesystem to use. E.g:
        hdfs://NAMENODE_SERVER:PORT/HBASE_ROOTDIR

    </description>

</property>

<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
    <description>The mode the cluster will be in. Possible values are
        false: standalone and pseudo-distributed setups with managed
        Zookeeper
        true: fully-distributed with unmanaged Zookeeper Quorum (see
        hbase-env.sh)
    </description>
</property>
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>master</value>
    <description>Comma separated list of servers in the ZooKeeper Quorum.
        If HBASE_MANAGES_ZK is set in hbase-env.sh this is the list of
        servers which we will start/stop ZooKeeper on.
    </description>
</property>
<property>
    <name>hbase.master</name>
    <value>master:60010</value>
</property>
<property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
</property></configuration>
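(As an aside: the hbase.master value above points at port 60010, which in HBase releases of this era is the master's web UI port; the master RPC port defaults to 60000, which is the port the MasterNotRunningException above reports. If hbase.master is set explicitly, it would normally name the RPC port, e.g.:

```xml
<property>
    <name>hbase.master</name>
    <value>master:60000</value>
</property>
```
)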

UPDATED LOGS

Looking into the DEBUG-level logs created by my Java application, I found the following:

2012-01-12 17:12:13,328 DEBUG Thread-1320 org.apache.hadoop.ipc.HBaseClient - IPC Client (47) connection to localhost/127.0.0.1:60020 from an unknown user: closed
2012-01-12 17:12:13,328 INFO Thread-1320 org.apache.hadoop.ipc.HbaseRPC - Server at localhost/127.0.0.1:60020 could not be reached after 1 tries, giving up.
2012-01-12 17:12:13,328 DEBUG Thread-1320 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation - locateRegionInMeta parentTable=-ROOT-, metaLocation=address: localhost:60020, regioninfo: -ROOT-,,0.70236052, attempt=0 of 10 failed; retrying after sleep of 1000 because: Failed setting up proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to localhost/127.0.0.1:60020 after attempts=1
2012-01-12 17:12:13,328 DEBUG Thread-1320 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation - Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@9d1e83; hsa=localhost:60020
2012-01-12 17:12:13,736 DEBUG Thread-1268 org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation - Lookedup root region location, connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@9d1e83; hsa=localhost:60020
2012-01-12 17:12:13,736 DEBUG Thread-1268 org.apache.hadoop.ipc.HBaseClient - Connecting to localhost/127.0.0.1:60020
2012-01-12 17:12:13,737 DEBUG Thread-1268 org.apache.hadoop.ipc.HBaseClient - closing ipc connection to localhost/127.0.0.1:60020: Connection refused
java.net.ConnectException: Connection refused

When the mapping in the /etc/hosts file was changed from

127.0.0.1 localhost

to

<my_server_IP> localhost

my application worked perfectly fine. Hence I need some way to tell the application to connect to the desired hostname rather than localhost.

I have tried debugging it, without any success.
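One way to see what the client JVM actually resolves, using only the JDK (a diagnostic sketch I put together; the class name is my own, not part of HBase):

```java
import java.net.InetAddress;

// Print what this JVM resolves "localhost" and the machine's own hostname to.
// If the machine's hostname maps to a loopback address, the region server
// registers itself as "localhost" in -ROOT-/ZooKeeper, and remote clients
// then get "Connection refused", matching the logs above.
public class LocalhostCheck {
    public static void main(String[] args) throws Exception {
        InetAddress lo = InetAddress.getByName("localhost");
        System.out.println("localhost -> " + lo.getHostAddress());

        InetAddress self = InetAddress.getLocalHost();
        System.out.println(self.getHostName() + " -> " + self.getHostAddress()
                + (self.isLoopbackAddress() ? "  (loopback: remote clients will fail)" : ""));
    }
}
```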

够运 2025-01-02 01:49:17

I had something like this in /etc/hosts:

127.0.0.1   localhost
127.0.1.1   <hostname>

I changed it so that <hostname> resolves to 127.0.0.1, and that seemed to solve the problem:

127.0.0.1   localhost <hostname>

我的奇迹 2025-01-02 01:49:17

Many thanks to @Robert J Berger for leading me to the answer to the same issue. I wasn't having issue #1, but I was having issue #2, with the server reporting:

Failed setting up proxy interface org.apache.hadoop.hbase.ipc.HRegionInterface to localhost/127.0.0.1:60020 after attempts=1

However, his answer focuses on the 'master' side, so I thought I'd expand on it with my own answer. My DNS (used by all machines in the cluster) resolves 'master', so the client's connection to the server was not the issue (it worked regardless of 'master' being only a hostname, not an FQDN). [I agree with Robert that avoiding /etc/hosts modifications is ultimately the most maintainable solution.]

My issue was on the server side. HBase uses the machine's hostname to resolve the IP to bind internal communications to, and it turns out I had this set up properly on the master but not on the nodes.

It was due to how the /etc/hosts file was being built on the regionserver nodes as they were being provisioned. The HBase server errors disappeared when I changed /etc/hosts on each node from:

127.0.0.1    node-hostname

to:

<actual ip>  node-hostname

Now the HBase server sees the nodes properly and can build proxies to each.

权谋诡计 2025-01-02 01:49:17

Execute the following command:

sudo hostname thehostnameyouwanttogiveformachine

Then edit the /etc/hosts file and put:

actual-ip thehostnameyouwanttogiveformachine

Restart the machine and check again.

一身软味 2025-01-02 01:49:16

I don't know if this is your problem, but it is generally a problem to use localhost if you are not accessing everything from the same host.

So don't use localhost!

And in general, don't change the definition of localhost. localhost is 127.0.0.1 by definition.

You define hbase.rootdir as hdfs://master:54310/hbase and hbase.zookeeper.quorum as master.

What is master? It really should be the fully qualified domain name of the main Ethernet interface of your host. The reverse DNS of that interface's IP address should resolve to the same FQDN that you fill into these fields. (Or just use the raw IP address if you can't control the reverse DNS.)

Make sure your HDFS configs also use the same FQDNs or IP addresses, or synchronized /etc/hosts files. Synchronized /etc/hosts files should ensure that forward and reverse DNS agree, as long as all the hosts (all the HDFS and HBase machines, and your clients) use the same /etc/hosts and no OS setting overrides /etc/hosts. In general, I don't like to do anything with /etc/hosts. It will eventually bite you.

Your remote client should then access your HBase master via the same FQDN or IP address.

I have found that this kind of DNS issue can cause quite a bit of confusion.

If you need a reality check, just use IP addresses everywhere until you make it work. Then experiment with fully qualified domain names or synchronized /etc/hosts files.
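The reality check above (forward and reverse DNS agreeing) can be sketched with the JDK alone; this is an illustrative snippet of mine, not from the original answer:

```java
import java.net.InetAddress;

// Forward/reverse DNS round-trip check. Pass the FQDN you configured
// (e.g. "master"); with no argument it checks "localhost".
public class DnsRoundTrip {
    public static void main(String[] args) throws Exception {
        String name = args.length > 0 ? args[0] : "localhost";
        InetAddress addr = InetAddress.getByName(name);  // forward lookup
        String back = addr.getCanonicalHostName();       // reverse lookup
        System.out.println(name + " -> " + addr.getHostAddress() + " -> " + back);
        // For HBase to hand out usable addresses, the round trip should come
        // back to the name you configured -- and certainly not to "localhost"
        // when you queried a real FQDN.
    }
}
```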
