What does an EOF exception in a Hadoop namenode connection from hbase/filesystem mean?

Posted 2024-12-12 10:52:06

This is both a general question about Java EOF exceptions and a question about Hadoop's EOF exception as it relates to jar interoperability. Comments and answers on either topic are acceptable.

Background

I've noticed some threads discussing a cryptic exception that is ultimately thrown from a "readInt" method. The exception seems to have a generic meaning independent of Hadoop, but in this case it is ultimately caused by interoperability between Hadoop jars.

In my case, I get it when I try to create a new Hadoop FileSystem object in Java.
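For reference, a minimal sketch of the code that triggers it (assuming the 0.20-era API implied by the stack trace below; the address is the one from the trace, and the fs.default.name key and class layout are my assumptions, not the exact original):

    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class HadoopRemote {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Point the client at the remote namenode's IPC endpoint.
            conf.set("fs.default.name", "hdfs://10.0.1.37:50070");
            // FileSystem.get() opens the RPC connection to the namenode;
            // the EOFException in the trace below surfaces inside this call.
            FileSystem fs = FileSystem.get(URI.create("hdfs://10.0.1.37:50070"), conf);
            System.out.println("Connected to: " + fs.getUri());
        }
    }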

Question

My question is: what is happening, and why does reading an integer throw an EOF exception? What "file" is this EOF exception referring to, and why would such an exception be thrown if two jars are not capable of interoperating?

Secondarily, I would also like to know how to fix this error so I can remotely connect to and read from/write to Hadoop's filesystem using the hdfs protocol with the Java API.

java.io.IOException: Call to /10.0.1.37:50070 failed on local exception: java.io.EOFException
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1139)
    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy0.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:111)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:213)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:180)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1514)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1548)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1530)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:228)
    at sb.HadoopRemote.main(HadoopRemote.java:35)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:375)
    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:819)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:720)

Comments (2)

欲拥i 2024-12-19 10:52:06

Regarding Hadoop: I fixed the error! You need to make sure that core-site.xml binds the service to 0.0.0.0 instead of 127.0.0.1 (localhost).

If you get the EOF exception, it means the port is not externally reachable at that IP, so there is no data for the Hadoop client/server IPC to read.
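For anyone hitting the same thing, a sketch of the relevant core-site.xml entry on the namenode host; fs.default.name is the pre-2.x key (later renamed fs.defaultFS), and the port shown is a placeholder:

    <!-- core-site.xml on the namenode host -->
    <property>
      <name>fs.default.name</name>
      <!-- Bind to an externally reachable address (or 0.0.0.0 for all
           interfaces), not 127.0.0.1. 8020 and 9000 are common IPC ports;
           50070 is the namenode's default web UI port, which does not
           speak the client IPC protocol. -->
      <value>hdfs://0.0.0.0:8020</value>
    </property>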

故人爱我别走 2024-12-19 10:52:06

EOFException on a socket means there's no more data and the peer has closed the connection.
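A self-contained sketch of the generic Java behavior, assuming nothing beyond the JDK: DataInputStream.readInt() needs four bytes and throws EOFException when the stream is exhausted first, which is exactly what happens when the peer closes the socket before sending a full response:

    import java.io.ByteArrayInputStream;
    import java.io.DataInputStream;
    import java.io.EOFException;

    public class EofDemo {
        public static void main(String[] args) throws Exception {
            byte[] partial = {0x00, 0x01}; // fewer than the 4 bytes an int needs
            try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(partial))) {
                in.readInt(); // stream ends mid-read
            } catch (EOFException e) {
                System.out.println("EOFException, as expected: " + e);
            }
        }
    }

So the "file" in the exception name is just the stream abstraction; here it is the TCP connection, not a file on disk.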
