HBase client scanner hangs
I have been using HBase for a few months and have loaded an HBase table with more than 6 GB of data. When I try to scan the rows using the Java client, it hangs and reports the following error:
Could not seek StoreFileScanner[HFileScanner for reader reader=hdfs
However, if I log in to the shell and scan, it works perfectly, and the Java client scanner also works fine for HBase tables holding a small amount of data.
Is there any workaround for this?
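For context, a plain client-side full-table scan of the kind described above typically looks something like the sketch below, using the older (pre-1.0) HBase client API that matches the era of this question; the table name "mytable" and column family "cf" are placeholders, not details from the post:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class FullTableScan {
    public static void main(String[] args) throws IOException {
        // Loads hbase-site.xml from the classpath.
        Configuration conf = HBaseConfiguration.create();

        // "mytable" and "cf" are illustrative placeholders.
        HTable table = new HTable(conf, "mytable");
        Scan scan = new Scan();
        scan.addFamily(Bytes.toBytes("cf"));

        ResultScanner scanner = table.getScanner(scan);
        try {
            // Iterating over millions of rows this way, one RPC per row by default,
            // is where large tables tend to time out or appear to hang.
            for (Result row : scanner) {
                System.out.println(Bytes.toString(row.getRow()));
            }
        } finally {
            scanner.close();
            table.close();
        }
    }
}
```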
1 Answer
For large data sets you can write MapReduce code; simple Java programs are not very effective when it comes to big data. You can also look into a Pig script to achieve this.
Check these out for further help (a small MapReduce sketch follows the links below):
http://sujee.net/tech/articles/hadoop/hbase-map-reduce-freq-counter/
http://wiki.apache.org/hadoop/Hbase/MapReduce
http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/mapreduce/package-summary.html
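As a starting point, here is a minimal sketch of a table scan driven by the org.apache.hadoop.hbase.mapreduce package, so the scan runs on the region servers instead of a single client; the table name "mytable" and the row-counting mapper are illustrative assumptions, not part of the original answer:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

public class ScanTableJob {

    // Mapper that receives one row at a time directly from the region servers.
    static class RowCountMapper extends TableMapper<Text, IntWritable> {
        @Override
        protected void map(ImmutableBytesWritable rowKey, Result row, Context context)
                throws IOException, InterruptedException {
            // Just counts rows; replace with whatever per-row processing you need.
            context.getCounter("scan", "rows").increment(1);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = new Job(conf, "scan-mytable");
        job.setJarByClass(ScanTableJob.class);

        Scan scan = new Scan();
        scan.setCaching(500);        // fetch 500 rows per RPC instead of the default 1
        scan.setCacheBlocks(false);  // recommended for full-table MapReduce scans

        // "mytable" is a placeholder for the real table name.
        TableMapReduceUtil.initTableMapperJob(
                "mytable", scan, RowCountMapper.class,
                Text.class, IntWritable.class, job);

        job.setNumReduceTasks(0);
        job.setOutputFormatClass(NullOutputFormat.class);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```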
Alternatively, you can also try Pig scripts for your MapReduce programs.
http://pig.apache.org/docs/r0.9.1/api/org/apache/pig/backend/hadoop/hbase/HBaseTableInputFormat.html
One more option is to increase the HBase timeout properties and try again (see the sketch after the link below). For the various HBase configuration settings, you can refer to:
http://hbase.apache.org/docs/r0.20.6/hbase-conf.html
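As an illustration, the sketch below raises the scanner-related timeouts programmatically and tunes the Scan itself. The exact property names differ between releases (older versions use hbase.regionserver.lease.period, newer clients use hbase.client.scanner.timeout.period), and the 5-minute values are assumptions to adjust for your cluster:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Scan;

public class ScanTuning {

    // Builds a client configuration with longer scanner/RPC timeouts.
    public static Configuration tunedConfiguration() {
        Configuration conf = HBaseConfiguration.create();

        // Property names are version-dependent; these are the older names.
        conf.setLong("hbase.regionserver.lease.period", 300000); // 5 minutes
        conf.setLong("hbase.rpc.timeout", 300000);               // 5 minutes

        return conf;
    }

    // Builds a Scan that makes fewer, larger RPCs for big tables.
    public static Scan tunedScan() {
        Scan scan = new Scan();
        // Fetch 500 rows per next() call instead of one at a time.
        scan.setCaching(500);
        // Avoid polluting the block cache on a one-off full-table scan.
        scan.setCacheBlocks(false);
        return scan;
    }
}
```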
But when it comes to large data, MapReduce code is always better, and you can also look up optimization guidelines and best practices for HBase.