With Spark-Phoenix, I don't want to pull the entire dataset from HBase. Is there a way to fetch only the data matching my condition from the table?

Posted 2025-02-12 08:06:58

It is too slow to pull the whole table from HBase; I only need to process one hour of data.

So I want to know whether I can specify the SQL in the configuration and pull just one hour of data, the way Spark's JDBC data source lets you supply a query, or whether the connector supports predicate push-down so that I only need to write the Spark SQL?

I am using phoenix-4.14.1-HBase-1.3.
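For reference, this is a minimal sketch of what I am trying to do with the phoenix-spark data source; the table name EVENTS, the column TS, and the ZooKeeper URL are all placeholders, and it assumes the connector pushes the range predicate down:

import java.sql.Timestamp

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder().appName("phoenix-one-hour").getOrCreate()

// Load the Phoenix table through the phoenix-spark data source.
val df = spark.sqlContext.read
  .format("org.apache.phoenix.spark")
  .option("table", "EVENTS")
  .option("zkUrl", "zk-host:2181")
  .load()

// A plain range predicate on a column becomes Spark source Filters
// (GreaterThanOrEqual / LessThan). If those are pushed down to Phoenix,
// only about one hour of data is scanned in HBase instead of the whole table.
val oneHour = df.filter(
  col("TS") >= Timestamp.valueOf("2019-01-01 00:00:00") &&
  col("TS") <  Timestamp.valueOf("2019-01-01 01:00:00"))

oneHour.show()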


Comments (1)

亢潮 2025-02-19 08:06:58

OK, I found the answer. Phoenix does support push-down. My mistake was that I used substr() at first; it should be replaced by startsWith or LIKE.

The evidence is shown below.

/*
  This is the buildScan() implementing Spark's PrunedFilteredScan.
  Spark SQL queries with columns or predicates specified will be pushed down
  to us here, and we can pass that on to Phoenix. According to the docs, this
  is an optimization, and the filtering/pruning will be re-evaluated again,
  but this prevents having to load the whole table into Spark first.
*/
override def buildScan(requiredColumns: Array[String], filters: Array[Filter]): RDD[Row] = {
  new PhoenixRDD(
    sqlContext.sparkContext,
    tableName,
    requiredColumns,
    Some(buildFilter(filters)),
    Some(zkUrl),
    new Configuration(),
    dateAsTimestamp
  ).toDataFrame(sqlContext).rdd
}
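To illustrate the substr vs. startsWith difference (column name ROWKEY is hypothetical, and df is loaded through the phoenix-spark data source as in the sketch above): a prefix predicate written with startsWith compiles to a Spark StringStartsWith source filter, which buildScan() receives and can translate for Phoenix, while a function applied to the column cannot be expressed as a source filter at all, so nothing reaches buildScan() and the whole table is pulled into Spark first.

import org.apache.spark.sql.functions.{col, substring}

// Pushed down: startsWith becomes a StringStartsWith source Filter that
// buildScan() receives and can translate into Phoenix SQL (a LIKE 'prefix%').
val pushed = df.filter(col("ROWKEY").startsWith("2019010100"))

// NOT pushed down: a function applied to the column cannot be represented
// as a Spark source Filter, so no predicate reaches buildScan(); the full
// table is scanned in HBase and then filtered inside Spark.
val notPushed = df.filter(substring(col("ROWKEY"), 1, 10) === "2019010100")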
