Hive on MapR: query fails with the Tez engine but runs fine on MR

Posted on 2025-01-10 04:08:09

The query fails on Tez but runs fine on MR. We have tried many different permutations and combinations of set ... parameters but cannot get this query to run successfully on Tez. On MR the query completes in about 20 minutes, but with the Tez engine it consumes a huge amount of memory on the cluster: while the query runs, memory usage keeps climbing, reaches 100% on many nodes, and the query eventually fails with a "container exited on lost node" message.

tmp_track_tempo_category_link_rc has about 600 million rows and is under 10 GB on HDFS.


set hive.execution.engine=tez;
DROP TEMPORARY FUNCTION row_rank;
CREATE TEMPORARY FUNCTION row_rank AS 'com.mycompany.hadoop.hive.udf.UDFRank';
SET hive.map.aggr=true;
SET hive.exec.parallel=true;
SET hive.merge.mapfiles=false;

--set hive.mapjoin.hybridgrace.hashtable=false ;
SET mapred.max.split.size=100000000;
SET mapred.reduce.tasks=32;

--set mapreduce.map.memory.mb=4096;
--set mapreduce.reduce.memory.mb=4096;
--set mapreduce.map.java.opts=-Xmx3686m;
--set mapreduce.reduce.java.opts=-Xmx3686m;
set hive.tez.container.size=6144;
set hive.tez.java.opts=-Xmx4096m ;
--set tez.am.task.max.failed.attempts=20 ;
--set tez.am.max.app.attempts=10;
--set tez.am.node-unhealthy-reschedule-tasks=true;
set tez.task.am.heartbeat.interval-ms.max=500 ;
--set tez.task.am.heartbeat.counter.interval-ms.max=8000;
--set tez.task.resource.memory.mb=2048;

set tez.am.resource.memory.mb=6144;
set tez.am.java.opts=-Xmx4096m;

set hive.vectorized.execution.enabled=true;
set hive.vectorized.execution.reduce.enabled=true;
set hive.exec.parallel=true;
set hive.auto.convert.join=true;
set hive.cbo.enable=true;
set hive.compute.query.using.stats=true;
set hive.stats.fetch.column.stats=true;
set hive.stats.fetch.partition.stats=true;
set hive.exec.compress.output=true;
set hive.exec.compress.intermediate=true;
--set hive.tez.container.size=1024;
--set hive.tez.java.opts=-Xmx512m;
set tez.runtime.io.sort.mb=2048;
set tez.grouping.min-size=16777216;
set tez.grouping.max-size=1073741824; 
set tez.grouping.split-count=8;
set tez.am.resource.memory.mb=1024;
set hive.exec.reducers.bytes.per.reducer=10240000;
set hive.exec.reducers.max=100;
set hive.tez.auto.reducer.parallelism = true;
--set tez.runtime.unordered.output.buffer.size-mb=1024;
set hive.exec.dynamic.partition=true;
set hive.exec.dynamic.partition.mode=nonstrict;
set hive.optimize.skewjoin=true ;
set hive.skewjoin.key=5000;

USE mydb;

SELECT track_id,
               (category_id + 100000) category_id,
               category_weight,
               CAST (1 AS TINYINT) section_type_id
          FROM (SELECT tmp4.*, row_rank (track_id) tempo_rank
                  FROM (SELECT tempo.track_id,
                               tempo.category_id,
                               tempo.category_weight
                          FROM tmp_track_tempo_category_link_rc tempo
                               JOIN
                               (SELECT tmp2.*
                                  FROM (SELECT tmp1.track_id,
                                               tmp1.apple_track_id,
                                               row_rank (track_id) RANK
                                          FROM (  SELECT b.track_id,
                                                         b.apple_track_id,
                                                         SUM (c.cnt) cnt,
                                                         SUM (
                                                            b.category_weight)
                                                            mx
                                                    FROM (  SELECT track_id,
                                                                   category_id,
                                                                   COUNT (
                                                                      category_id)
                                                                      cnt
                                                              FROM tmp_track_tempo_category_link_rc a
                                                             WHERE section_type_id =
                                                                      1
                                                          GROUP BY track_id,
                                                                   category_id)
                                                         c
                                                         JOIN
                                                         tmp_track_tempo_category_link_rc b
                                                            ON (    b.track_id =
                                                                       c.track_id
                                                                AND b.category_id =
                                                                       c.category_id
                                                                AND b.section_type_id =
                                                                       1)
                                                GROUP BY b.track_id,
                                                         b.apple_track_id
                                                DISTRIBUTE BY track_id SORT BY track_id, cnt DESC, mx DESC, apple_track_id
                                               ) tmp1) tmp2
                                 WHERE RANK = 1) tmp3
                                  ON (    tmp3.track_id = tempo.track_id
                                      AND tmp3.apple_track_id =
                                             tempo.apple_track_id
                                      AND tempo.section_type_id = 1)
                                 DISTRIBUTE BY track_id SORT BY track_id, category_weight DESC, category_id
                       ) tmp4) tmp5
         WHERE tempo_rank <= 3 ;
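
A note on the ranking idiom: row_rank over a DISTRIBUTE BY / SORT BY subquery is the pre-windowing Hive pattern for a per-key row number. Assuming the custom UDFRank simply numbers consecutive rows that share the same track_id in the incoming sort order, the tmp1/tmp2 step is roughly equivalent to the built-in window function below (a sketch under that assumption, not a tested rewrite of the production query):

-- Hypothetical equivalent of tmp1/tmp2, assuming UDFRank acts as a per-track_id row number
SELECT track_id, apple_track_id
FROM (
  SELECT track_id,
         apple_track_id,
         ROW_NUMBER() OVER (PARTITION BY track_id
                            ORDER BY cnt DESC, mx DESC, apple_track_id) AS rnk
  FROM (
    -- same aggregation as tmp1, without the DISTRIBUTE BY / SORT BY
    SELECT b.track_id,
           b.apple_track_id,
           SUM(c.cnt)             AS cnt,
           SUM(b.category_weight) AS mx
    FROM (SELECT track_id, category_id, COUNT(category_id) AS cnt
            FROM tmp_track_tempo_category_link_rc
           WHERE section_type_id = 1
           GROUP BY track_id, category_id) c
    JOIN tmp_track_tempo_category_link_rc b
      ON b.track_id = c.track_id
     AND b.category_id = c.category_id
     AND b.section_type_id = 1
    GROUP BY b.track_id, b.apple_track_id
  ) agg
) ranked
WHERE rnk = 1;

The outer tempo_rank <= 3 filter over tmp4 could be rewritten the same way.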


Some of these set parameters improved the memory situation, but the query now fails with:

Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected exception from MapJoinOperator : null

at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:211)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:168)
    at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73)
    at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1669)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61)
    at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37)
    at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: Hive Runtime Error while closing operators
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.close(MapRecordProcessor.java:488)
    at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:199)
    ... 14 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected exception from MapJoinOperator : null
    at org.apache.hadoop.hive.ql.exec.MapJoinOperator.closeOp(MapJoinOperator.java:546)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:697)
    at org.apache.hadoop.hive.ql.exec.Operator.close(Operator.java:711)
    at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.close(MapRecordProcessor.java:477)
    ... 15 more
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Unexpected exception from MapJoinOperator : null
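
For reference, the knobs most directly tied to a MapJoinOperator failure under Tez are the auto map-join thresholds relative to the container heap. A hedged sketch of that relationship, with illustrative values only (these are standard Hive properties, not a confirmed fix for this query):

-- Illustrative sizing only, not a verified fix.
-- The small-table hash tables built for a map join must fit inside the Tez
-- container heap (hive.tez.java.opts, 4 GB above), so the conversion
-- threshold should stay well below that heap.
set hive.auto.convert.join=true;
set hive.auto.convert.join.noconditionaltask=true;
set hive.auto.convert.join.noconditionaltask.size=1000000000;
-- The hybrid grace hash table (commented out in the script above) controls
-- whether an oversized map-join hash table is allowed to spill to disk:
-- set hive.mapjoin.hybridgrace.hashtable=true;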


Comments (1)

我的影子我的梦 2025-01-17 04:08:09


Last I remember looking at Tez, it had a pretty simple-minded set of optimizations for sort and join exchanges. It would be quite plausible that your (relatively large) query ran into those limits.

MapReduce doesn't have these problems because every step is completely written to disk so there aren't any in-memory joins or sorts. That leads to really slow performance sometimes, but it also gives some simple-minded guarantees and robustness.

Have you asked on the appropriate Apache mailing list? I know that Cloudera has pretty much abandoned the Hadoop community, but there are likely still some fairly astute people around.
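
Regarding the in-memory join point above: if the Tez map join is what blows up memory here, one low-effort experiment (a sketch using standard Hive properties, not verified on the asker's cluster) is to force shuffle joins so that Tez, like MR, keeps the join out of memory:

-- Sketch: make Tez fall back to shuffle (sort-merge) joins, which spill to
-- disk instead of building in-memory hash tables, closer to MR's behaviour.
set hive.execution.engine=tez;
set hive.auto.convert.join=false;
set hive.auto.convert.join.noconditionaltask=false;

If the query then completes (presumably more slowly), that points at the map-join hash tables rather than at Tez itself.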
