How to set the number of MapReduce tasks to 1 in Hive

Posted 2024-12-22 18:57:22

I tried the following in Hive:

set hive.exec.reducers.max = 1;
set mapred.reduce.tasks = 1;

from flat_json
insert overwrite table aggr_pgm_measure PARTITION(dt='${START_TIME}')
reduce  log_time,
 req_id, ac_id, client_key, rulename, categoryname, bsid, visitorid, visitorgroupid, visitortargetid, targetpopulationid, windowsessionid, eventseq, event_code, eventstarttime
 using '${SCRIPT_LOC}/aggregator.pl' as 
 metric_id, metric_value, aggr_type, rule_name, category_name; 

Despite setting both the maximum number of reducers and the number of reduce tasks to 1, I see 2 MapReduce jobs being generated. Please see below:

 > set hive.exec.reducers.max = 1;
hive>  set mapred.reduce.tasks = 1;
hive>
    > from flat_json
    > insert overwrite table aggr_pgm_measure PARTITION(dt='${START_TIME}')
    > reduce  log_time,
    >  req_id, ac_id, client_key, rulename, categoryname, bsid, visitorid, visitorgroupid, visitortargetid, targetpopulationid, windowsessionid, eventseq, event_code, eventstarttime
    >  using '${SCRIPT_LOC}/aggregator.pl' as
    >  metric_id, metric_value, aggr_type, rule_name, category_name;
converting to local s3://dsp-emr-test/anurag/dsp-test/60mins/script/aggregator.pl
Added resource: /mnt/var/lib/hive_07_1/downloaded_resources/aggregator.pl
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201112270825_0009, Tracking URL = http://ip-10-85-66-9.ec2.internal:9100/jobdetails.jsp?jobid=job_201112270825_0009
Kill Command = /home/hadoop/.versions/0.20.205/libexec/../bin/hadoop job  -Dmapred.job.tracker=10.85.66.9:9001 -kill job_201112270825_0009
2011-12-27 10:30:03,542 Stage-1 map = 0%,  reduce = 0%

Comments (1)

-小熊_ 2024-12-29 18:57:22

The two things you think are related are not. You are setting the number of reduce tasks, not the number of MapReduce jobs. Hive converts your query into several MapReduce jobs, depending on the nature of the work that needs to be done, and each MapReduce job consists of multiple map tasks and reduce tasks.
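
As an illustration of the scope of these settings, here is a commented sketch (the parameter names are the ones from your session; the comments are my reading of their effect):

-- Both settings are scoped to a single MapReduce job, not to the query as a whole:
set hive.exec.reducers.max = 1;  -- upper bound on reducers Hive may allocate to any one job
set mapred.reduce.tasks = 1;     -- explicit reducer count for each job that has a reduce phase
-- Neither setting changes how many jobs (stages) Hive compiles the query into.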

What you are setting is the maximum number of reduce tasks, which means each MapReduce job is constrained in how many reduce tasks it can fire up. You still need to run two jobs, though. There is nothing you can do about the number of MapReduce jobs Hive generates; it has to run every stage in order to execute your query.
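
If you want to see where the second job comes from, a minimal sketch, assuming the same flat_json table and the ${START_TIME}/${SCRIPT_LOC} substitutions from your query, is to prefix the statement with EXPLAIN and read the stage plan Hive prints:

-- Ask Hive for the compiled stage plan instead of running the query.
EXPLAIN
from flat_json
insert overwrite table aggr_pgm_measure PARTITION(dt='${START_TIME}')
reduce log_time, req_id, ac_id, client_key, rulename, categoryname, bsid, visitorid, visitorgroupid, visitortargetid, targetpopulationid, windowsessionid, eventseq, event_code, eventstarttime
using '${SCRIPT_LOC}/aggregator.pl'
as metric_id, metric_value, aggr_type, rule_name, category_name;

The STAGE DEPENDENCIES section of the output lists every stage Hive compiled; each map-reduce stage corresponds to one of the jobs reported in the console output, so you can see what the second job is actually doing.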
