Hadoop Hive query error, please help! Thanks!

Posted 2021-11-17 22:10:53 · 936 views · 1 comment

I only started working with Hadoop recently and have run into a problem I can't figure out, so I'm hoping to get some advice here!

The query I run in Hive:

select * from (
  select tc_iam, tc_back, tc_rlg, tc_calltype, tc_opc, tc_dpc, tc_pcm, tc_cic,
         tc_called, tc_orgcalled, tc_calling, tc_callpro, tc_respond
  from cdr00001 a
  where 1 = 1
    and tc_cityid = 8
    and (tc_calling like '153%')
) t
sort by t.tc_iam;

Console output:

Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 2
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201212272039_0010, Tracking URL = http://NameNode:50030/jobdetails.jsp?jobid=job_201212272039_0010
Kill Command = /xwtech/hadoop-1.0.0/libexec/../bin/hadoop job  -Dmapred.job.tracker=hdfs://NameNode:54311/ -kill job_201212272039_0010
Hadoop job information for Stage-1: number of mappers: 9; number of reducers: 2
2013-01-07 14:56:58,666 Stage-1 map = 0%,  reduce = 0%
2013-01-07 14:57:04,750 Stage-1 map = 11%,  reduce = 0%, Cumulative CPU 2.92 sec
2013-01-07 14:57:05,756 Stage-1 map = 11%,  reduce = 0%, Cumulative CPU 2.92 sec
2013-01-07 14:57:08,783 Stage-1 map = 23%,  reduce = 0%, Cumulative CPU 2.92 sec
2013-01-07 14:57:09,788 Stage-1 map = 60%,  reduce = 0%, Cumulative CPU 8.86 sec
2013-01-07 14:57:10,795 Stage-1 map = 60%,  reduce = 0%, Cumulative CPU 8.86 sec
2013-01-07 14:57:11,799 Stage-1 map = 66%,  reduce = 0%, Cumulative CPU 16.07 sec
2013-01-07 14:57:12,803 Stage-1 map = 79%,  reduce = 0%, Cumulative CPU 40.52 sec
2013-01-07 14:57:13,807 Stage-1 map = 79%,  reduce = 2%, Cumulative CPU 40.52 sec
2013-01-07 14:57:14,812 Stage-1 map = 82%,  reduce = 4%, Cumulative CPU 50.96 sec
2013-01-07 14:57:15,816 Stage-1 map = 89%,  reduce = 4%, Cumulative CPU 61.41 sec
2013-01-07 14:57:16,821 Stage-1 map = 89%,  reduce = 4%, Cumulative CPU 61.41 sec
2013-01-07 14:57:17,827 Stage-1 map = 90%,  reduce = 4%, Cumulative CPU 62.48 sec
2013-01-07 14:57:18,830 Stage-1 map = 94%,  reduce = 4%, Cumulative CPU 63.79 sec
2013-01-07 14:57:20,837 Stage-1 map = 94%,  reduce = 4%, Cumulative CPU 63.79 sec
2013-01-07 14:57:21,842 Stage-1 map = 96%,  reduce = 4%, Cumulative CPU 63.79 sec
2013-01-07 14:57:22,847 Stage-1 map = 96%,  reduce = 11%, Cumulative CPU 63.79 sec
2013-01-07 14:57:23,851 Stage-1 map = 96%,  reduce = 19%, Cumulative CPU 63.79 sec
2013-01-07 14:57:24,854 Stage-1 map = 98%,  reduce = 19%, Cumulative CPU 63.79 sec
2013-01-07 14:57:26,861 Stage-1 map = 98%,  reduce = 19%, Cumulative CPU 63.79 sec
2013-01-07 14:57:27,867 Stage-1 map = 99%,  reduce = 19%, Cumulative CPU 63.79 sec
2013-01-07 14:57:28,873 Stage-1 map = 99%,  reduce = 24%, Cumulative CPU 63.79 sec
2013-01-07 14:57:29,878 Stage-1 map = 99%,  reduce = 30%, Cumulative CPU 63.79 sec
2013-01-07 14:57:30,894 Stage-1 map = 100%,  reduce = 30%, Cumulative CPU 79.82 sec
2013-01-07 14:57:33,909 Stage-1 map = 100%,  reduce = 30%, Cumulative CPU 79.82 sec
2013-01-07 14:57:34,914 Stage-1 map = 100%,  reduce = 30%, Cumulative CPU 88.64 sec
2013-01-07 14:57:39,939 Stage-1 map = 100%,  reduce = 30%, Cumulative CPU 88.64 sec
2013-01-07 14:57:40,944 Stage-1 map = 100%,  reduce = 15%, Cumulative CPU 79.82 sec
2013-01-07 14:57:48,981 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 79.82 sec
2013-01-07 14:57:49,985 Stage-1 map = 100%,  reduce = 4%, Cumulative CPU 79.82 sec
2013-01-07 14:57:50,988 Stage-1 map = 100%,  reduce = 11%, Cumulative CPU 79.82 sec
2013-01-07 14:57:51,991 Stage-1 map = 100%,  reduce = 11%, Cumulative CPU 79.82 sec
2013-01-07 14:57:54,000 Stage-1 map = 100%,  reduce = 20%, Cumulative CPU 79.82 sec
2013-01-07 14:57:55,004 Stage-1 map = 100%,  reduce = 20%, Cumulative CPU 79.82 sec
2013-01-07 14:57:56,007 Stage-1 map = 100%,  reduce = 43%, Cumulative CPU 79.82 sec
2013-01-07 14:57:57,012 Stage-1 map = 100%,  reduce = 67%, Cumulative CPU 79.82 sec
2013-01-07 14:57:58,016 Stage-1 map = 100%,  reduce = 67%, Cumulative CPU 79.82 sec
2013-01-07 14:57:59,021 Stage-1 map = 100%,  reduce = 33%, Cumulative CPU 79.82 sec
2013-01-07 14:58:06,051 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 79.82 sec
2013-01-07 14:58:07,054 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 79.82 sec
2013-01-07 14:58:08,057 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 79.82 sec
2013-01-07 14:58:09,060 Stage-1 map = 100%,  reduce = 7%, Cumulative CPU 79.82 sec
2013-01-07 14:58:10,063 Stage-1 map = 100%,  reduce = 7%, Cumulative CPU 79.82 sec
2013-01-07 14:58:11,067 Stage-1 map = 100%,  reduce = 7%, Cumulative CPU 79.82 sec
2013-01-07 14:58:12,074 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU 86.77 sec
2013-01-07 14:58:13,079 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU 86.77 sec
2013-01-07 14:58:14,083 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU 86.77 sec
2013-01-07 14:58:15,087 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 79.82 sec
2013-01-07 14:58:20,109 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 79.82 sec
2013-01-07 14:58:21,113 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 79.82 sec
2013-01-07 14:58:22,117 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 79.82 sec
2013-01-07 14:58:23,120 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 79.82 sec
2013-01-07 14:58:24,123 Stage-1 map = 100%,  reduce = 15%, Cumulative CPU 79.82 sec
2013-01-07 14:58:25,126 Stage-1 map = 100%,  reduce = 15%, Cumulative CPU 79.82 sec
2013-01-07 14:58:26,131 Stage-1 map = 100%,  reduce = 15%, Cumulative CPU 79.82 sec
2013-01-07 14:58:27,135 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU 79.82 sec
2013-01-07 14:58:28,140 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU 79.82 sec
2013-01-07 14:58:29,144 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU 79.82 sec
2013-01-07 14:58:33,161 Stage-1 map = 100%,  reduce = 17%, Cumulative CPU 79.82 sec
2013-01-07 14:58:34,166 Stage-1 map = 100%,  reduce = 17%, Cumulative CPU 79.82 sec
2013-01-07 14:58:35,171 Stage-1 map = 100%,  reduce = 17%, Cumulative CPU 79.82 sec
2013-01-07 14:58:36,174 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 79.82 sec
MapReduce Total cumulative CPU time: 1 minutes 19 seconds 820 msec
Ended Job = job_201212272039_0010 with errors
Error during job, obtaining debugging information...
Examining task ID: task_201212272039_0010_m_000010 (and more) from job job_201212272039_0010
Examining task ID: task_201212272039_0010_r_000000 (and more) from job job_201212272039_0010
Exception in thread "Thread-27" java.lang.RuntimeException: Error while reading from task log url
        at org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getStackTraces(TaskLogProcessor.java:240)
        at org.apache.hadoop.hive.ql.exec.JobDebugger.showJobFailDebugInfo(JobDebugger.java:227)
        at org.apache.hadoop.hive.ql.exec.JobDebugger.run(JobDebugger.java:92)
        at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Server returned HTTP response code: 400 for URL: http://DataNode2:50060/tasklog?taskid=attempt_201212272039_0010_r_000000_3&start=-8193
        at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1436)
        at java.net.URL.openStream(URL.java:1010)
        at org.apache.hadoop.hive.ql.exec.errors.TaskLogProcessor.getStackTraces(TaskLogProcessor.java:192)
        ... 3 more
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 9  Reduce: 2   Cumulative CPU: 79.82 sec   HDFS Read: 1252462645 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 1 minutes 19 seconds 820 msec

The log of the failed task:

Task Logs: 'attempt_201212272039_0011_r_000000_2'


stdout logs

--------------------------------------------------------------------------------


stderr logs

log4j:WARN No appenders could be found for logger (org.apache.hadoop.hdfs.DFSClient).
log4j:WARN Please initialize the log4j system properly.


--------------------------------------------------------------------------------


syslog logs

2013-01-07 16:20:02,978 INFO org.apache.hadoop.util.NativeCodeLoader: Loaded the native-hadoop library
2013-01-07 16:20:03,039 INFO org.apache.hadoop.mapred.TaskRunner: Creating symlink: /xwtech/HadoopRun/var/taskTracker/distcache/7221898507594610294_-1206148131_337075329/NameNode/xwtech/HadoopRun/tmp/hive-root/hive_2013-01-07_16-18-58_305_7778475650145532510/-mr-10004/471a2b87-4326-492a-b44e-e45662480183 <- /xwtech/HadoopRun/var/taskTracker/root/jobcache/job_201212272039_0011/attempt_201212272039_0011_r_000000_2/work/HIVE_PLAN471a2b87-4326-492a-b44e-e45662480183
2013-01-07 16:20:03,044 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /xwtech/HadoopRun/var/taskTracker/root/jobcache/job_201212272039_0011/jars/job.jar <- /xwtech/HadoopRun/var/taskTracker/root/jobcache/job_201212272039_0011/attempt_201212272039_0011_r_000000_2/work/job.jar
2013-01-07 16:20:03,045 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /xwtech/HadoopRun/var/taskTracker/root/jobcache/job_201212272039_0011/jars/.job.jar.crc <- /xwtech/HadoopRun/var/taskTracker/root/jobcache/job_201212272039_0011/attempt_201212272039_0011_r_000000_2/work/.job.jar.crc
2013-01-07 16:20:03,046 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /xwtech/HadoopRun/var/taskTracker/root/jobcache/job_201212272039_0011/jars/org <- /xwtech/HadoopRun/var/taskTracker/root/jobcache/job_201212272039_0011/attempt_201212272039_0011_r_000000_2/work/org
2013-01-07 16:20:03,047 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /xwtech/HadoopRun/var/taskTracker/root/jobcache/job_201212272039_0011/jars/javaewah <- /xwtech/HadoopRun/var/taskTracker/root/jobcache/job_201212272039_0011/attempt_201212272039_0011_r_000000_2/work/javaewah
2013-01-07 16:20:03,048 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /xwtech/HadoopRun/var/taskTracker/root/jobcache/job_201212272039_0011/jars/META-INF <- /xwtech/HadoopRun/var/taskTracker/root/jobcache/job_201212272039_0011/attempt_201212272039_0011_r_000000_2/work/META-INF
2013-01-07 16:20:03,049 INFO org.apache.hadoop.filecache.TrackerDistributedCacheManager: Creating symlink: /xwtech/HadoopRun/var/taskTracker/root/jobcache/job_201212272039_0011/jars/hive-exec-log4j.properties <- /xwtech/HadoopRun/var/taskTracker/root/jobcache/job_201212272039_0011/attempt_201212272039_0011_r_000000_2/work/hive-exec-log4j.properties
2013-01-07 16:20:03,123 WARN org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Metrics system not started: Cannot locate configuration: tried hadoop-metrics2-reducetask.properties, hadoop-metrics2.properties
2013-01-07 16:20:03,337 INFO org.apache.hadoop.util.ProcessTree: setsid exited with exit code 0
2013-01-07 16:20:03,340 INFO org.apache.hadoop.mapred.Task:  Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@7aa89ce3
2013-01-07 16:20:03,423 INFO org.apache.hadoop.mapred.ReduceTask: ShuffleRamManager: MemoryLimit=130514944, MaxSingleShuffleLimit=32628736
2013-01-07 16:20:03,428 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201212272039_0011_r_000000_2 Thread started: Thread for merging on-disk files
2013-01-07 16:20:03,428 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201212272039_0011_r_000000_2 Thread waiting: Thread for merging on-disk files
2013-01-07 16:20:03,430 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201212272039_0011_r_000000_2 Need another 16 map output(s) where 0 is already in progress
2013-01-07 16:20:03,430 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201212272039_0011_r_000000_2 Thread started: Thread for polling Map Completion Events
2013-01-07 16:20:03,430 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201212272039_0011_r_000000_2 Scheduled 0 outputs (0 slow hosts and0 dup hosts)
2013-01-07 16:20:03,430 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201212272039_0011_r_000000_2 Thread started: Thread for merging in memory files
2013-01-07 16:20:08,430 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201212272039_0011_r_000000_2 Scheduled 8 outputs (0 slow hosts and0 dup hosts)
2013-01-07 16:20:08,881 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201212272039_0011_r_000000_2 Scheduled 4 outputs (0 slow hosts and4 dup hosts)
2013-01-07 16:20:08,908 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201212272039_0011_r_000000_2 Scheduled 1 outputs (0 slow hosts and3 dup hosts)
2013-01-07 16:20:08,926 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201212272039_0011_r_000000_2 Scheduled 1 outputs (0 slow hosts and2 dup hosts)
2013-01-07 16:20:08,945 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201212272039_0011_r_000000_2 Scheduled 1 outputs (0 slow hosts and1 dup hosts)
2013-01-07 16:20:09,262 INFO org.apache.hadoop.mapred.ReduceTask: attempt_201212272039_0011_r_000000_2 Scheduled 1 outputs (0 slow hosts and0 dup hosts)
2013-01-07 16:20:10,443 INFO org.apache.hadoop.mapred.ReduceTask: GetMapEventsThread exiting
2013-01-07 16:20:10,443 INFO org.apache.hadoop.mapred.ReduceTask: getMapsEventsThread joined.
2013-01-07 16:20:10,443 INFO org.apache.hadoop.mapred.ReduceTask: Closed ram manager
2013-01-07 16:20:10,443 INFO org.apache.hadoop.mapred.ReduceTask: Interleaved on-disk merge complete: 0 files left.
2013-01-07 16:20:10,443 INFO org.apache.hadoop.mapred.ReduceTask: In-memory merge complete: 16 files left.
2013-01-07 16:20:10,455 INFO org.apache.hadoop.mapred.Merger: Merging 16 sorted segments
2013-01-07 16:20:10,455 INFO org.apache.hadoop.mapred.Merger: Down to the last merge-pass, with 16 segments left of total size: 11412808 bytes
2013-01-07 16:20:10,703 INFO org.apache.hadoop.mapred.ReduceTask: Merged 16 segments, 11412808 bytes to disk to satisfy reduce memory limit
2013-01-07 16:20:10,703 INFO org.apache.hadoop.mapred.ReduceTask: Merging 1 files, 11412782 bytes from disk
2013-01-07 16:20:10,704 INFO org.apache.hadoop.mapred.ReduceTask: Merging 0 segments, 0 bytes from memory into reduce
2013-01-07 16:20:10,704 INFO org.apache.hadoop.mapred.Merger: Merging 1 sorted segments
2013-01-07 16:20:10,706 INFO org.apache.hadoop.mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 11412778 bytes
2013-01-07 16:20:10,715 INFO ExecReducer: maximum memory = 186449920
2013-01-07 16:20:10,716 INFO ExecReducer: conf classpath = [file:/xwtech/HadoopRun/var/taskTracker/root/jobcache/job_201212272039_0011/jars/classes, file:/xwtech/HadoopRun/var/taskTracker/root/jobcache/job_201212272039_0011/jars/, file:/xwtech/HadoopRun/var/taskTracker/root/jobcache/job_201212272039_0011/attempt_201212272039_0011_r_000000_2/]
2013-01-07 16:20:10,716 INFO ExecReducer: thread classpath = [file:/xwtech/hadoop-config/, file:/usr/java/jdk1.6.0_33/lib/tools.jar, file:/xwtech/hadoop-1.0.0/, file:/xwtech/hadoop-1.0.0/hadoop-core-1.0.0.jar, file:/xwtech/hadoop-1.0.0/lib/asm-3.2.jar, file:/xwtech/hadoop-1.0.0/lib/aspectjrt-1.6.5.jar, file:/xwtech/hadoop-1.0.0/lib/aspectjtools-1.6.5.jar, file:/xwtech/hadoop-1.0.0/lib/commons-beanutils-1.7.0.jar, file:/xwtech/hadoop-1.0.0/lib/commons-beanutils-core-1.8.0.jar, file:/xwtech/hadoop-1.0.0/lib/commons-cli-1.2.jar, file:/xwtech/hadoop-1.0.0/lib/commons-codec-1.4.jar, file:/xwtech/hadoop-1.0.0/lib/commons-collections-3.2.1.jar, file:/xwtech/hadoop-1.0.0/lib/commons-configuration-1.6.jar, file:/xwtech/hadoop-1.0.0/lib/commons-daemon-1.0.1.jar, file:/xwtech/hadoop-1.0.0/lib/commons-digester-1.8.jar, file:/xwtech/hadoop-1.0.0/lib/commons-el-1.0.jar, file:/xwtech/hadoop-1.0.0/lib/commons-httpclient-3.0.1.jar, file:/xwtech/hadoop-1.0.0/lib/commons-lang-2.4.jar, file:/xwtech/hadoop-1.0.0/lib/commons-logging-1.1.1.jar, file:/xwtech/hadoop-1.0.0/lib/commons-logging-api-1.0.4.jar, file:/xwtech/hadoop-1.0.0/lib/commons-math-2.1.jar, file:/xwtech/hadoop-1.0.0/lib/commons-net-1.4.1.jar, file:/xwtech/hadoop-1.0.0/lib/core-3.1.1.jar, file:/xwtech/hadoop-1.0.0/lib/hadoop-capacity-scheduler-1.0.0.jar, file:/xwtech/hadoop-1.0.0/lib/hadoop-fairscheduler-1.0.0.jar, file:/xwtech/hadoop-1.0.0/lib/hadoop-thriftfs-1.0.0.jar, file:/xwtech/hadoop-1.0.0/lib/hsqldb-1.8.0.10.jar, file:/xwtech/hadoop-1.0.0/lib/jackson-core-asl-1.0.1.jar, file:/xwtech/hadoop-1.0.0/lib/jackson-mapper-asl-1.0.1.jar, file:/xwtech/hadoop-1.0.0/lib/jasper-compiler-5.5.12.jar, file:/xwtech/hadoop-1.0.0/lib/jasper-runtime-5.5.12.jar, file:/xwtech/hadoop-1.0.0/lib/jdeb-0.8.jar, file:/xwtech/hadoop-1.0.0/lib/jersey-core-1.8.jar, file:/xwtech/hadoop-1.0.0/lib/jersey-json-1.8.jar, file:/xwtech/hadoop-1.0.0/lib/jersey-server-1.8.jar, file:/xwtech/hadoop-1.0.0/lib/jets3t-0.6.1.jar, file:/xwtech/hadoop-1.0.0/lib/jetty-6.1.26.jar, file:/xwtech/hadoop-1.0.0/lib/jetty-util-6.1.26.jar, file:/xwtech/hadoop-1.0.0/lib/jsch-0.1.42.jar, file:/xwtech/hadoop-1.0.0/lib/junit-4.5.jar, file:/xwtech/hadoop-1.0.0/lib/kfs-0.2.2.jar, file:/xwtech/hadoop-1.0.0/lib/log4j-1.2.15.jar, file:/xwtech/hadoop-1.0.0/lib/mockito-all-1.8.5.jar, file:/xwtech/hadoop-1.0.0/lib/oro-2.0.8.jar, file:/xwtech/hadoop-1.0.0/lib/servlet-api-2.5-20081211.jar, file:/xwtech/hadoop-1.0.0/lib/slf4j-api-1.4.3.jar, file:/xwtech/hadoop-1.0.0/lib/slf4j-log4j12-1.4.3.jar, file:/xwtech/hadoop-1.0.0/lib/xmlenc-0.52.jar, file:/xwtech/hadoop-1.0.0/lib/jsp-2.1/jsp-2.1.jar, file:/xwtech/hadoop-1.0.0/lib/jsp-2.1/jsp-api-2.1.jar, file:/xwtech/HadoopRun/var/taskTracker/root/jobcache/job_201212272039_0011/jars/classes, file:/xwtech/HadoopRun/var/taskTracker/root/jobcache/job_201212272039_0011/jars/, file:/xwtech/HadoopRun/var/taskTracker/root/distcache/-5426283128724111310_841257112_337075900/NameNode/xwtech/HadoopRun/tmp/mapred/staging/root/.staging/job_201212272039_0011/libjars/hive-builtins-0.9.0.jar/, file:/xwtech/HadoopRun/var/taskTracker/root/distcache/115564924351178157_1787424588_337075923/NameNode/xwtech/HadoopRun/tmp/mapred/staging/root/.staging/job_201212272039_0011/libjars/hive-hbase-handler-0.9.0.jar/, file:/xwtech/HadoopRun/var/taskTracker/root/distcache/-8376894729149403917_2069158576_337075963/NameNode/xwtech/HadoopRun/tmp/mapred/staging/root/.staging/job_201212272039_0011/libjars/hbase-0.92.0.jar/, 
file:/xwtech/HadoopRun/var/taskTracker/root/distcache/-4933434979655182018_-681351817_337076534/NameNode/xwtech/HadoopRun/tmp/mapred/staging/root/.staging/job_201212272039_0011/libjars/zookeeper-3.4.2.jar/, file:/xwtech/HadoopRun/var/taskTracker/root/jobcache/job_201212272039_0011/attempt_201212272039_0011_r_000000_2/work/]
2013-01-07 16:20:10,735 WARN org.apache.hadoop.hive.conf.HiveConf: hive-site.xml not found on CLASSPATH
2013-01-07 16:20:11,087 INFO ExecReducer:
<OP>Id =9
  <Children>
    <LIM>Id =10
      <Children>
        <FS>Id =11
          <Parent>Id = 10 null<Parent>
        <FS>
      <Children>
      <Parent>Id = 9 null<Parent>
    <LIM>
  <Children>
<OP>
2013-01-07 16:20:11,087 INFO org.apache.hadoop.hive.ql.exec.ExtractOperator: Initializing Self 9 OP
2013-01-07 16:20:11,092 INFO org.apache.hadoop.hive.ql.exec.ExtractOperator: Operator 9 OP initialized
2013-01-07 16:20:11,092 INFO org.apache.hadoop.hive.ql.exec.ExtractOperator: Initializing children of 9 OP
2013-01-07 16:20:11,092 INFO org.apache.hadoop.hive.ql.exec.LimitOperator: Initializing child 10 LIM
2013-01-07 16:20:11,092 INFO org.apache.hadoop.hive.ql.exec.LimitOperator: Initializing Self 10 LIM
2013-01-07 16:20:11,092 INFO org.apache.hadoop.hive.ql.exec.LimitOperator: Operator 10 LIM initialized
2013-01-07 16:20:11,092 INFO org.apache.hadoop.hive.ql.exec.LimitOperator: Initializing children of 10 LIM
2013-01-07 16:20:11,092 INFO org.apache.hadoop.hive.ql.exec.FileSinkOperator: Initializing child 11 FS
2013-01-07 16:20:11,092 INFO org.apache.hadoop.hive.ql.exec.FileSinkOperator: Initializing Self 11 FS
2013-01-07 16:20:11,103 INFO org.apache.hadoop.hive.ql.exec.FileSinkOperator: Operator 11 FS initialized
2013-01-07 16:20:11,103 INFO org.apache.hadoop.hive.ql.exec.FileSinkOperator: Initialization Done 11 FS
2013-01-07 16:20:11,103 INFO org.apache.hadoop.hive.ql.exec.LimitOperator: Initialization Done 10 LIM
2013-01-07 16:20:11,103 INFO org.apache.hadoop.hive.ql.exec.ExtractOperator: Initialization Done 9 OP
2013-01-07 16:20:11,116 INFO ExecReducer: ExecReducer: processing 1 rows: used memory = 14751864
2013-01-07 16:20:11,116 INFO org.apache.hadoop.hive.ql.exec.ExtractOperator: 9 forwarding 1 rows
2013-01-07 16:20:11,116 INFO org.apache.hadoop.hive.ql.exec.LimitOperator: 10 forwarding 1 rows
2013-01-07 16:20:11,117 INFO org.apache.hadoop.hive.ql.exec.FileSinkOperator: Final Path: FS hdfs://NameNode:54310/xwtech/HadoopRun/tmp/hive-root/hive_2013-01-07_16-18-58_305_7778475650145532510/_tmp.-mr-10002/000000_2
2013-01-07 16:20:11,117 INFO org.apache.hadoop.hive.ql.exec.FileSinkOperator: Writing to temp file: FS hdfs://NameNode:54310/xwtech/HadoopRun/tmp/hive-root/hive_2013-01-07_16-18-58_305_7778475650145532510/_task_tmp.-mr-10002/_tmp.000000_2
2013-01-07 16:20:11,117 INFO org.apache.hadoop.hive.ql.exec.FileSinkOperator: New Final Path: FS hdfs://NameNode:54310/xwtech/HadoopRun/tmp/hive-root/hive_2013-01-07_16-18-58_305_7778475650145532510/_tmp.-mr-10002/000000_2
2013-01-07 16:20:11,162 FATAL ExecReducer: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) [Error getting row data with exception java.lang.ArrayIndexOutOfBoundsException: 130
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.readVInt(LazyBinaryUtils.java:287)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.checkObjectByteInfo(LazyBinaryUtils.java:188)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.parse(LazyBinaryStruct.java:138)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.getField(LazyBinaryStruct.java:195)
at org.apache.hadoop.hive.serde2.lazybinary.objectinspector.LazyBinaryStructObjectInspector.getStructFieldData(LazyBinaryStructObjectInspector.java:61)
at org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:349)
at org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:349)
at org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:219)
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:251)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:519)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
]
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:256)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:519)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.ArrayIndexOutOfBoundsException: 130
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.readVInt(LazyBinaryUtils.java:287)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.checkObjectByteInfo(LazyBinaryUtils.java:188)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.parse(LazyBinaryStruct.java:138)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.getField(LazyBinaryStruct.java:195)
at org.apache.hadoop.hive.serde2.lazybinary.objectinspector.LazyBinaryStructObjectInspector.getStructFieldData(LazyBinaryStructObjectInspector.java:61)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serializeStruct(LazyBinarySerDe.java:246)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:202)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:568)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at org.apache.hadoop.hive.ql.exec.LimitOperator.processOp(LimitOperator.java:51)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at org.apache.hadoop.hive.ql.exec.ExtractOperator.processOp(ExtractOperator.java:45)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:247)
... 7 more

2013-01-07 16:20:11,165 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2013-01-07 16:20:11,220 INFO org.apache.hadoop.io.nativeio.NativeIO: Initialized cache for UID to User mapping with a cache timeout of 14400 seconds.
2013-01-07 16:20:11,220 INFO org.apache.hadoop.io.nativeio.NativeIO: Got UserName root for UID 0 from the native implementation
2013-01-07 16:20:11,223 WARN org.apache.hadoop.mapred.Child: Error running child
java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) [Error getting row data with exception java.lang.ArrayIndexOutOfBoundsException: 130
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.readVInt(LazyBinaryUtils.java:287)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.checkObjectByteInfo(LazyBinaryUtils.java:188)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.parse(LazyBinaryStruct.java:138)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.getField(LazyBinaryStruct.java:195)
at org.apache.hadoop.hive.serde2.lazybinary.objectinspector.LazyBinaryStructObjectInspector.getStructFieldData(LazyBinaryStructObjectInspector.java:61)
at org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:349)
at org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:349)
at org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:219)
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:251)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:519)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
]
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:268)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:519)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) [Error getting row data with exception java.lang.ArrayIndexOutOfBoundsException: 130
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.readVInt(LazyBinaryUtils.java:287)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.checkObjectByteInfo(LazyBinaryUtils.java:188)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.parse(LazyBinaryStruct.java:138)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.getField(LazyBinaryStruct.java:195)
at org.apache.hadoop.hive.serde2.lazybinary.objectinspector.LazyBinaryStructObjectInspector.getStructFieldData(LazyBinaryStructObjectInspector.java:61)
at org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:349)
at org.apache.hadoop.hive.serde2.SerDeUtils.buildJSONString(SerDeUtils.java:349)
at org.apache.hadoop.hive.serde2.SerDeUtils.getJSONString(SerDeUtils.java:219)
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:251)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:519)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:420)
at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1083)
at org.apache.hadoop.mapred.Child.main(Child.java:249)
]
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:256)
... 7 more
Caused by: java.lang.ArrayIndexOutOfBoundsException: 130
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.readVInt(LazyBinaryUtils.java:287)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryUtils.checkObjectByteInfo(LazyBinaryUtils.java:188)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.parse(LazyBinaryStruct.java:138)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinaryStruct.getField(LazyBinaryStruct.java:195)
at org.apache.hadoop.hive.serde2.lazybinary.objectinspector.LazyBinaryStructObjectInspector.getStructFieldData(LazyBinaryStructObjectInspector.java:61)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serializeStruct(LazyBinarySerDe.java:246)
at org.apache.hadoop.hive.serde2.lazybinary.LazyBinarySerDe.serialize(LazyBinarySerDe.java:202)
at org.apache.hadoop.hive.ql.exec.FileSinkOperator.processOp(FileSinkOperator.java:568)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at org.apache.hadoop.hive.ql.exec.LimitOperator.processOp(LimitOperator.java:51)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:762)
at org.apache.hadoop.hive.ql.exec.ExtractOperator.processOp(ExtractOperator.java:45)
at org.apache.hadoop.hive.ql.exec.Operator.process(Operator.java:471)
at org.apache.hadoop.hive.ql.exec.ExecReducer.reduce(ExecReducer.java:247)
... 7 more
2013-01-07 16:20:11,231 INFO org.apache.hadoop.mapred.Task: Runnning cleanup for the task


--------------------------------------------------------------------------------

My questions:

I have two questions. First, why does my reduce phase get partway through and then drop back to 0% and start over? (You can see it fall back several times in the progress log above.)

Second, I don't understand why this query fails at all. (If I change the SQL to: select * from (select tc_iam,tc_back,tc_rlg,tc_calltype,tc_opc,tc_dpc,tc_pcm,tc_cic,tc_called,tc_orgcalled,tc_calling,tc_callpro,tc_respond from cdr00001 a where 1 = 1 and tc_cityid=8 and ( tc_calling like '153%' ))t limit 10; that is, replacing the sort by with limit 10, it runs without error.)
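
For reference, a minimal diagnostic sketch (an assumption on my part: forcing a single reducer takes the two-reducer shuffle out of the picture; mapred.reduce.tasks is the setting the job output above itself suggests for fixing the reducer count):

-- Sketch only: run the same query with exactly one reducer.
-- With a single reducer, sort by produces one totally ordered output
-- and all map outputs are merged by a single reduce task.
set mapred.reduce.tasks=1;

select * from (
  select tc_iam, tc_back, tc_rlg, tc_calltype, tc_opc, tc_dpc, tc_pcm, tc_cic,
         tc_called, tc_orgcalled, tc_calling, tc_callpro, tc_respond
  from cdr00001 a
  where 1 = 1
    and tc_cityid = 8
    and (tc_calling like '153%')
) t
sort by t.tc_iam;

If this single-reducer run succeeds, that would point at the intermediate rows produced for the multi-reducer shuffle (where the LazyBinary deserializer throws in the stack trace above) rather than at the query itself; if it still fails, the underlying row data would be the first suspect.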

Comments (1)

屌丝范 2021-11-18 07:22:07

Can anyone help me out? Why does adding a sort by cause the two problems above?
