Hadoop 2.10: running the bundled example job does not produce the output directory

Posted 2022-09-12 22:07:36 · 5,419 characters · 30 views · 0 comments

The command I ran: ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep input output 'dfs[a-z.]+'
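
For reference, this follows the pseudo-distributed walkthrough in the Hadoop single-node setup guide. A minimal sketch of the usual sequence (user name and paths assumed from my setup, version suffix of the jar may differ) is:

# create the HDFS home and input directories, copy the config files in as sample data
./bin/hdfs dfs -mkdir -p /user/hadoop
./bin/hdfs dfs -mkdir input
./bin/hdfs dfs -put etc/hadoop/*.xml input
# run the grep example, then read the result back from the output directory
./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep input output 'dfs[a-z.]+'
./bin/hdfs dfs -cat output/*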

I am using JDK 8 and Hadoop 2.10. When I run the example against the test data, the output directory is never created; instead, the following directory shows up:

hadoop@code-shop:/usr/local/hadoop$ ./bin/hdfs dfs -ls
Found 2 items
drwxr-xr-x   - hadoop supergroup          0 2021-01-19 23:39 grep-temp-1442452675
drwxr-xr-x   - hadoop supergroup          0 2021-01-19 22:51 input
hadoop@code-shop:/usr/local/hadoop$ 
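
As far as I can tell from the example's source, grep-temp-* is the intermediate directory written by the first of the Grep example's two jobs; the output directory is only created by the second (sort) job, which never got to run here. Before retrying, the leftover intermediate directory from the failed attempt can be removed, e.g.:

# clean up the stale intermediate directory left by the failed run (name taken from the listing above)
./bin/hdfs dfs -rm -r grep-temp-1442452675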

The log from the run is below; no obvious error seems to appear, though the process ends up being killed at the very end.

hadoop@code-shop:/usr/local/hadoop$ ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep input output 'dfs[a-z.]+'
21/01/19 23:43:20 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
21/01/19 23:43:20 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
21/01/19 23:43:20 INFO input.FileInputFormat: Total input files to process : 8
21/01/19 23:43:21 INFO mapreduce.JobSubmitter: number of splits:8
21/01/19 23:43:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_local1446384809_0001
21/01/19 23:43:22 INFO mapreduce.Job: The url to track the job: http://localhost:8080/
21/01/19 23:43:22 INFO mapreduce.Job: Running job: job_local1446384809_0001
21/01/19 23:43:22 INFO mapred.LocalJobRunner: OutputCommitter set in config null
21/01/19 23:43:22 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
21/01/19 23:43:22 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
21/01/19 23:43:22 INFO mapred.LocalJobRunner: OutputCommitter is org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
21/01/19 23:43:22 INFO mapred.LocalJobRunner: Waiting for map tasks
21/01/19 23:43:22 INFO mapred.LocalJobRunner: Starting task: attempt_local1446384809_0001_m_000000_0
21/01/19 23:43:22 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
21/01/19 23:43:22 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
21/01/19 23:43:22 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
21/01/19 23:43:22 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/input/hadoop-policy.xml:0+10206
21/01/19 23:43:49 INFO mapreduce.Job: Job job_local1446384809_0001 running in uber mode : false
21/01/19 23:43:49 INFO mapred.MapTask: (EQUATOR) 0 kvi 26214396(104857584)
21/01/19 23:43:49 INFO mapred.MapTask: mapreduce.task.io.sort.mb: 100
21/01/19 23:43:49 INFO mapred.MapTask: soft limit at 83886080
21/01/19 23:43:49 INFO mapreduce.Job:  map 0% reduce 0%
21/01/19 23:43:49 INFO mapred.MapTask: bufstart = 0; bufvoid = 104857600
21/01/19 23:43:49 INFO mapred.MapTask: kvstart = 26214396; length = 6553600
21/01/19 23:43:49 INFO mapred.MapTask: Map output collector class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
21/01/19 23:43:59 INFO mapred.LocalJobRunner: 
21/01/19 23:43:59 INFO mapred.MapTask: Starting flush of map output
21/01/19 23:43:59 INFO mapred.MapTask: Spilling map output
21/01/19 23:43:59 INFO mapred.MapTask: bufstart = 0; bufend = 17; bufvoid = 104857600
21/01/19 23:43:59 INFO mapred.MapTask: kvstart = 26214396(104857584); kvend = 26214396(104857584); length = 1/6553600
21/01/19 23:43:59 INFO mapred.MapTask: Finished spill 0
21/01/19 23:43:59 INFO mapred.Task: Task:attempt_local1446384809_0001_m_000000_0 is done. And is in the process of committing
21/01/19 23:43:59 INFO mapred.LocalJobRunner: map
21/01/19 23:43:59 INFO mapred.Task: Task 'attempt_local1446384809_0001_m_000000_0' done.
21/01/19 23:43:59 INFO mapred.Task: Final Counters for attempt_local1446384809_0001_m_000000_0: Counters: 23
    File System Counters
        FILE: Number of bytes read=304459
        FILE: Number of bytes written=803155
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=10206
        HDFS: Number of bytes written=0
        HDFS: Number of read operations=5
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=1
    Map-Reduce Framework
        Map input records=237
        Map output records=1
        Map output bytes=17
        Map output materialized bytes=25
        Input split bytes=122
        Combine input records=1
        Combine output records=1
        Spilled Records=1
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=13
        Total committed heap usage (bytes)=234881024
    File Input Format Counters 
        Bytes Read=10206
21/01/19 23:43:59 INFO mapred.LocalJobRunner: Finishing task: attempt_local1446384809_0001_m_000000_0
21/01/19 23:43:59 INFO mapred.LocalJobRunner: Starting task: attempt_local1446384809_0001_m_000001_0
21/01/19 23:43:59 INFO output.FileOutputCommitter: File Output Committer Algorithm version is 1
21/01/19 23:43:59 INFO output.FileOutputCommitter: FileOutputCommitter skip cleanup _temporary folders under output directory:false, ignore cleanup failures: false
21/01/19 23:43:59 INFO mapred.Task:  Using ResourceCalculatorProcessTree : [ ]
21/01/19 23:43:59 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/hadoop/input/capacity-scheduler.xml:0+8814
Killed
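
A note on that last line: a bare "Killed" with no Java stack trace usually means the JVM received SIGKILL from outside the process, and on a small machine the usual suspect is the Linux OOM killer. A quick check (assuming a Linux host with sudo access) is:

# look for the OOM killer naming the java process, and see how much memory/swap is left
sudo dmesg | grep -iE 'out of memory|killed process'
free -h

If memory turns out to be the problem, giving the machine more RAM/swap, or lowering the daemons' heap via HADOOP_HEAPSIZE in etc/hadoop/hadoop-env.sh, seems worth a try.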
