- Pig Tutorial
- Pig Architecture
- Pig Installation
- Pig Execution
- Pig Grunt Shell
- Pig Latin Basics
- Pig Reading Data
- Pig Storing Data
- Pig Dump Operator
- Pig Describe Operator
- Pig Explain Operator
- Pig Illustrate Operator
- Pig GROUP Operator
- Pig Cogroup Operator
- Pig JOIN Operator
- Pig Cross Operator
- Pig Union Operator
- Pig SPLIT Operator
- Pig FILTER Operator
- Pig DISTINCT Operator
- Pig FOREACH Operator
- Pig ORDER BY Operator
- Pig LIMIT Operator
- Pig Eval Functions
- Pig Load & Store Functions
- Pig Bag & Tuple Functions
- Pig String Functions
- Pig Date and Time Functions
- Pig Math Functions
Pig Dump Operator
The Dump operator is one of Pig Latin's diagnostic operators.
The LOAD statement simply loads data into the specified relation in Apache Pig. To verify that a LOAD statement has executed correctly, you have to use the diagnostic operators. Pig Latin provides four different types of diagnostic operators:
- Dump operator
- Describe operator
- Explain operator
- Illustrate operator
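As a quick orientation, here is a minimal sketch of how each of these operators is invoked from the Grunt shell, assuming a relation named student has already been loaded; each operator is covered in detail in its own chapter.
grunt> Dump student;       -- run the statements that build student and print its tuples
grunt> Describe student;   -- print the schema of student
grunt> Explain student;    -- print the logical, physical, and MapReduce execution plans
grunt> Illustrate student; -- run a trial on a small sample of the data and show each step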
In this chapter, we will discuss the Dump operator of Pig Latin.
Dump Operator
The Dump operator is used to run Pig Latin statements and display the results on the screen. It is generally used for debugging purposes.
Syntax
Given below is the syntax of the Dump operator.
grunt> Dump Relation_Name
Assume we have a file student_data.txt in HDFS with the following content.
001,Rajiv,Reddy,9848022337,Hyderabad
002,siddarth,Battacharya,9848022338,Kolkata
003,Rajesh,Khanna,9848022339,Delhi
004,Preethi,Agarwal,9848022330,Pune
005,Trupthi,Mohanthy,9848022336,Bhuwaneshwar
006,Archana,Mishra,9848022335,Chennai
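If the file is not yet in HDFS, it can be copied there directly from the Grunt shell using Hadoop fs commands. A minimal sketch, assuming student_data.txt sits in the current local working directory:
grunt> fs -mkdir hdfs://localhost:9000/pig_data/
grunt> fs -put student_data.txt hdfs://localhost:9000/pig_data/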
We have read it into the relation student using the LOAD operator, as shown below.
grunt> student = LOAD 'hdfs://localhost:9000/pig_data/student_data.txt'
USING PigStorage(',')
AS (id:int, firstname:chararray, lastname:chararray, phone:chararray, city:chararray);
Now, let us print the contents of the relation using the Dump operator, as shown below.
grunt> Dump student
Once you execute the above Pig Latin statement, it starts a MapReduce job to read the data from HDFS and produces the following output.
2020-10-01 15:05:27,642 [main]
INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher -
100% complete
2020-10-01 15:05:27,652 [main]
INFO org.apache.pig.tools.pigstats.mapreduce.SimplePigStats - Script Statistics:
HadoopVersion PigVersion UserId StartedAt FinishedAt Features
2.6.0 0.15.0 Hadoop 2020-10-01 15:03:11 2020-10-01 05:27 UNKNOWN
Success!
Job Stats (time in seconds):
JobId           Maps  Reduces  MaxMapTime  MinMapTime  AvgMapTime  MedianMapTime  MaxReduceTime  MinReduceTime  AvgReduceTime  MedianReducetime  Alias    Feature   Outputs
job_14459_0004  1     0        n/a         n/a         n/a         n/a            0              0              0              0                 student  MAP_ONLY  hdfs://localhost:9000/tmp/temp580182027/tmp757878456,

Input(s): Successfully read 0 records from: "hdfs://localhost:9000/pig_data/student_data.txt"

Output(s): Successfully stored 0 records in: "hdfs://localhost:9000/tmp/temp580182027/tmp757878456"

Counters: Total records written : 0 Total bytes written : 0 Spillable Memory Manager spill count : 0 Total bags proactively spilled: 0 Total records proactively spilled: 0

Job DAG: job_1443519499159_0004
2020-10-01 15:06:28,403 [main]
INFO org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher - Success!
2020-10-01 15:06:28,441 [main] INFO org.apache.pig.data.SchemaTupleBackend -
Key [pig.schematuple] was not set... will not generate code.
2020-10-01 15:06:28,485 [main]
INFO org.apache.hadoop.mapreduce.lib.input.FileInputFormat - Total input paths
to process : 1
2020-10-01 15:06:28,485 [main]
INFO org.apache.pig.backend.hadoop.executionengine.util.MapRedUtil - Total input paths
to process : 1
(1,Rajiv,Reddy,9848022337,Hyderabad)
(2,siddarth,Battacharya,9848022338,Kolkata)
(3,Rajesh,Khanna,9848022339,Delhi)
(4,Preethi,Agarwal,9848022330,Pune)
(5,Trupthi,Mohanthy,9848022336,Bhuwaneshwar)
(6,Archana,Mishra,9848022335,Chennai)
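Dump is not limited to relations produced directly by LOAD; it can be applied to any intermediate relation, which makes it handy for checking each step of a longer script. A minimal sketch (the relation name students_in_chennai is made up for illustration), which should print only the matching tuple:
grunt> students_in_chennai = FILTER student BY city == 'Chennai';
grunt> Dump students_in_chennai;
(6,Archana,Mishra,9848022335,Chennai)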