Processing a large CSV file with Hadoop in Python
I have a huge CSV file I would like to process using Hadoop MapReduce on Amazon EMR (Python).
The file has 7 fields; however, I am only looking at the date and quantity fields.
"date" "receiptId" "productId" "quantity" "price" "posId" "cashierId"
First, here is my mapper.py:
import sys

def main(argv):
    line = sys.stdin.readline()
    try:
        while line:
            list = line.split('\t')
            # If date meets criteria, add quantity to express key
            if int(list[0][11:13]) >= 17 and int(list[0][11:13]) <= 19:
                print '%s\t%s' % ("Express", int(list[3]))
            # Else, add quantity to non-express key
            else:
                print '%s\t%s' % ("Non-express", int(list[3]))
            line = sys.stdin.readline()
    except "end of file":
        return None

if __name__ == "__main__":
    main(sys.argv)
For the reducer, I will be using the streaming command: aggregate.
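For reference, the streaming step I am launching looks roughly like this (the streaming jar path and bucket names below are placeholders for my actual setup):

hadoop jar /home/hadoop/contrib/streaming/hadoop-streaming.jar \
    -input s3://mybucket/input/ \
    -output s3://mybucket/output/ \
    -mapper mapper.py \
    -reducer aggregate \
    -file mapper.py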
Questions:
Is my code right? I ran it on Amazon EMR, but I got empty output.
My end result should be: express, XXX and non-express, YYY. Can I have it do a division before returning the result, i.e. output just XXX/YYY? Where should I put this code? In a reducer?
Also, this is a huge CSV file, so will mapping break it up into a few partitions? Or do I need to explicitly call a FileSplit? If so, how do I do that?
Answering my own question here!
The code is wrong. If you're using the aggregate library to reduce, your output does not follow the usual key/value pair format. It requires a "prefix" that names the aggregation to perform, e.g. LongValueSum to sum long values per key.
The other "prefixes" available are: DoubleValueSum, LongValueMax, LongValueMin, StringValueMax, StringValueMin, UniqValueCount, ValueHistogram. For more info, look here http://hadoop.apache.org/common/docs/r0.15.2/api/org/apache/hadoop/mapred/lib/aggregate/package-summary.html.
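So the fix is to prepend the aggregator name to the mapper's output key. A minimal sketch of the corrected mapper (same tab-separated field layout as above; the "LongValueSum:" prefix tells the aggregate reducer to sum the values per key):

import sys

def main():
    for line in sys.stdin:
        fields = line.split('\t')
        hour = int(fields[0][11:13])
        # The "LongValueSum:" prefix makes the aggregate reducer sum the values per key
        if 17 <= hour <= 19:
            print 'LongValueSum:Express\t%s' % int(fields[3])
        else:
            print 'LongValueSum:Non-express\t%s' % int(fields[3])

if __name__ == '__main__':
    main()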
Yes, if you want to do more than just the basic sum, min, max or count, you need to write your own reducer.
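For the XXX/YYY division, here is a rough, untested sketch of what such a custom streaming reducer could look like. It assumes the mapper emits plain Express/Non-express keys (no aggregate prefix) and that the job runs with a single reduce task (e.g. -D mapred.reduce.tasks=1), so both keys arrive at the same reducer:

import sys

totals = {'Express': 0, 'Non-express': 0}

for line in sys.stdin:
    key, value = line.rstrip('\n').split('\t')
    totals[key] = totals.get(key, 0) + int(value)

# Output the ratio of express quantity to non-express quantity
if totals['Non-express'] > 0:
    print 'Express/Non-express\t%s' % (float(totals['Express']) / totals['Non-express'])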
As for whether the huge CSV file will be split up automatically, I do not have the answer yet.