Writing to different files with Hadoop Streaming

Posted on 2024-12-06 19:38:03

I'm currently processing about 300 GB of log files on a 10-server Hadoop cluster. My data is saved in folders named YYYYMMDD so each day can be accessed quickly.

My problem is that I just found out today that the timestamps in my log files are in DST (GMT -0400) instead of UTC as expected. In short, this means that logs/20110926/*.log.lzo contains elements from 2011-09-26 04:00 to 2011-09-27 20:00, and it's pretty much ruining any map/reduce done on that data (e.g. generating statistics).
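
As a quick illustration of the offset (assuming GNU date is available), midnight in GMT -0400 on 2011-09-26 corresponds to 04:00 UTC:

# Sanity-check the offset by converting a local midnight to UTC:
date -u -d '2011-09-26 00:00 -0400' '+%Y-%m-%d %H:%M UTC'
# prints: 2011-09-26 04:00 UTC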

Is there a way to run a map/reduce job that re-splits every log file correctly? From what I can tell, there doesn't seem to be a way with streaming to send certain records to output file A and the rest to output file B.

Here is the command I currently use:

/opt/hadoop/bin/hadoop jar /opt/hadoop/contrib/streaming/hadoop-streaming-0.20.2-cdh3u1.jar \
-D mapred.reduce.tasks=15 -D mapred.output.compress=true \
-D mapred.output.compression.codec=com.hadoop.compression.lzo.LzopCodec \
-mapper map-ppi.php -reducer reduce-ppi.php \
-inputformat com.hadoop.mapred.DeprecatedLzoTextInputFormat \
-file map-ppi.php -file reduce-ppi.php \
-input "logs/20110922/*.lzo" -output "logs-processed/20110922/"

I don't know anything about Java and/or creating custom classes. I did try the code posted at http://blog.aggregateknowledge.com/2011/08/30/custom-inputoutput-formats-in-hadoop-streaming/ (pretty much copy/pasted what was there), but I couldn't get it to work at all. No matter what I tried, I would get a "-outputformat : class not found" error.

Thank you very much for your time and help :).


Comments (2)

情定在深秋 2024-12-13 19:38:03

From what I can tell, there doesn't seem to be a way with streaming to send certain records to output file A and the rest to output file B.

By using a custom Partitioner, you can specify which key goes to which reducer. By default the HashPartitioner is used. It looks like the only other Partitioner that Streaming supports is the KeyFieldBasedPartitioner.

You can find more details about the KeyFieldBasedPartitioner in the context of Streaming in the Hadoop Streaming documentation. You don't need to know Java to configure the KeyFieldBasedPartitioner with Streaming.

Is there a way to run a map/reduce job that re-splits every log file correctly?

You should be able to write an MR job to re-split the files, but I think a Partitioner should solve the problem.
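
For example, here is a rough sketch of that setup (the map-resplit.php/reduce-resplit.php scripts and the field layout are assumptions, not taken from your job): have the mapper convert each timestamp to UTC and emit "YYYYMMDD<TAB>timestamp<TAB>log line", then partition on the first field so every record for a given day goes to the same reducer:

# Sketch: the key is the first 2 tab-separated fields (day, timestamp),
# but records are partitioned only on field 1 (the corrected UTC day).
/opt/hadoop/bin/hadoop jar /opt/hadoop/contrib/streaming/hadoop-streaming-0.20.2-cdh3u1.jar \
-D mapred.reduce.tasks=15 \
-D stream.num.map.output.key.fields=2 \
-D mapred.text.key.partitioner.options=-k1,1 \
-partitioner org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner \
-inputformat com.hadoop.mapred.DeprecatedLzoTextInputFormat \
-mapper map-resplit.php -reducer reduce-resplit.php \
-file map-resplit.php -file reduce-resplit.php \
-input "logs/20110926/*.lzo" -output "logs-resplit/20110926/"

One caveat: with fewer reducers than distinct days, several days can still hash into the same part file. The days stay grouped within each file, but the reducer (or a custom MultipleOutputFormat, as the next answer suggests) still has to write them to separate outputs.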

绝不服输 2024-12-13 19:38:03

A custom MultipleOutputFormat and Partitioner seem like the correct way to split your data by day.

As the author of that post, I'm sorry you had such a rough time. If you were getting a "class not found" error, it sounds like there was some issue with your custom output format not being found after you included it with "-libjars".
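
For what it's worth, here is a sketch of the build-and-run sequence (the hadoop-core jar path and the CustomMultiOutputFormat class name are placeholders, not from the post). On 0.20.x the streaming client also resolves the -outputformat class locally, so the jar usually has to be on HADOOP_CLASSPATH in addition to -libjars; missing that is exactly the kind of setup problem that produces a "class not found" error:

# Compile the custom output format against the cluster's Hadoop jar
# (paths and the class name are placeholders):
javac -classpath /opt/hadoop/hadoop-core-0.20.2-cdh3u1.jar -d classes CustomMultiOutputFormat.java
jar cf custom-output.jar -C classes .

# Make the jar visible to the local client (HADOOP_CLASSPATH) and
# ship it to the tasks (-libjars):
export HADOOP_CLASSPATH=custom-output.jar
/opt/hadoop/bin/hadoop jar /opt/hadoop/contrib/streaming/hadoop-streaming-0.20.2-cdh3u1.jar \
-libjars custom-output.jar \
-D mapred.reduce.tasks=15 \
-mapper map-ppi.php -reducer reduce-ppi.php \
-file map-ppi.php -file reduce-ppi.php \
-outputformat CustomMultiOutputFormat \
-input "logs/20110922/*.lzo" -output "logs-processed/20110922/"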
