PIG JOIN vs COGROUP
Are there any advantages (performance / number of MapReduce jobs) to using COGROUP instead of JOIN in Pig?

http://developer.yahoo.com/hadoop/tutorial/module6.html talks about the difference in the type of output they produce. But, ignoring the "output schema", is there any significant difference in performance?
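For context, the shape difference the tutorial describes can be modelled in a few lines of plain Python (this is an illustrative sketch, not Pig itself): COGROUP emits one tuple per key with the matching records kept in separate bags, while JOIN flattens the bags into a cartesian product.

```python
from collections import defaultdict
from itertools import product

a = [("x", 1), ("x", 2), ("y", 3)]   # relation A: (key, value)
b = [("x", 10), ("x", 20)]           # relation B: (key, value)

def cogroup(left, right):
    """COGROUP-style output: one entry per key, with matching
    records from each relation kept in two separate bags."""
    out = defaultdict(lambda: ([], []))
    for k, v in left:
        out[k][0].append(v)
    for k, v in right:
        out[k][1].append(v)
    return dict(out)

def join(left, right):
    """JOIN-style output: cartesian product of the two bags per key;
    keys missing from either side produce no rows (inner join)."""
    grouped = cogroup(left, right)
    return [(k, lv, rv)
            for k, (lbag, rbag) in grouped.items()
            for lv, rv in product(lbag, rbag)]

print(cogroup(a, b))  # {'x': ([1, 2], [10, 20]), 'y': ([3], [])}
print(join(a, b))     # 4 rows for key 'x', none for 'y'
```

Note that both functions see exactly the same grouped input, which is the point of the accepted answer below: the shuffle work is identical, only the final assembly differs.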
1 Answer
There are no major performance differences. The reason I say this is that both end up being a single MapReduce job that sends the same data forward to the reducers: both need to send all of the records forward, keyed by the foreign key. If anything, COGROUP might be a bit faster, because it does not compute the cartesian product across the matching records and instead keeps them in separate bags.

If one of your data sets is small, you can use a join option called "replicated join". This distributes the small data set to all map tasks and loads it into main memory. That way, the entire join can be done in the mapper, with no reducer needed. In my experience this is very much worth it, because the bottleneck in joins and cogroups is shuffling the entire data set to the reducers. To my knowledge, you can't do this with COGROUP.
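The replicated-join idea described above can be sketched in Python (illustrative only; the function name and data are made up, and real Pig does this inside map tasks): build a hash table from the small relation once, then stream the big relation through it, so no shuffle or reduce phase is needed.

```python
from collections import defaultdict

def replicated_join(big, small):
    """Map-side (replicated) join sketch: the small relation is
    loaded fully into memory; the big one is only streamed."""
    lookup = defaultdict(list)
    for k, v in small:                 # replicated to every "mapper"
        lookup[k].append(v)
    for k, v in big:                   # streamed, never shuffled
        for sv in lookup.get(k, []):
            yield (k, v, sv)

big = [("x", 1), ("y", 2), ("x", 3)]
small = [("x", "a"), ("z", "b")]
print(list(replicated_join(big, small)))
# [('x', 1, 'a'), ('x', 3, 'a')]
```

In Pig itself, the equivalent is requested with the `USING 'replicated'` clause on JOIN, with the small relation listed last; the trade-off is that the small side must actually fit in each mapper's memory.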