With Hadoop, is my reducer guaranteed to receive all records with the same key?
I'm running a Hadoop job (using Hive, actually) that is supposed to uniq lines in many text files. In the reduce step, it chooses the most recently timestamped record for each key.
Does Hadoop guarantee that every record with the same key, output by the map step, will go to a single reducer, even if many reducers are running across a cluster?
I worry that the mapper output might be split, after the shuffle happens, in the middle of a set of records with the same key.
Answers (3)
All values for a key are sent to the same reducer. See this Yahoo! tutorial for more discussion.
This behavior is determined by the partitioner, and might not be true if you use a partitioner other than the default.
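The default partitioner's behavior can be modeled in a few lines. The sketch below is a simplified Python model of hash partitioning, not Hadoop's actual code; the real default is the Java HashPartitioner, which computes (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks:

```python
# Simplified model of Hadoop's default hash partitioning: every mapper
# applies the same deterministic function to a key, so identical keys
# always land in the same partition (and thus the same reducer), no
# matter which mapper or which machine emitted them.

def default_partition(key: str, num_reducers: int) -> int:
    """Deterministically map a key to a reducer index in [0, num_reducers)."""
    # Python's built-in hash() is salted per process, so use a stable
    # hash here to mirror Java's deterministic String.hashCode().
    h = 0
    for ch in key:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h % num_reducers

# Records emitted by two different (hypothetical) mappers for the same key...
mapper1_out = [("user-42", "2021-01-01"), ("user-7", "2021-02-02")]
mapper2_out = [("user-42", "2021-03-03")]

# ...are routed by the same function, so "user-42" always gets the
# same partition number in both lists.
for key, _ in mapper1_out + mapper2_out:
    print(key, "->", default_partition(key, num_reducers=10))
```

Because the function depends only on the key and the reducer count, the two "user-42" records above are guaranteed to print the same partition index.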
Actually, no! You could create a Partitioner that sent the same key to a different reducer each time getPartition is called. It's just not generally a good idea for most applications.
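To illustrate how a non-default partitioner can break the guarantee, here is a hypothetical round-robin scheme sketched in Python (a real Hadoop partitioner would be a Java Partitioner subclass; this class and its names are purely illustrative):

```python
# Hypothetical round-robin "partitioner" that ignores the key entirely.
# It scatters records with the same key across reducers, which breaks
# per-key aggregation -- exactly why this is a bad idea for most jobs.

class RoundRobinPartitioner:
    def __init__(self, num_reducers: int):
        self.num_reducers = num_reducers
        self.next = 0

    def get_partition(self, key: str) -> int:
        # The same key can get a different partition on every call.
        p = self.next
        self.next = (self.next + 1) % self.num_reducers
        return p

rr = RoundRobinPartitioner(num_reducers=3)
print([rr.get_partition("user-42") for _ in range(4)])  # -> [0, 1, 2, 0]
```

A reducer behind this scheme would see only a fraction of the records for "user-42", so any per-key logic (like picking the latest timestamp) would silently produce duplicates.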
Yes, Hadoop does guarantee that all keys that are the same will go to the same Reducer. This is achieved using a Partition function which buckets the keys using a hash function.
For more information on the Partitioning process take a look at this: Partitioning Data
It specifically talks about how different mappers processing the same key ensure that all records with a given key end up in the same partition, and thus are processed by the same reducer.
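Given that guarantee, the dedup step described in the question can rely on seeing every value for a key in a single reduce call. A minimal Python sketch of such a reducer follows (the function name and record shape are illustrative assumptions, not Hive internals):

```python
# Sketch of the reduce-side logic from the question: keep only the most
# recently timestamped record for each key. Because the shuffle delivers
# *all* values for a key to one reducer, a single max() over them is enough.

def reduce_latest(key, records):
    """records: iterable of (timestamp, payload) pairs for one key."""
    # Tuples compare element-wise, so max() picks the latest ISO timestamp.
    ts, payload = max(records)
    return key, ts, payload

values = [("2021-01-01", "a"), ("2021-03-03", "c"), ("2021-02-02", "b")]
print(reduce_latest("user-42", values))  # -> ('user-42', '2021-03-03', 'c')
```

If the same key could reach two reducers, each would emit its own "latest" record and the output would no longer be unique per key; the partitioning guarantee is what makes this single-pass max correct.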