What's the best Python implementation of the MapReduce pattern?

Posted on 2024-12-02 19:11:33

What's the best Python implementation of MapReduce, whether a framework or a library? Ideally it would be as good as Apache Hadoop's, but written in Python, well documented, easy to understand, a full implementation of the MapReduce pattern, highly scalable, stable, and lightweight.

I googled and found one called mincemeat, but I'm not sure about it. Are there any other well-known options?

Thanks


3 Answers

对岸观火 2024-12-09 19:11:33

There are some pieces here and there if you search for them, for example Octopy and Disco, as well as Hadoopy.

However, I don't believe any of them can compete with Hadoop in terms of maturity, stability, scalability, performance, etc. For small cases they should suffice, but for something more "glorious" you have to stick with Hadoop.

Remember that you can still write map/reduce programs for Hadoop in Python or Jython.

EDIT: I've recently come across mrjob. It looks great, as it simplifies writing map/reduce programs and then launching them on Hadoop or on Amazon's Elastic MapReduce platform. The article that brought the good news is here.
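To sketch that last point: Hadoop Streaming runs any executable that reads lines on stdin and writes tab-separated key/value lines to stdout, so a Python word-count job can be a single small script. The script below is a hand-written illustration of that contract, not code taken from Hadoop or mrjob; the `hadoop jar` invocation in the docstring is indicative only, since the streaming jar's path and flags vary by installation.

```python
#!/usr/bin/env python
"""Word count via Hadoop Streaming (illustrative invocation; jar path varies):

  hadoop jar hadoop-streaming.jar \
      -input wc_input -output wc_output \
      -mapper "wordcount.py map" -reducer "wordcount.py reduce"
"""
import sys
from itertools import groupby

def map_stream(lines):
    # Emit one (word, 1) pair per word seen in the input lines.
    for line in lines:
        for word in line.split():
            yield word, 1

def reduce_stream(pairs):
    # Hadoop sorts mapper output by key before the reduce phase,
    # so all pairs for the same word arrive adjacent to each other.
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "reduce":
        parsed = (line.rstrip("\n").split("\t", 1) for line in sys.stdin)
        results = reduce_stream((word, int(count)) for word, count in parsed)
    else:
        results = map_stream(sys.stdin)
    for word, count in results:
        sys.stdout.write("%s\t%d\n" % (word, count))
```

Because the map and reduce steps are plain generators over iterables, they can also be unit-tested without a cluster.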

海的爱人是光 2024-12-09 19:11:33

Update in 2019:
I would highly recommend Apache Beam.

===

Another good option is Dumbo.

Below is the code to run a map/reduce for word counting:

def mapper(key, value):
    for word in value.split():
        yield word, 1

def reducer(key, values):
    yield key, sum(values)

if __name__ == "__main__":
    import dumbo
    dumbo.run(mapper, reducer)

To run it, just feed it your text file wc_input.txt for counting; the output is saved as wc_output:

 python -m dumbo wordcount.py -hadoop /path/to/hadoop -input wc_input.txt -output wc_output
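Since the mapper and reducer above are plain Python generators, you can sanity-check the same logic locally before involving Hadoop or Dumbo at all. The `local_run` driver below is a hand-rolled, in-memory imitation of the map, shuffle, and reduce phases, written for this answer; it is not part of Dumbo's API.

```python
from itertools import groupby
from operator import itemgetter

def mapper(key, value):
    for word in value.split():
        yield word, 1

def reducer(key, values):
    yield key, sum(values)

def local_run(mapper, reducer, inputs):
    """Run map, shuffle/sort, and reduce entirely in memory (testing only)."""
    # Map phase: flatten every (key, value) input into intermediate pairs.
    mapped = [pair for key, value in inputs for pair in mapper(key, value)]
    # "Shuffle": sorting by key brings equal keys together, as Hadoop would.
    mapped.sort(key=itemgetter(0))
    reduced = []
    for key, group in groupby(mapped, key=itemgetter(0)):
        reduced.extend(reducer(key, (v for _, v in group)))
    return dict(reduced)

print(local_run(mapper, reducer, [(0, "the quick the")]))
# {'quick': 1, 'the': 2}
```

The inputs are (key, value) pairs to match Dumbo's mapper signature, with the key playing the role of a line number.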
短叹 2024-12-09 19:11:33

You should also look at Mrs: http://code.google.com/p/mrs-mapreduce/

It is particularly well-suited for computationally intensive iterative programs.
