How can I use Mahout in a Windows environment?
I am trying to use Mahout in an application running on Windows. I want to build clusters from a Lucene index using k-means.
As soon as I have to create sequence files (creating vectors from a Lucene index), I get a Hadoop exception, since Hadoop makes command-line calls to programs unknown in a Windows environment (e.g. chmod). Running in Cygwin is not an option, since I want to be able to run the app from Eclipse.
So my question is
3 Answers
The only way you can run Hadoop in a Windows environment is to install Cygwin. For more info, see this blog post:
http://hayesdavis.net/2008/06/14/running-hadoop-on-windows/
Cygwin will provide all the command-line utilities (like chmod) that Hadoop relies on. You can still run your Hadoop jobs from within Eclipse if you want.
Do you know the SequenceFile API? Have a look here: http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/io/SequenceFile.html You can try to write/read the data yourself.
I think you can run Mahout from Eclipse on Windows in stand-alone mode, but you will run into several shortcomings and barriers. You should try and see how far you get.
In my opinion, you shouldn't insist on running Mahout from Eclipse. ;-)
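A minimal sketch of writing and reading such a sequence file directly via the SequenceFile API, assuming Hadoop and the Mahout math module (for `VectorWritable`/`DenseVector`) are on the classpath; the path `vectors/part-00000` and the key `doc1` are just illustrative. Note that even the local file system may invoke chmod on Windows, so this sketch may still hit the same issue outside Cygwin:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.mahout.math.DenseVector;
import org.apache.mahout.math.VectorWritable;

public class SequenceFileSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.getLocal(conf);
        Path path = new Path("vectors/part-00000");

        // Write Text/VectorWritable pairs, the format Mahout's k-means expects as input.
        SequenceFile.Writer writer = SequenceFile.createWriter(
                fs, conf, path, Text.class, VectorWritable.class);
        try {
            writer.append(new Text("doc1"),
                    new VectorWritable(new DenseVector(new double[] {1.0, 0.0, 2.0})));
        } finally {
            writer.close();
        }

        // Read the pairs back to verify the file.
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
        try {
            Text key = new Text();
            VectorWritable value = new VectorWritable();
            while (reader.next(key, value)) {
                System.out.println(key + " => " + value.get());
            }
        } finally {
            reader.close();
        }
    }
}
```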
You can use a virtual machine to run your Hadoop environment.
For me, the best solution is to use the http://hortonworks.com/ project.
Everything works pretty well.