How to efficiently copy files from HDFS to S3 programmatically
My Hadoop job generates a large number of files on HDFS, and I want to write a separate thread that copies these files from HDFS to S3.
Could anyone point me to a Java API that handles this?
Thanks
Comments (1)
"Support for the S3 block filesystem was added to the ${HADOOP_HOME}/bin/hadoop distcp tool in Hadoop 0.11.0 (See HADOOP-862). The distcp tool sets up a MapReduce job to run the copy. Using distcp, a cluster of many members can copy lots of data quickly. The number of map tasks is calculated by counting the number of files in the source: i.e. each map task is responsible for the copying one file. Source and target may refer to disparate filesystem types. For example, source might refer to the local filesystem or hdfs with S3 as the target. "
Check out "Running Bulk Copies in and out of S3" here: http://wiki.apache.org/hadoop/AmazonS3
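If you would rather drive the copy from your own thread instead of launching a distcp job, here is a minimal sketch using Hadoop's FileSystem API. It assumes the native S3 filesystem (s3n) is available and configured; the namenode address, bucket name, paths, and credential values are placeholders you would replace with your own.

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class HdfsToS3Copy {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Credentials for the s3n filesystem (placeholder values).
        conf.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY");
        conf.set("fs.s3n.awsSecretAccessKey", "YOUR_SECRET_KEY");

        // Hypothetical namenode address and bucket name.
        FileSystem hdfs = FileSystem.get(URI.create("hdfs://namenode:9000/"), conf);
        FileSystem s3 = FileSystem.get(URI.create("s3n://my-bucket/"), conf);

        Path srcDir = new Path("/user/hadoop/output");      // job output directory on HDFS
        Path dstDir = new Path("s3n://my-bucket/output");   // target prefix in S3

        // Copy each file in the source directory to S3, keeping the HDFS copy.
        for (FileStatus status : hdfs.listStatus(srcDir)) {
            if (!status.isDir()) {
                Path target = new Path(dstDir, status.getPath().getName());
                FileUtil.copy(hdfs, status.getPath(), s3, target, false, conf);
            }
        }
    }
}

This copies one file at a time from a single JVM, which is fine for modest output sizes; for large volumes, distcp as quoted above is still preferable because it parallelizes the copy across the cluster, one map task per file.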