How can I use Maven to automatically download the dependencies my program needs?
I am new to using Maven.
I have Java files that have dependencies.
For example:
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
There is no Java compiler on the development server. I plan to compile the Java files on my desktop, pack the classes into a jar file, and then execute the program on the development server.
The development server has all the required files; the desktop does not yet. How can we use Maven to take care of the dependencies while compiling on the desktop?
EDIT:
I want to add data into HBase tables using Java. HBase works fine on the dev server; I am able to create tables there through the command line. But HBase/Hadoop is not present in the desktop environment.
So will downloading the jars help, or do I need to set up Hadoop and install HBase locally?
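For context, the kind of code I plan to run is roughly the following minimal sketch, using the HTable API from the imports above; the table, row, and column names are placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseLoader {
    public static void main(String[] args) throws Exception {
        // Reads hbase-site.xml (and thus the cluster location) from the classpath
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable");   // placeholder table name
        Put put = new Put(Bytes.toBytes("row1"));     // placeholder row key
        put.add(Bytes.toBytes("cf"), Bytes.toBytes("qual"), Bytes.toBytes("value"));
        table.put(put);
        table.close();
    }
}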
4 Answers
If I understand correctly, you need to configure some compile-time-only dependencies in your Maven pom. You can do it in the dependencies section. You need to determine the artifactId and version for each of the needed Hadoop packages, then add them to the dependencies; see the example below.
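A minimal sketch of such a dependencies section; the artifactIds are the usual ones, but the version numbers are assumptions that should be replaced with whatever matches your cluster:

<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>1.2.1</version>  <!-- placeholder version -->
  </dependency>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase</artifactId>
    <version>0.94.27</version>  <!-- placeholder version -->
  </dependency>
</dependencies>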
Yes, Maven has a near-vertical learning curve.
It looks like you are doing a very small in-house job on your own, or with a very small team.
In that case, it's probably enough to set up an IDE (Eclipse or NetBeans), resolve the dependencies manually (download the jars into a /lib folder in the project), and compile and export a binary (jar) manually, along the lines of the commands sketched below.
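For instance, assuming the downloaded jars sit in lib/ and the source in src/, a manual command-line build might look like this (the class and jar names are hypothetical):

javac -cp "lib/*" -d classes src/HBaseLoader.java
jar cf hbase-loader.jar -C classes .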
hadoop-core, hbase, and zookeeper are the required HBase dependencies. Additionally, you should try to use the Cloudera builds, as they fix some additional bugs that the Apache jars have. Look here.
Also, you do not have to install HBase locally. When you create the HBase configuration, just change the ZooKeeper quorum to point to the server on which ZooKeeper resides, as in the snippet below.
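A minimal sketch of pointing the client at the dev server's ZooKeeper; the hostname is a placeholder for your actual dev server:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// hbase.zookeeper.quorum is the standard property naming the ZooKeeper hosts
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.zookeeper.quorum", "dev-server.example.com");  // placeholder host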
From the Maven site: set the scope of the dependency to provided. A provided dependency is available on the compile classpath but is expected to be supplied by the runtime environment at execution time, which matches your situation (the dev server already has the jars).
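For example (the version is a placeholder):

<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>1.2.1</version>  <!-- placeholder version -->
  <scope>provided</scope>
</dependency>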