Getting an org.apache.hadoop.mapreduce.Job for a completed job from the JobTracker

Posted 2024-12-09 15:05:20


I'm using org.apache.hadoop.mapreduce.Job to create/submit/run an MR job (Cloudera3, 20.2), and after it completes, in a separate application, I'm trying to get the Job back so I can grab its counters and do some work with them. That way I don't have to re-run the entire MR job every time I test the code that consumes them.

I can get a RunningJob from a JobClient, but not an org.apache.hadoop.mapreduce.Job. RunningJob gives me Counters from the mapred package, while Job gives me Counters from the mapreduce package. I tried using new Job(conf, "job_id"), but that just creates a blank Job in status DEFINE, not FINISHED.
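For reference, the RunningJob route that does work on the old (mapred) API looks roughly like the sketch below. The job id string is hypothetical; substitute a real one from the JobTracker UI. This yields org.apache.hadoop.mapred.Counters, not the mapreduce-package Counters the question is after.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.Counters;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;

public class MapredCounterFetch {
    public static void main(String[] args) throws Exception {
        // Hypothetical job id -- replace with a real one from your cluster.
        JobID id = JobID.forName("job_201112081615_0001");

        // JobConf picks up mapred.job.tracker etc. from the classpath configs.
        JobClient jobClient = new JobClient(new JobConf(new Configuration()));

        // getJob works for completed (not yet retired) jobs as well as running ones.
        RunningJob running = jobClient.getJob(id);

        // Old-API counters, i.e. org.apache.hadoop.mapred.Counters.
        Counters counters = running.getCounters();
        System.out.println(counters);
    }
}
```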


1 Answer

杯别 2024-12-16 15:05:20


Here is how I do it:

package org.apache.hadoop.mapred;

import java.io.IOException;
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;

public class FinishedJobHelper {

    /**
     * Fetches the counters of a completed job directly from the JobTracker
     * over Hadoop RPC, bypassing the public JobClient API.
     */
    public static Counters getCounters(String jobTrackerHost, int jobTrackerPort, String jobIdentifier, int jobId) throws IOException {
        InetSocketAddress link = new InetSocketAddress(jobTrackerHost, jobTrackerPort);
        // JobSubmissionProtocol is the JobTracker's internal RPC interface; it is
        // package-private, which is why this class must live in org.apache.hadoop.mapred.
        JobSubmissionProtocol client = (JobSubmissionProtocol) RPC.getProxy(JobSubmissionProtocol.class, JobSubmissionProtocol.versionID, link, new Configuration());
        return client.getJobCounters(new JobID(jobIdentifier, jobId));
    }
}

The package must be org.apache.hadoop.mapred (don't change it), since JobSubmissionProtocol is a package-private interface. The problem with this method is that you can't retrieve jobs that have been "retired". So I prefer not relying on it, and instead push the counters somewhere as soon as the job completes:

...
job.waitForCompletion(true);
//get counters after job completes and push them elsewhere
Counters counters = job.getCounters();
...
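Once you hold the live Job's counters, the new-API Counters object can be walked group by group. A minimal sketch (the helper name is mine; the group and counter names printed are whatever your job defines):

```java
import org.apache.hadoop.mapreduce.Counter;
import org.apache.hadoop.mapreduce.CounterGroup;
import org.apache.hadoop.mapreduce.Counters;

public class CounterDump {
    /** Prints every counter in every group, e.g. to persist them before the job retires. */
    public static void dump(Counters counters) {
        for (CounterGroup group : counters) {      // Counters is Iterable<CounterGroup>
            for (Counter counter : group) {        // CounterGroup is Iterable<Counter>
                System.out.println(group.getName() + "." + counter.getName()
                        + " = " + counter.getValue());
            }
        }
    }
}
```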

Hope this helps.
