How can a Hadoop job kill itself?
Is there any way for a Hadoop job to kill itself, or to send a signal to kill it? I've read the configuration settings from the JobConf, and if the user specifies the wrong settings I need to kill the job or raise an error, but the map/reduce configure() method does not allow throwing an exception.
public void configure(JobConf job) {
    System.out.println("Inside config start processing");
    try {
        String strFileName = job.get("hadoop.rules");
        LoadFile(strFileName);
    } catch (Exception e) {
        e.printStackTrace();
        // Here I need to write code to kill the job
    }
}
Answers (2)
In the configure() method, just throw a RuntimeException.
Better yet, if possible, perform your validation step before the job is run.
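A minimal sketch of that approach, reusing the configure() method and the LoadFile helper from the question (the error message is illustrative):

public void configure(JobConf job) {
    String strFileName = job.get("hadoop.rules");
    try {
        LoadFile(strFileName);
    } catch (Exception e) {
        // RuntimeException is unchecked, so it can be thrown from configure();
        // the failed task is retried and, once the retry limit is reached, the job fails.
        throw new RuntimeException("Could not load rules file: " + strFileName, e);
    }
}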
Just save the state into a boolean variable called kill, evaluate the variable inside the map step, and then throw an IOException.
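A minimal sketch of that flag-based approach using the old mapred API; the class name RulesMapper and the input/output types are placeholders, and LoadFile stands in for the question's rule-loading helper:

import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class RulesMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, Text> {

    private boolean kill = false;   // set when the rules file cannot be loaded

    @Override
    public void configure(JobConf job) {
        try {
            LoadFile(job.get("hadoop.rules"));
        } catch (Exception e) {
            // remember the failure; configure() cannot throw a checked exception
            kill = true;
        }
    }

    @Override
    public void map(LongWritable key, Text value,
                    OutputCollector<Text, Text> output, Reporter reporter)
            throws IOException {
        if (kill) {
            // Failing the map task makes the framework retry it and, eventually, fail the job
            throw new IOException("hadoop.rules could not be loaded");
        }
        // ... normal map logic ...
    }

    // Placeholder for the question's rule-loading helper
    private void LoadFile(String fileName) throws Exception {
        // load and validate the rules file
    }
}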