Boto: How can I keep an EMR job flow running after it completes or fails?

Posted 2024-12-13 09:03:53

How can I add steps to a waiting Amazon EMR job flow using boto without the job flow terminating once complete?

I've created an interactive job flow on Amazon's Elastic Map Reduce and loaded some tables. When I pass in new steps to the job flow using Boto's emr_conn.add_jobflow_steps(...), the job flow terminates after it finishes or fails.

I know I can start a job flow with boto using run_jobflow with the keep_alive parameter -- but I'd like to work with flows that are already running.

3 Answers

花海 2024-12-20 09:03:53

If it finishes correctly, it should not terminate with keep_alive=True. That said, it would normally exit on failure, so you want to set action_on_failure='CONTINUE' on the steps you pass to add_jobflow_steps.
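
A minimal sketch of that, assuming boto2's JarStep; the region, jar path, step args, and job flow id below are placeholders:

import boto.emr
from boto.emr.step import JarStep

conn = boto.emr.connect_to_region('us-west-2')

# action_on_failure='CONTINUE' lets the job flow keep running even if this step fails
step = JarStep(
    name="my step",
    jar="s3://yourbucket/your-job.jar",  # placeholder jar path
    step_args=["arg1", "arg2"],          # placeholder arguments
    action_on_failure='CONTINUE')
conn.add_jobflow_steps("j-XXXXXXXXXXXX", [step])  # placeholder job flow id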

梦明 2024-12-20 09:03:53

I use something like this.

Create a cluster with:

import boto.emr

conn = boto.emr.connect_to_region('us-west-2')
jobid = conn.run_jobflow(
    name='cluster-name',
    ec2_keyname="yourkeyhere",
    num_instances=3,
    master_instance_type='m1.medium',
    slave_instance_type='m1.medium',
    keep_alive=True,  # keep the cluster in the WAITING state between steps
)

Add a step to an existing cluster with the following (wait a little for the cluster to reach the WAITING state):

import boto.emr

conn = boto.emr.connect_to_region('us-west-2')

# Get the list of waiting clusters and take the first one
jobid = conn.describe_jobflows(states=["WAITING"])[0].jobflowid
print(jobid)

# ScriptRunnerStep runs a script stored in S3 on the master node
step = boto.emr.step.ScriptRunnerStep(
    name="job step name",
    step_args=["s3://yours3bucket/dosmthg.sh"])
conn.add_jobflow_steps(jobid, [step])

notes

  • you need ~/.aws/credentials filled in (run aws configure)
  • the us-west-2 region currently has the most recent AMI versions
  • you may have to add bootstrap_actions= if you need Hive, Pig, or custom installation steps (see the sketch after this list)
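A hedged sketch of that last note, using boto2's BootstrapAction and InstallHiveStep; the install script path is a placeholder:

import boto.emr
from boto.emr.bootstrap_action import BootstrapAction
from boto.emr.step import InstallHiveStep

conn = boto.emr.connect_to_region('us-west-2')

# Hypothetical bootstrap action that runs a custom install script from S3
install = BootstrapAction("custom install", "s3://yourbucket/install.sh", [])

jobid = conn.run_jobflow(
    name='cluster-name',
    num_instances=3,
    master_instance_type='m1.medium',
    slave_instance_type='m1.medium',
    keep_alive=True,
    bootstrap_actions=[install],
    steps=[InstallHiveStep()])  # installs Hive as the first step
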
北座城市 2024-12-20 09:03:53

You can also do this with the 'KeepJobFlowAliveWhenNoSteps' flag (shown here with boto3's run_job_flow):

import boto3

emr = boto3.client('emr')  # add region_name=... if it is not set in your AWS config

response = emr.run_job_flow(
    Name="start-my-cluster",
    ReleaseLabel="emr-5.3.1",
    LogUri='s3://logs',
    Instances={
        'InstanceGroups': [
            {'Name': 'EmrMaster',
             'InstanceRole': 'MASTER',
             'InstanceType': 'm3.xlarge',
             'InstanceCount': 1},
            {'Name': 'EmrCore',
             'InstanceRole': 'CORE',
             'InstanceType': 'm3.xlarge',
             'InstanceCount': 2},
        ],
        'Ec2KeyName': 'my-key-name',
        # Keep the cluster alive (WAITING) after all steps have finished
        'KeepJobFlowAliveWhenNoSteps': True,
    },
    Applications=[{'Name': 'Hadoop'}, {'Name': 'Spark'}, {'Name': 'Hive'}],
    JobFlowRole='EMR_EC2_DefaultRole',
    ServiceRole='EMR_DefaultRole',
    VisibleToAllUsers=True,
    Steps=[
        # steps go here...
    ],
)
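
To later add steps to that cluster without it terminating on completion or failure, a boto3 sketch along the same lines; the script path is a placeholder:

import boto3

emr = boto3.client('emr')

# ActionOnFailure='CONTINUE' keeps the cluster alive even if the step fails
emr.add_job_flow_steps(
    JobFlowId=response['JobFlowId'],  # id returned by run_job_flow above
    Steps=[{
        'Name': 'my step',
        'ActionOnFailure': 'CONTINUE',
        'HadoopJarStep': {
            'Jar': 'command-runner.jar',
            'Args': ['spark-submit', 's3://yourbucket/job.py'],  # placeholder
        },
    }])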