helm - Run pods and dependencies in a predefined flow order

Published 2025-01-10 05:31:17


I am using K8S with helm.

I need to run pods and dependencies with a predefined flow order.

How can I create a helm dependency that runs a pod only once (i.e., to populate the database for the first time) and exits after the first success?

Also, if I have several pods, I want to run a pod only when certain conditions occur and after another pod has been created.

I need to build 2 pods, as described below:

I have a database.

1st step is to create the database.

2nd step is to populate the db.

Once I populate the db, this job needs to finish.

3rd step is another pod (not the db pod) that uses that database and is always in listen mode (never stops).

Can I define the order in which the dependencies run (rather than always in parallel)?

What I see from the helm create command is that there are templates for deployment.yaml and service.yaml; maybe pod.yaml is a better choice?

What are the best chart types for this scenario?

Also, I need to know what the chart hierarchy is.

i.e.: when having a chart of type listener, one pod for database creation, and one pod for database population (deleted when finished), I may have a chart tree hierarchy that explains the flow.


The main chart uses the populated data (after all the sub-charts and templates have run properly - BTW, can I have several templates for the same chart?).

What is the correct tree flow?

Thanks.


掀纱窥君容 2025-01-17 05:31:17

There is a fixed order in which helm creates resources, and you cannot influence it apart from hooks.

Helm hooks can cause more problems than they solve, in my experience. This is because most often they actually rely on resources which are only available after the hooks are done. For example, configmaps, secrets and service accounts / rolebindings. Leading you to move more and more things into the hook lifecycle, which isn't idiomatic IMO. It also leaves them dangling when uninstalling a release.

I tend to use jobs and init containers that block until the jobs are done.

---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
    - name: mysql
      image: mysql
---
apiVersion: batch/v1
kind: Job
metadata:
  name: migration
spec:
  ttlSecondsAfterFinished: 100
  template:
    spec:
      initContainers:
        - name: wait-for-db
          image: bitnami/kubectl
          args:
            - wait
            - pod/mysql
            - --for=condition=ready
            - --timeout=120s
      containers:
        - name: migration
          image: myapp
          args: [--migrate]
      restartPolicy: Never
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  replicas: 3
  template:
    metadata:
      labels:
        app: myapp
    spec:
      initContainers:
        - name: wait-for-migration
          image: bitnami/kubectl
          args:
            - wait
            - job/migration
            - --for=condition=complete
            - --timeout=120s
      containers:
        - name: myapp
          image: myapp
          args: [--server]

Moving the migration into its own job is beneficial if you want to scale your application horizontally. Your migration needs to run only once, so it doesn't make sense to run it for each deployed replica.

Also, in case a pod crashes and restarts, the migration doesn't need to run again. So having it in a separate one-time job makes sense.

The main chart structure would look like this.

.
├── Chart.lock
├── charts
│   └── mysql-8.8.26.tgz
├── Chart.yaml
├── templates
│   ├── deployment.yaml    # waits for db migration job
│   └── migration-job.yaml # waits for mysql statefulset master pod
└── values.yaml
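
The mysql dependency shown in the tree above would be declared in Chart.yaml. A minimal sketch, assuming the Bitnami mysql chart (the repository URL and chart name/version here are illustrative, matching the `mysql-8.8.26.tgz` in the tree):

```yaml
apiVersion: v2
name: myapp
description: App with a mysql sub-chart dependency
version: 0.1.0
dependencies:
  - name: mysql
    version: 8.8.26
    repository: https://charts.bitnami.com/bitnami
```

Running `helm dependency update` then fetches the chart archive into `charts/` and writes `Chart.lock`.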
就像说晚安 2025-01-17 05:31:17

You can achieve this using helm hooks and K8s Jobs; below is the same setup defined for a Rails application.

The first step is to define a k8s job that creates and populates the db:

apiVersion: batch/v1
kind: Job
metadata:
  name: {{ template "my-chart.name" . }}-db-prepare
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": hook-succeeded
  labels:
    app: {{ template "my-chart.name" . }}
    chart: {{ template "my-chart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  backoffLimit: 4
  template:
    metadata:
      labels:
        app: {{ template "my-chart.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
      - name: {{ template "my-chart.name" . }}-db-prepare
        image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        command: ["/docker-entrypoint.sh"]
        args: ["rake", "db:extensions", "db:migrate", "db:seed"]
        envFrom:
        - configMapRef:
            name: {{ template "my-chart.name" . }}-configmap
        - secretRef:
            name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "my-chart.name" . }}-secrets{{- end }}
      initContainers:
      - name: init-wait-for-dependencies
        image: wshihadeh/wait_for:v1.2
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        command: ["/docker-entrypoint.sh"]
        args: ["wait_for_tcp", "postgress:DATABASE_HOST:DATABASE_PORT"]
        envFrom:
        - configMapRef:
            name: {{ template "my-chart.name" . }}-configmap
        - secretRef:
            name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "my-chart.name" . }}-secrets{{- end }}
      imagePullSecrets:
      - name: {{ .Values.imagePullSecretName }}
      restartPolicy: Never

Note the following:
1. The Job definition has helm hooks so it runs on each deployment and is the first task:

    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
    "helm.sh/hook-delete-policy": hook-succeeded

2. The container command takes care of preparing the db:

command: ["/docker-entrypoint.sh"]
args: ["rake", "db:extensions", "db:migrate", "db:seed"]

3. The job will not start until the db connection is up (this is achieved via initContainers):

args: ["wait_for_tcp", "postgress:DATABASE_HOST:DATABASE_PORT"]
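
If you'd rather not depend on the third-party `wshihadeh/wait_for` image, the same TCP wait can be sketched with a plain busybox init container (the image tag, env var names, and host/port values here are illustrative assumptions):

```yaml
initContainers:
  - name: init-wait-for-dependencies
    image: busybox:1.36
    command: ["sh", "-c"]
    args:
      - |
        # Loop until a TCP connection to the database succeeds
        until nc -z "$DATABASE_HOST" "$DATABASE_PORT"; do
          echo "waiting for database"
          sleep 2
        done
    env:
      - name: DATABASE_HOST
        value: postgres
      - name: DATABASE_PORT
        value: "5432"
```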

The second step is to define the application deployment object. This can be a regular deployment object (make sure you don't use helm hooks). Example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "my-chart.name" . }}-web
  annotations:
    checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum  }}
    checksum/secret: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum  }}
  labels:
    app: {{ template "my-chart.name" . }}
    chart: {{ template "my-chart.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.webReplicaCount }}
  selector:
    matchLabels:
      app: {{ template "my-chart.name" . }}
      release: {{ .Release.Name }}
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum  }}
        checksum/secret: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum  }}
      labels:
        app: {{ template "my-chart.name" . }}
        release: {{ .Release.Name }}
        service: web
    spec:
      imagePullSecrets:
      - name: {{ .Values.imagePullSecretName }}
      containers:
        - name: {{ template "my-chart.name" . }}-web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          command: ["/docker-entrypoint.sh"]
          args: ["web"]
          envFrom:
          - configMapRef:
              name: {{ template "my-chart.name" . }}-configmap
          - secretRef:
              name: {{ if .Values.existingSecret }}{{ .Values.existingSecret }}{{- else }}{{ template "my-chart.name" . }}-secrets{{- end }}
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          resources:
{{ toYaml .Values.resources | indent 12 }}
      restartPolicy: {{ .Values.restartPolicy  }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }}
何处潇湘 2025-01-17 05:31:17

If I understand correctly, you want to build a dependency chain in your deployment strategy to ensure certain things are prepared before any of your applications start. In your case, you want a deployed and pre-populated database before your app starts.

I propose not to build a dependency chain like this, because it complicates your deployment pipeline and prevents proper scaling of your deployment processes if you start to deploy more than a couple of apps in the future. In highly dynamic environments like kubernetes, every deployment should be able to check the prerequisites it needs to start on its own, without depending on an order of deployments.

This can be achieved with a combination of initContainers and probes. Both can be specified per deployment to prevent it from failing if certain prerequisites are not met and/or to fulfill certain prerequisites before a service starts routing traffic to your deployment (in your case, the database).

In short:

  • To populate a database volume before the database starts, use an initContainer.
  • To let the database serve traffic after its initialization and prepopulation, define probes to check for these conditions. Your database will only start to serve traffic after its livenessProbe and readinessProbe have succeeded. If it needs extra time, protect the pod from being terminated with a startupProbe.
  • To ensure the deployment of your app does not start and fail before the database is ready, use an initContainer to check whether the database is ready to serve traffic before your app starts.
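
The probe side of the list above can be sketched roughly as follows, assuming a Postgres database on port 5432 (the image, probe commands, and timings are illustrative, not prescriptive):

```yaml
containers:
  - name: postgres
    image: postgres:15
    env:
      - name: POSTGRES_PASSWORD
        value: example   # use a Secret in a real deployment
    # Gives the database up to ~5 minutes to initialize before
    # liveness checks can terminate the pod
    startupProbe:
      exec:
        command: ["pg_isready", "-U", "postgres"]
      failureThreshold: 30
      periodSeconds: 10
    # Pod is only added to Service endpoints once this succeeds
    readinessProbe:
      exec:
        command: ["pg_isready", "-U", "postgres"]
      periodSeconds: 5
    # Restarts the container if the port stops accepting connections
    livenessProbe:
      tcpSocket:
        port: 5432
      periodSeconds: 10
```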

Check out the Kubernetes documentation on init containers and probes for more information.
