Read-only file system error (EFS as persistent storage in EKS)



----Storage files----
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
    name: aws-efs
provisioner: aws.io/aws-efs
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
    name: efs-claim
    namespace: dev
    annotations:
        volume.beta.kubernetes.io/storage-class: "aws-efs"
spec:
    accessModes:
        - ReadWriteMany
    resources:
        requests:
            storage: 20Gi

--------Deployment file--------------------
apiVersion: v1
kind: ServiceAccount
metadata:
  name: efs-provisioner
  namespace: dev
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: efs-provisioner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]


resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: efs-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: efs-provisioner
subjects:
- kind: ServiceAccount
  name: efs-provisioner
  namespace: dev

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-efs-provisioner
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]

---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-efs-provisioner
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: leader-locking-efs-provisioner
subjects:
- kind: ServiceAccount
  name: efs-provisioner
  namespace: dev

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: efs-provisioner
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: efs-provisioner
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      serviceAccountName: efs-provisioner
      containers:
      - name: efs-provisioner
        image: quay.io/external_storage/efs-provisioner:latest
        env:
        - name: FILE_SYSTEM_ID
          valueFrom:
            configMapKeyRef:
              name: efs-provisioner-config
              key: file.system.id
        - name: AWS_REGION
          valueFrom:
            configMapKeyRef:
              name: efs-provisioner-config
              key: aws.region
        - name: DNS_NAME
          valueFrom:
            configMapKeyRef:
              name: efs-provisioner-config
              key: dns.name
              optional: true
        - name: PROVISIONER_NAME
          valueFrom:
            configMapKeyRef:
              name: efs-provisioner-config
              key: provisioner.name
        volumeMounts:
        - name: pv-volume
          mountPath: /efs-mount
      volumes:
      - name: pv-volume
        nfs:
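          # EFS mount target DNS name, typically <file-system-id>.efs.<aws-region>.amazonaws.com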
          server: <File-system-dns>
          path: /

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-provisioner-config
  namespace: dev
data:
  file.system.id: <File-system-id>
  aws.region: us-east-2
  provisioner.name: aws.io/aws-efs
  dns.name: ""


------release file----
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: airflow
  namespace: dev
  annotations:
    flux.weave.works/automated: "true"
spec:
  releaseName: airflow-dev
  chart:
    repository: https://airflow.apache.org
    name: airflow
    version: 1.6.0
  values:
    fernetKey: <fernet-key>
    defaultAirflowTag: "2.3.0"
    env:
      - name: "AIRFLOW__KUBERNETES__DAGS_IN_IMAGE"
        value: "False"
      - name: "AIRFLOW__KUBERNETES__NAMESPACE"
        value: "dev"
        value: "apache/airflow"
      - name: "AIRFLOW__KUBERNETES__WORKER_CONTAINER_TAG"
        value: "latest"
      - name: "AIRFLOW__KUBERNETES__RUN_AS_USER"
        value: "50000"
      - name: "AIRFLOW__CORE__LOAD_EXAMPLES"
        value: "False"
    executor: "KubernetesExecutor"
    dags:
      persistence:
        enabled: true
        size: 20Gi
        storageClassName: aws-efs
        existingClaim: efs-claim
        accessMode: ReadWriteMany
      gitSync:
        enabled: true
        repo: git@<git-host>:<git-repo>
        branch: master
        maxFailures: 0
        subPath: ""
        sshKeySecret: airflow-git-private-dags
        wait: 30

When I exec into the scheduler pod and go to the /opt/airflow/dags directory, I get a "read-only file system" error. When I run "df -h" inside the pod, I can see that the file system is mounted, but any write attempt still fails with the read-only error.
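
(For reference, a quick way to confirm whether the volume itself is mounted read-only, rather than this being a file-permission problem, would be something like the following; the scheduler pod name is a placeholder:)

# Check the mount flags ("ro" vs "rw") reported for the dags volume inside the scheduler pod
kubectl exec -n dev <airflow-scheduler-pod> -- grep dags /proc/mounts

# Reproduce the error explicitly by writing a test file into the dags directory
kubectl exec -n dev <airflow-scheduler-pod> -- touch /opt/airflow/dags/write-test.py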

kubectl get pv -n dev
This shows that the PV has "RWX" access and that it is mounted to my Airflow triggerer and scheduler pods.
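
(For completeness, the bound PV can be inspected like this to see its access modes and any mount options; <pv-name> is a placeholder taken from the claim's volumeName:)

# Find the PV bound to the claim, then dump its full spec
kubectl get pvc efs-claim -n dev -o jsonpath='{.spec.volumeName}'
kubectl get pv <pv-name> -o yaml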
