Unable to deploy pgAdmin on an RKE2 Kubernetes cluster

Posted on 2025-02-12 18:19:57

I want to deploy pgAdmin on an RKE2 Kubernetes cluster to access databases. Unfortunately, the pgadmin pod crashes, I think due to PSP issues. I know PSP is deprecated and we're planning to switch to OPA soon, but it would be useful to have pgAdmin working in the meantime.

The deployment file looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin
spec:
  selector:
   matchLabels:
    app: pgadmin
  replicas: 1
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      containers:
        - name: pgadmin4
          image: dpage/pgadmin4:latest
          env:
           - name: PGADMIN_DEFAULT_EMAIL
             value: "[email protected]"
           - name: PGADMIN_DEFAULT_PASSWORD
             value: "test"
           - name: PGADMIN_PORT
             value: "80"
          ports:
            - containerPort: 80
              name: pgadminport
          securityContext:
            runAsUser: 0
            runAsGroup: 0
            allowPrivilegeEscalation: true
            readOnlyRootFilesystem: false
---
apiVersion: v1
kind: Service
metadata:
  name: pgadmin
  labels:
    app: pgadmin
spec:
  selector:
   app: pgadmin
  type: NodePort
  ports:
   - port: 80
     nodePort: 30200

It returns logs with permission issues:

/entrypoint.sh: line 62: /venv/bin/python3: Operation not permitted
sudo: PERM_SUDOERS: setresuid(-1, 1, -1): Operation not permitted
sudo: no valid sudoers sources found, quitting
sudo: error initializing audit plugin sudoers_audit
/entrypoint.sh: line 84: /venv/bin/python3: Operation not permitted
/entrypoint.sh: exec: line 92: /venv/bin/gunicorn: Operation not permitted

When I change the runAsUser and runAsGroup values to 5050, it returns these logs:

/entrypoint.sh: line 62: /venv/bin/python3: Operation not permitted
sudo: unable to change to root gid: Operation not permitted
sudo: error initializing audit plugin sudoers_audit
/entrypoint.sh: line 84: /venv/bin/python3: Operation not permitted
/entrypoint.sh: exec: line 92: /venv/bin/gunicorn: Operation not permitted

When I edit the runAsGroup variable back to 0, it returns these logs:

/entrypoint.sh: line 62: /venv/bin/python3: Operation not permitted
sudo: PERM_SUDOERS: setresuid(-1, 1, -1): Operation not permitted
sudo: no valid sudoers sources found, quitting
sudo: setresuid() [0, 0, 0] -> [5050, -1, -1]: Operation not permitted
sudo: error initializing audit plugin sudoers_audit
/entrypoint.sh: line 84: /venv/bin/python3: Operation not permitted
/entrypoint.sh: exec: line 92: /venv/bin/gunicorn: Operation not permitted

UPDATE 1:
The PSP that's being used looks like this:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  annotations:
    psp.rke2.io/global-restricted: resolved
  creationTimestamp: "2022-06-30T14:00:25Z"
  name: global-restricted-psp
  resourceVersion: "3493795"
  uid: b7209f38-9609-4b81-b3ef-ab7a17b39bbd
spec:
  allowPrivilegeEscalation: true
  fsGroup:
    ranges:
    - max: 65535
      min: 0
    rule: MustRunAs
  requiredDropCapabilities:
  - ALL
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    ranges:
    - max: 65535
      min: 0
    rule: MustRunAs
  volumes:
  - configMap
  - emptyDir
  - projected
  - secret
  - downwardAPI
  - persistentVolumeClaim

Any ideas?

慈悲佛祖 2025-02-19 18:19:57

I think what you're missing here is the configuration to handle persistent data. I tried the same deployment file as yours and just added the volumes & volumeMounts config, albeit with an emptyDir (you might want to persist data though), and it works.

I then use the command

kubectl port-forward pgadmin-6ff557759c-m5cxn 8080:80 

to be able to access the pgAdmin console locally at http://127.0.0.1:8080.
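Since the pod name (pgadmin-6ff557759c-m5cxn) is generated, it can be handier to port-forward the Deployment or Service instead of looking up the pod name each time; a small sketch, assuming the manifests above are applied in the current namespace:

kubectl port-forward deployment/pgadmin 8080:80
# or go through the Service
kubectl port-forward service/pgadmin 8080:80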

Here's the deployment.yaml file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pgadmin
spec:
  selector:
   matchLabels:
    app: pgadmin
  replicas: 1
  template:
    metadata:
      labels:
        app: pgadmin
    spec:
      containers:
        - name: pgadmin4
          image: dpage/pgadmin4:latest
          env:
           - name: PGADMIN_DEFAULT_EMAIL
             value: "[email protected]"
           - name: PGADMIN_DEFAULT_PASSWORD
             value: "test"
           - name: PGADMIN_PORT
             value: "80"
          ports:
            - containerPort: 80
              name: pgadminport
          securityContext:
            runAsUser: 5050
            runAsGroup: 5050
            allowPrivilegeEscalation: true
            readOnlyRootFilesystem: false
          volumeMounts:
          - mountPath: /var/lib/pgadmin
            name: pgadmin-data
      volumes:
      - emptyDir: {}
        name: pgadmin-data

Well, I also changed the runAsUser & runAsGroup to 5050 (taking some inspiration from the Helm chart here: https://artifacthub.io/packages/helm/runix/pgadmin4), though it may not be needed.
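If you'd rather keep the raw manifests but still persist the data (instead of the emptyDir above), swapping the volume for a PersistentVolumeClaim is a small change, and the PSP in the question already allows persistentVolumeClaim volumes. A minimal sketch, assuming the cluster has a default storageClass; the claim name pgadmin-data-pvc is made up here:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgadmin-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Then replace the emptyDir volume in the Deployment with the claim (you may also need fsGroup: 5050 in the pod-level securityContext so the non-root user can write to the mounted volume):

      volumes:
      - name: pgadmin-data
        persistentVolumeClaim:
          claimName: pgadmin-data-pvc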

Having said that, it'd be a lot easier for you to use a Helm chart, as it lets you easily handle the config to add a PersistentVolume via an existing PersistentVolumeClaim or a storageClass.
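For reference, installing that chart would look roughly like this; a minimal sketch, assuming the repository URL listed on the chart's ArtifactHub page (https://helm.runix.net) is still current:

helm repo add runix https://helm.runix.net
helm repo update
helm install pgadmin4 runix/pgadmin4

Persistence and similar settings can then be handled through the chart's values rather than by hand-editing the Deployment.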

Hope this helps!
