My Kubernetes MongoDB service is not persisting data

Posted on 2025-01-15 01:43:53


This is the MongoDB YAML file.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
        - name: auth-mongo
          image: mongo
          volumeMounts:
            - mountPath: "/data/db/auth"
              name: auth-db-storage
      volumes:
        - name: auth-db-storage
          persistentVolumeClaim:
            claimName: mongo-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-srv
spec:
  selector:
    app: auth-mongo
  ports:
    - name: db
      protocol: TCP
      port: 27017
      targetPort: 27017

And this is the persistent volume file.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  labels:
    type: local
spec:
  storageClassName: mongo
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/db"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  storageClassName: mongo
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

I'm running this on Ubuntu using kubectl and minikube v1.25.1.

When I run kubectl describe pod on the MongoDB pod, I see this:

Volumes:
  auth-db-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mongo-pvc
    ReadOnly:   false

I have a similar setup for other pods to store files, and it's working fine. But with MongoDB, every time I restart the pods, the data is lost. Can someone help me?

EDIT: I noticed that if I change the MongoDB mountPath to /data/db, it works fine. But if I have multiple MongoDB pods running on /data/db, they don't work. So do I need one persistent volume claim for EACH MongoDB pod?

Comments (1)

江挽川 2025-01-22 01:43:53


When using these YAML files, you are mounting the /data/db dir on the minikube node to /data/db/auth in the auth-mongo pod. The mongo image keeps its data in /data/db inside the container, so mongod writes to the container's ephemeral filesystem and nothing ever lands on the volume; that is why the data disappears on restart.

First, you should change /data/db/auth to /data/db in your k8s deployment so that the volume is mounted at MongoDB's default db location. Even if you delete the deployment, the db will stay in the /data/db dir on the minikube node, and after a new pod starts from this deployment, mongod will open this existing db (all data saved).
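
For reference, the corrected container section of the Deployment looks like this (only the mountPath changes compared to the manifest in the question):

      containers:
        - name: auth-mongo
          image: mongo
          volumeMounts:
            - mountPath: "/data/db"   # mongo's default dbPath, now backed by the PVC
              name: auth-db-storage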

Second, you can't run multiple MongoDB pods like this just by scaling replicas in the deployment, because a second mongod in another pod can't open a data directory that the first pod's mongod has already locked. MongoDB will throw this error:

Unable to lock the lock file: /data/db/mongod.lock (Resource temporarily unavailable). Another mongod instance is already running on the /data/db directory

So the solution is either to keep just 1 replica in your deployment or, for example, to use the MongoDB Helm chart packaged by Bitnami:
https://github.com/bitnami/charts/tree/master/bitnami/mongodb

This chart bootstraps a MongoDB(®) deployment on a Kubernetes cluster using the Helm package manager.

$ helm install my-release bitnami/mongodb --set architecture=replicaset --set replicaCount=2
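
With --set architecture=replicaset, the chart manages the members through a StatefulSet, so every member gets its own PersistentVolumeClaim and the mongod.lock conflict from a shared data directory never comes up.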

See also: Understand MongoDB Architecture Options.
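
If you would rather stick to plain manifests than Helm, the usual answer to the "one claim per EACH pod" question from the edit is a StatefulSet with volumeClaimTemplates, which stamps out one PVC per pod automatically. Below is a rough sketch reusing the names from the question; treat it as an untested assumption on my part: each generated claim still needs a matching PV if you provision hostPath volumes by hand, and standalone mongods with separate volumes still won't share data unless you also configure them as a replica set.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: auth-mongo
spec:
  serviceName: auth-mongo-srv     # governing service for stable per-pod DNS
  replicas: 2
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
        - name: auth-mongo
          image: mongo
          volumeMounts:
            - mountPath: "/data/db"
              name: auth-db-storage
  volumeClaimTemplates:           # one PVC per pod: auth-db-storage-auth-mongo-0, -1, ...
    - metadata:
        name: auth-db-storage
      spec:
        storageClassName: mongo
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 5Gi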

Also, check this link: MongoDB Community Kubernetes Operator (https://github.com/mongodb/mongodb-kubernetes-operator).

This is a Kubernetes Operator which deploys MongoDB Community into Kubernetes clusters.
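
To give an idea of what that looks like, here is a minimal custom resource roughly following the operator's sample manifests; the version, names, and the referenced password Secret are placeholders I picked, so check them against the operator docs before use.

apiVersion: mongodbcommunity.mongodb.com/v1
kind: MongoDBCommunity
metadata:
  name: auth-mongo
spec:
  members: 3                      # replica set members, each with its own storage
  type: ReplicaSet
  version: "6.0.5"
  security:
    authentication:
      modes: ["SCRAM"]
  users:
    - name: app-user
      db: admin
      passwordSecretRef:
        name: app-user-password   # a Secret you create beforehand
      roles:
        - name: readWrite
          db: auth
      scramCredentialsSecretName: app-user-scram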
