My Kubernetes MongoDB service is not saving data
This is the mongodb yaml file.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
        - name: auth-mongo
          image: mongo
          volumeMounts:
            - mountPath: "/data/db/auth"
              name: auth-db-storage
      volumes:
        - name: auth-db-storage
          persistentVolumeClaim:
            claimName: mongo-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-srv
spec:
  selector:
    app: auth-mongo
  ports:
    - name: db
      protocol: TCP
      port: 27017
      targetPort: 27017
And this is the persistent volume file.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv
  labels:
    type: local
spec:
  storageClassName: mongo
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/db"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  storageClassName: mongo
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
I'm running this on Ubuntu using kubectl and minikube v1.25.1.
When I run kubectl describe pod on the mongodb pod, I see this:
Volumes:
  auth-db-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mongo-pvc
    ReadOnly:   false
I have a similar setup for other pods to store files, and it's working fine. But with mongodb, every time I restart the pods, the data is lost. Can someone help me?
EDIT: I noticed that if I change the mongodb mountPath to /data/db, it works fine. But if I have multiple mongodb pods running on /data/db, they don't work. So do I need one persistent volume claim for EACH mongodb pod?
With these yaml files, you are mounting the /data/db directory on the minikube node to /data/db/auth inside the auth-mongo pod. First, change the mountPath from /data/db/auth to /data/db in your deployment, so that mongodb reads and writes the database at its default data directory. Even if you delete the deployment, the database will stay in /data/db on the minikube node, and when a new pod from this deployment starts, mongodb will open the existing database with all data preserved.
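For clarity, here is the corrected pod spec fragment; only the mountPath changes, and all names are taken from your own manifests:

```yaml
containers:
  - name: auth-mongo
    image: mongo
    volumeMounts:
      - mountPath: "/data/db"   # was "/data/db/auth"; mongod's default dbPath is /data/db
        name: auth-db-storage
volumes:
  - name: auth-db-storage
    persistentVolumeClaim:
      claimName: mongo-pvc
```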
Second, you can't run multiple mongodb pods this way by simply scaling replicas in the deployment, because a second mongod in another pod cannot use a data directory that the first pod is already using; it will fail to start with a lock error on the data directory.
So the solution is either to keep just 1 replica in your deployment or, for example, to use the MongoDB chart packaged by Bitnami:
https://github.com/bitnami/charts/tree/master/bitnami/mongodb
See "Understand MongoDB Architecture Options" there, and also have a look at the MongoDB Community Kubernetes Operator.
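As a rough sketch of the Bitnami route, an install in replica-set mode looks something like this (the release name my-mongo and the replica count are just example values):

```shell
# Add the Bitnami repo and install MongoDB as a replica set,
# so multiple mongod pods can run, each with its own PVC.
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-mongo bitnami/mongodb \
  --set architecture=replicaset \
  --set replicaCount=3
```

In replicaset mode the chart deploys a StatefulSet, which provisions a separate PersistentVolumeClaim per pod, avoiding the shared-directory lock problem entirely.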