Readiness probe fails running the script readiness-probe.sh on OpenShift with the Bitnami MongoDB Helm chart
While deploying Bitnami's MongoDB Helm chart to OpenShift, I get the error "Readiness probe failed".
The health check settings for the readiness and liveness probes look like this:
livenessProbe:
  failureThreshold: 6
  initialDelaySeconds: 30
  periodSeconds: 20
  successThreshold: 1
  timeoutSeconds: 10
  exec:
    command:
      - /bitnami/scripts/ping-mongodb.sh
readinessProbe:
  failureThreshold: 6
  initialDelaySeconds: 5
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
  exec:
    command:
      - /bitnami/scripts/readiness-probe.sh
The script called by the command (/bitnami/scripts/readiness-probe.sh) looks like this:
#!/bin/bash
# Run the proper check depending on the version
[[ $(mongod -version | grep "db version") =~ ([0-9]+\.[0-9]+\.[0-9]+) ]] && VERSION=${BASH_REMATCH[1]}
. /opt/bitnami/scripts/libversion.sh
VERSION_MAJOR="$(get_sematic_version "$VERSION" 1)"
VERSION_MINOR="$(get_sematic_version "$VERSION" 2)"
VERSION_PATCH="$(get_sematic_version "$VERSION" 3)"
if [[ ( "$VERSION_MAJOR" -ge 5 ) || ( "$VERSION_MAJOR" -ge 4 && "$VERSION_MINOR" -ge 4 && "$VERSION_PATCH" -ge 2 ) ]]; then
    mongosh $TLS_OPTIONS --port $MONGODB_PORT_NUMBER --eval 'db.hello().isWritablePrimary || db.hello().secondary' | grep -q 'true'
else
    mongosh $TLS_OPTIONS --port $MONGODB_PORT_NUMBER --eval 'db.isMaster().ismaster || db.isMaster().secondary' | grep -q 'true'
fi
When this script runs, the Pod becomes very slow.
No matter how high I set the readiness probe timings, it doesn't work.
I checked whether the script exists in the running Pod --> the file /bitnami/scripts/readiness-probe.sh does exist in the Pod.
I changed the probe command to just "cat /bitnami/scripts/readiness-probe.sh" in the readiness probe settings --> IT WORKS:
livenessProbe:
  failureThreshold: 6
  initialDelaySeconds: 30
  periodSeconds: 20
  successThreshold: 1
  timeoutSeconds: 10
  exec:
    command:
      - cat
      - /bitnami/scripts/ping-mongodb.sh
readinessProbe:
  failureThreshold: 6
  initialDelaySeconds: 5
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
  exec:
    command:
      - cat
      - /bitnami/scripts/readiness-probe.sh
I increased the CPU and memory --> NO success!
I have noticed that the Pod becomes very slow as soon as a MongoDB command is executed.
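For context on why a slow `mongosh` call makes the probe fail: an exec probe whose command has not exited after timeoutSeconds is killed by the kubelet and counted as a failure, which is consistent with `cat` (instant) succeeding while the real script (slow) fails. A minimal local sketch of that semantics, using coreutils `timeout` standing in for the kubelet (2-second budget, 5-second "probe"):

```shell
# An exec probe that runs longer than timeoutSeconds is killed and the
# probe counts as failed; `timeout` exits with status 124 when it kills
# the command, so the || branch fires.
timeout 2 sh -c 'sleep 5; echo ready' && echo "probe passed" || echo "probe failed (exit $?)"
# → probe failed (exit 124)
```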
1 Answer
I had the same problem and saw that the timeoutSeconds in the deployment was not high enough,
so I increased the timeout to 20 and the error went away.
I don't know how many resources it needs to run stably, though.
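Assuming the standard Bitnami chart probe keys (verify against the values.yaml of the chart version actually in use), this fix can be expressed as a Helm values override instead of editing the rendered Deployment/StatefulSet by hand:

```yaml
# values-probes.yaml -- pass with `helm upgrade ... -f values-probes.yaml`
# Key names follow the usual Bitnami chart layout; treat them as an
# assumption and check your chart's values.yaml before applying.
readinessProbe:
  enabled: true
  timeoutSeconds: 20
livenessProbe:
  enabled: true
  timeoutSeconds: 20
```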