Fabric CA health check
I have a hyperledger fabric network v2.2.0 deployed with 2 peer orgs and an orderer org in a kubernetes cluster. Each org has its own CA server. The CA pod sometimes keeps restarting. To find out whether the CA server's service is reachable, I am trying to use the healthz API on port 9443.

I have used the livenessProbe condition in the CA deployment like so:
livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /healthz
    port: 9443
    scheme: HTTP
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
After configuring this liveness probe, the pod keeps restarting with the event Liveness probe failed: HTTP probe failed with status code: 400. Why might this be happening?
1 Answer
HTTP 400 code: this indicates that Kubernetes is sending the request in a way hyperledger is rejecting, but without more information it is hard to say where the problem is. Some quick checks to start with:

- Send some GET requests to the hyperledger CA's /healthz resource yourself. What do you get? You should get back either a 200 "OK" if everything is functioning, or a 503 "Service Unavailable" with details of which nodes are down (docs).
- Run kubectl describe pod liveness-request. You should see a few lines towards the bottom describing the state of the liveness probe in more detail.

Some other things to investigate:
- There are some other httpGet options that might be helpful, and it is worth checking whether TLS is enabled on the CA's operations endpoint. In particular, an httpGet probe cannot present a client certificate, so it will be rejected if clientAuthRequired is set to true.
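If TLS turns out to be enabled on the operations endpoint, one sketch of an adjusted probe follows. The scheme change is the standard Kubernetes httpGet option; the commented exec variant and its certificate paths are assumptions for illustration, not taken from the original deployment:

```yaml
livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /healthz
    port: 9443
    scheme: HTTPS        # if the operations endpoint serves TLS
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1

# If clientAuthRequired is true, httpGet cannot supply a client
# certificate, but an exec probe can (file paths are assumptions):
# livenessProbe:
#   exec:
#     command: ["sh", "-c",
#       "curl -sf --cacert /path/to/tls-ca.pem --cert /path/to/client.pem --key /path/to/client.key https://localhost:9443/healthz"]
```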
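For the first quick check, the kubelet's httpGet probe semantics can be reproduced from any machine that can reach the CA: a plain GET where any status from 200 to 399 counts as success, anything else (such as the 400 here) as failure. A minimal Python sketch, with the target URL being an assumption about your service name:

```python
import urllib.error
import urllib.request

def probe(url: str, timeout: float = 1.0):
    """Mimic a kubelet httpGet probe: GET the URL and return
    (status_code, healthy), where 200-399 counts as healthy."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            code = resp.status
    except urllib.error.HTTPError as exc:
        code = exc.code  # e.g. the 400 the probe event is reporting
    return code, 200 <= code < 400

# Example, assuming an in-cluster service named ca-org1:
# print(probe("http://ca-org1:9443/healthz"))
```

Running this against the CA's operations port shows the exact status the kubelet sees, which narrows down whether the 400 comes from the CA itself or from something in between.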