Chaincode invoke fails to get endorsement on a remote cluster across all three orgs: org1 succeeds, but org2 and org3 fail. What could be wrong?
I have a Kubernetes cluster configured which builds perfectly when running via Docker Desktop, including invoking with successful endorsement via all three Chaincode containers in the network.
On the remote side, I'm using AWS EKS to deploy my nodes, and I have more recently followed this guide on deploying a production-ready peer. I already had EFS set up and in use as a k8s Persistent Volume, and this is populated with all the config each time I spool up a network. This means all the crypto material, connection profiles, etc. are mounted into the relevant containers, and as per best practice the references to these TLS certs point into this directory.
This all works as expected... my admin pods can communicate with my peers, the orderers connect, etcetera. I'm able to fully install chaincode, approve it and commit it to all three of my peers successfully.
When it comes to invoking the chaincode, my `org1` container always succeeds, and successfully communicates with the peer in its organization.
I'm aware of the `core.yaml` setting `localMspId`, and this is being overridden by the environment variable `CORE_PEER_LOCALMSPID` for each set of peers, such that in my org1 peer the value is `Org1MSP`, in org2 it's `Org2MSP`, etc.
When running `peer chaincode invoke`, the first container (org1) succeeds very quickly; the other two try to contact their peers and hang for the timeout period set in the default gRPC settings (110000 ms wait). I have also set the env var `CORE_PEER_ADDRESS_AUTODETECT: "true"` on my peers to ensure they don't try to resolve using hostnames like `peer0.org1` (this clearly works for org1 but not the other two).
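As a sketch of the overrides just described (values assumed from the description; only the MSP ID differs per org), org1's peer environment would contain something like:

```shell
# Sketch of org1's peer environment as described above (values assumed).
export CORE_PEER_LOCALMSPID="Org1MSP"        # overrides localMspId from core.yaml
export CORE_PEER_ADDRESS_AUTODETECT="true"   # auto-detect own address rather than rely on the hostname
# org2's peer would instead set CORE_PEER_LOCALMSPID="Org2MSP", org3's "Org3MSP", etc.
```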
The environment variables set for TLS in each of the containers correspond to the contents of the ones I am passing (in the correct order) with my invoke command:
```shell
peer chaincode invoke --ctor '${CC_INIT_ARGS}' --channelID ${CHANNEL_ID} --name ${CC_NAME} --cafile $ORDERER_TLS_ROOTCERT_FILE \
    --tls true -o orderer.${ORG}:7050 \
    --peerAddresses peer0.org1:7051 \
    --peerAddresses peer0.org2:7051 \
    --peerAddresses peer0.org3:7051 \
    --tlsRootCertFiles /etc/hyperledger/fabric-peer/client-root-tlscas/tlsca.org1-cert.pem \
    --tlsRootCertFiles /etc/hyperledger/fabric-peer/client-root-tlscas/tlsca.org2-cert.pem \
    --tlsRootCertFiles /etc/hyperledger/fabric-peer/client-root-tlscas/tlsca.org3-cert.pem >& invoke-log.txt
cat invoke-log.txt
```
That command is executed inside my container, and as mentioned, I have manually confirmed by inspecting all three containers and `cat`ing the contents of the files, versus doing the same with the above paths, and they match exactly. That is to say, the contents of `/etc/hyperledger/fabric-peer/client-root-tlscas/tlsca.org1-cert.pem` are equivalent to the `CORE_PEER_TLS_ROOTCERT_FILE` setting in org1, and so on per organization.
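Beyond comparing cert contents, a quick way to separate TLS problems from plain connectivity problems is to test DNS resolution and a TLS handshake against each peer address from inside the invoking pod (a diagnostic sketch; hosts and ports are taken from the invoke command above):

```shell
# Diagnostic sketch: check that each peer address from the invoke command
# resolves and accepts a TLS connection from inside the pod.
TARGETS="peer0.org1:7051 peer0.org2:7051 peer0.org3:7051"
for T in $TARGETS; do
  HOST="${T%%:*}"
  # DNS check: a failure here means the Service name isn't visible from this pod
  getent hosts "$HOST" >/dev/null || echo "DNS lookup failed for $HOST"
  # TLS check: errors out immediately if there is no route or no listener
  echo | openssl s_client -connect "$T" 2>&1 | head -n 3
done
```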
Example `org1` chaincode container logs:
```text
2022-02-23T13:47:07.255Z debug [c-api:lib/handler.js] [allorgs-5e707801] Calling chaincode Invoke(), response status: 200
2022-02-23T13:47:07.256Z info [c-api:lib/handler.js] [allorgs-5e707801] Calling chaincode Invoke() succeeded. Sending COMPLETED message back to peer
```
For the `org2` and `org3` containers, once the timeout finally elapses, they output:
```text
2022-02-23T12:24:05.045Z error [c-api:lib/handler.js] Chat stream with peer - on error: %j "Error: 14 UNAVAILABLE: No connection established\n    at Object.callErrorFromStatus (/usr/local/src/node_modules/@grpc/grpc-js/build/src/call.js:31:26)\n    at Object.onReceiveStatus (/usr/local/src/node_modules/@grpc/grpc-js/build/src/client.js:391:49)\n    at Object.onReceiveStatus (/usr/local/src/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:328:181)\n    at /usr/local/src/node_modules/@grpc/grpc-js/build/src/call-stream.js:182:78\n    at processTicksAndRejections (internal/process/task_queues.js:79:11)"
2022-02-23T12:24:05.045Z debug [c-api:lib/handler.js] Chat stream ending
```
I have also enabled `DEBUG` logs on everything and I'm gleaning nothing useful from them. Any help or suggestions would be greatly appreciated!
Comments (1)
The three peers share the same port. Is that even possible?
Also, when running invoke from the command line, I would normally use the following pattern, repeated for each peer, rather than the three peer addresses followed by the three TLS cert file paths.
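The snippet the comment refers to did not survive the page scrape; presumably the intended pattern interleaves each `--peerAddresses` with its own `--tlsRootCertFiles`. A hypothetical reconstruction, reusing the paths from the question:

```shell
# Hypothetical reconstruction of the suggested flag order: each peer address
# immediately followed by its own TLS root cert (paths taken from the question).
PEER_CONN_ARGS=""
for ORG in org1 org2 org3; do
  PEER_CONN_ARGS="$PEER_CONN_ARGS --peerAddresses peer0.${ORG}:7051"
  PEER_CONN_ARGS="$PEER_CONN_ARGS --tlsRootCertFiles /etc/hyperledger/fabric-peer/client-root-tlscas/tlsca.${ORG}-cert.pem"
done

# The invoke would then pass the interleaved pairs, e.g.:
# peer chaincode invoke --channelID "$CHANNEL_ID" --name "$CC_NAME" --tls true ... $PEER_CONN_ARGS
echo "$PEER_CONN_ARGS"
```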