Can the Helm Zookeeper storageClass be changed in the Sentry charts values.yaml?
We want to use Sentry for error logging (on-prem for our use case), but since we use k8s for everything we chose the Sentry Kubernetes charts.
We are using a cloud provider where leaving the storageClass for a PVC blank/empty does not create the PVC and instead leaves the system in Pending status, so we need to set the storageClass manually, which is described more or less if you dig into the values.yaml file of the Sentry for k8s Helm charts.
The magic needed is storageClass: csi-disk, which lets our cloud provider know it can attach PVCs of that type (instead of doing nothing, as described above).
What we've done below also matches the values.yaml provided by Bitnami: https://github.com/bitnami/charts/blob/master/bitnami/zookeeper/values.yaml, which we are supposed to check as mentioned in the chart's own values.yaml: https://github.com/sentry-kubernetes/charts/blob/develop/sentry/values.yaml#L714
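For reference, the persistence block in that Bitnami values.yaml is shaped roughly like this (abridged; key names taken from the linked file, defaults may differ between chart versions):

persistence:
  enabled: true
  storageClass: ""
  annotations: {}
  accessModes:
    - ReadWriteOnce
  size: 8Gi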
And all the other Bitnami charts work (PGDB etc.); I have left one example below and commented out the rest.
But no matter what I do I cannot get storageClass parsed into the desired manifest, and I can't do a live manifest change since it's a StatefulSet, so I somehow need to get storageClass parsed in correctly.
I've already spent quite a lot of time trying everything, looking for typos, etc.
We use Helm and ArgoCD and this is the ArgoCD app:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sentry-dev
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: sentry
    server: https://kubernetes.default.svc
  project: default
  source:
    repoURL: https://sentry-kubernetes.github.io/charts
    chart: sentry
    targetRevision: 13.0.1
    helm:
      values: |
        ingress:
          enabled: true
          annotations:
            kubernetes.io/ingress.class: nginx
            nginx.ingress.kubernetes.io/use-regex: "true"
            nginx.ingress.kubernetes.io/ssl-redirect: "true"
            nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
            cert-manager.io/cluster-issuer: "letsencrypt-prod"
          hostname: ...
          tls:
            # ...
        clickhouse:
          # ..
        filestore:
          # ..
        redis:
          master:
            #...
          replica:
            #...
        rabbitmq:
          persistence:
            enabled: true
            annotations:
              everest.io/disk-volume-type: SSD
            labels:
              failure-domain.beta.kubernetes.io/region: eu-de
              failure-domain.beta.kubernetes.io/zone:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 8Gi
            storageClass: csi-disk
        kafka:
          # ...
        postgresql:
          # ...
        zookeeper:
          enabled: true
          persistence:
            enabled: true
            annotations:
              everest.io/disk-volume-type: SSD
            labels:
              failure-domain.beta.kubernetes.io/region: eu-de
              failure-domain.beta.kubernetes.io/zone:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 8Gi
            storageClass: csi-disk
            storageClassName: csi-disk # tried both storageClass and storageClassName, together and separately!
The desired manifest is always stuck at the following (changing metadata or any other spec also fails, so somehow the chart does not accept any values.yaml changes):
volumeClaimTemplates:
  - metadata:
      annotations: null
      name: data
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 8Gi
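For contrast, a rendered template that did pick up the value would look roughly like this (storageClassName is the field name Kubernetes expects inside a volumeClaimTemplates spec; the annotation placement is an assumption on our part):

volumeClaimTemplates:
  - metadata:
      name: data
      annotations:
        everest.io/disk-volume-type: SSD
    spec:
      accessModes:
        - ReadWriteOnce
      storageClassName: csi-disk
      resources:
        requests:
          storage: 8Gi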
I also have a GH issue open: https://github.com/sentry-kubernetes/charts/issues/606
Comments (2)
Can you create the PVC prior to Helm and use that as the existing claim?
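A sketch of that approach: a StatefulSet adopts a pre-created PVC whose name matches <template>-<statefulset>-<ordinal>, so assuming the StatefulSet is named sentry-zookeeper and its claim template is named data (both names are assumptions here), something like this could be applied before syncing the chart:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-sentry-zookeeper-0 # hypothetical; must match the name the StatefulSet generates
  namespace: sentry
  annotations:
    everest.io/disk-volume-type: SSD
spec:
  storageClassName: csi-disk
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi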
And finally I got my answer from the GitHub issue, and I am reposting it here:
Kafka has its own internal Zookeeper dependency, so you can do something like this:
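A minimal sketch of that nesting, assuming the Kafka sub-chart passes persistence values through to its bundled Zookeeper under the kafka.zookeeper key:

kafka:
  zookeeper:
    persistence:
      enabled: true
      annotations:
        everest.io/disk-volume-type: SSD
      storageClass: csi-disk
      size: 8Gi # Bitnami zookeeper uses size rather than resources.requests

Since the stuck StatefulSet is rendered by the Kafka dependency, values nested under kafka reach it, while the top-level zookeeper block configures a different sub-chart. Most Bitnami charts also honor global.storageClass, which may be a one-line alternative that covers every sub-chart at once.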