Vault HA Mode (OSS) vs. Vault Enterprise
Hashicorp Vault Enterprise provides three main features: Performance Replication, Disaster Recovery, and Namespaces. My use case does not require Disaster Recovery, and for Performance Replication I could set up Vault OSS with a Consul backend and run many active clusters, which would be equivalent to Performance Replication. Is my understanding correct? Would that be feasible, avoiding the license while still having the same capabilities as Vault Enterprise?
1 Answer
Hashicorp includes a lot more in Vault Enterprise:
...but all of those features are broken up into tiers, so you don't get them all for one price; pricing is generally negotiated based on your client count. However, I wouldn't call Namespaces and replication clusters the "main" features of Enterprise; your company's use case is going to dictate your needs.
The reason that Performance Replication clusters are a thing is write divergence. Essentially, if you run two separate clusters off the same storage, each cluster is going to have a leader node, both leaders are going to try to write to the storage, and this can result in data loss. (Within a cluster, any node can respond to read requests, but if a non-leader node receives a write request, it forwards it to the leader node to manage and execute, and the response is forwarded back through the non-leader node. This is acceptable because write operations are much less frequent than read operations.) So, to prevent the data loss but still have multiple datacenters stood up in different regions, all with access to the same information, Hashicorp provides Performance Replication (https://www.vaultproject.io/docs/enterprise/replication#performance-replication) via Enterprise.
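The write-divergence problem above can be sketched with a toy simulation. This is illustrative only, not Vault's actual code; the `SharedStorage` class and `append_secret` function are hypothetical stand-ins for a Consul backend shared by two independent leader nodes:

```python
# Toy model of write divergence: two uncoordinated "leaders" doing
# read-modify-write against the same storage backend. Names here
# (SharedStorage, append_secret) are illustrative, not Vault APIs.

class SharedStorage:
    """Stands in for a single Consul backend shared by two clusters."""
    def __init__(self):
        self.data = {}

    def read(self, key):
        return self.data.get(key, [])

    def write(self, key, value):
        self.data[key] = value


def append_secret(storage, key, secret):
    # Each leader reads the current state and computes its new value.
    current = storage.read(key)
    return current + [secret]


storage = SharedStorage()

# Both leaders read the same (empty) state before either one writes:
leader_a_view = append_secret(storage, "secret/app", "token-A")
leader_b_view = append_secret(storage, "secret/app", "token-B")

# Leader A writes first, then Leader B blindly overwrites it.
storage.write("secret/app", leader_a_view)
storage.write("secret/app", leader_b_view)

# token-A has been silently lost: only ['token-B'] survives.
print(storage.read("secret/app"))
```

A real Vault HA cluster avoids this by funneling every write through a single elected leader; running two independent clusters on one backend reintroduces exactly this race.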
Performance Secondary clusters maintain their own leases, which alleviates a great deal of write traffic, and read operations are always able to be handled by non-primary nodes so this is nothing new. However, there are still some write operations that need to be managed by the leader node on the primary cluster. Performance Secondary and Performance Replication features are designed to know which requests to forward and which to keep local. There's also the concept of filters, which allows a Vault administrator to define rules to keep GDPR data in their European datacenter, and US Government data in their US datacenter, and prevent this data from being stored somewhere it shouldn't be, while still providing a majority of data from anywhere.
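The routing behavior described above can be sketched roughly as follows. This is a simplified, hypothetical model of the decision a Performance Secondary makes (the `route` function and path categories are assumptions for illustration, not Vault internals):

```python
# Hedged sketch of Performance Secondary request routing: reads and
# lease operations stay local, while writes to replicated data are
# forwarded to the primary cluster's leader. Simplified for clarity.

LOCAL = "handled locally on secondary"
FORWARD = "forwarded to primary leader"


def route(operation, path):
    # Secondaries maintain their own leases, so lease management
    # (renewals, revocations) never leaves the secondary cluster.
    if path.startswith("sys/leases/"):
        return LOCAL
    # Reads are served from the secondary's replicated copy of the data.
    if operation == "read":
        return LOCAL
    # Writes and deletes to replicated data must go to the primary.
    return FORWARD


print(route("read", "secret/data/app"))    # stays on the secondary
print(route("write", "secret/data/app"))   # goes to the primary
print(route("write", "sys/leases/renew"))  # stays on the secondary
```

This is the piece you cannot reproduce in OSS: two independent OSS clusters have no concept of "forward this write to the other cluster's leader," so they will diverge instead.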
So, can this be reproduced in Vault OSS? Not really. You might be able to trick Vault into thinking that some of its HA nodes are in a different datacenter via contrived networking, possibly using VPN tunnels to connect the cluster networks. I can't recommend this, of course; besides the fact that I've never tested it, I'd have concerns about network latency interfering with application functionality and storage access, lease management wouldn't scale, Hashicorp recommends a maximum of five nodes per cluster, and the write node would become overloaded at scale, and that's just off the top of my head. It's a bad idea, and certainly not "the same" as what Enterprise offers.
TL;DR: You can't do this; a workaround would have huge issues and would certainly NOT be the same as a Performance Replication secondary cluster.