Mongo vs Cassandra: single point of failure
In the Cassandra vs Mongo debate, it is said that because Mongo has a master-slave architecture, it has a single point of failure (the master): when the master fails, the slave nodes take time to decide on a new master, hence a window of downtime.
With Cassandra we don't have this problem, as all nodes are equal.
But then Cassandra too has a system wherein nodes use a gossip protocol to keep themselves updated. The gossip protocol needs a minimum number of participating nodes. Suppose one of the participating nodes goes down; then a new node needs to replace it. But it would take time to spawn a replacement node, and this is a situation similar to a master failure in Mongo.
So what's the difference between the two as far as single points of failure are concerned?
Comments (2)
Your assumptions about Cassandra are not correct, so allow me to explain.
Gossip does not require multiple nodes in order to work. It is possible to have a single-node cluster, and gossip will still work, so this statement is incorrect:
"In gossip protocol a minimum number of nodes are needed to take part."
For best practice, we recommend 3 replicas in each data centre (a replication factor of 3), so you need a minimum of 3 nodes in each data centre. With a replication factor of 3, your application can survive a node outage at consistency levels of ONE, LOCAL_ONE, or the recommended LOCAL_QUORUM, so these statements are incorrect too:
"Suppose one of the participating nodes goes down; then a new node needs to replace it. But it would take time to spawn a replacement node, and this is a situation similar to a master failure in Mongo."
The only ways to introduce single points-of-failure to your Cassandra cluster are:
As a side note, a friendly warning that other users may vote to close your question, because comparisons are usually frowned upon since the answers are often based on opinions. Cheers!
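The arithmetic behind the replication-factor recommendation above can be sketched in plain Python. This is only an illustration of the quorum formula, not driver code:

```python
# Sketch of the quorum arithmetic behind LOCAL_QUORUM: with replication
# factor RF, a quorum is floor(RF/2) + 1, so a LOCAL_QUORUM request still
# succeeds while RF - quorum replicas are down in that data centre.

def quorum(replication_factor: int) -> int:
    """Replicas that must acknowledge a (LOCAL_)QUORUM request."""
    return replication_factor // 2 + 1

def tolerated_outages(replication_factor: int) -> int:
    """Replicas that can be down while (LOCAL_)QUORUM still succeeds."""
    return replication_factor - quorum(replication_factor)

# With the recommended replication factor of 3, quorum is 2,
# so the application survives one node being down.
assert quorum(3) == 2
assert tolerated_outages(3) == 1
```

With a replication factor of 5 the same formula gives a quorum of 3, tolerating two replica outages, which is why larger replication factors buy more fault tolerance at the cost of more storage.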
Also, your assumptions about MongoDB are not correct.
Master-slave replication was removed in MongoDB version 4.0 (the current version is 5.0); nowadays it uses Replica Sets.
Typically you connect to a Replica Set rather than to a single Replica Set member. When the current PRIMARY goes down, a new PRIMARY will be elected; your application will reconnect automatically and retry write operations against the new PRIMARY. It may hang for a few seconds, but it should continue running.
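The failover behaviour described above can be sketched as a small retry loop. This is a hand-rolled illustration under assumed names (NotPrimaryError, find_primary, do_write), not the actual MongoDB driver code:

```python
# Hypothetical sketch of a driver's automatic failover handling: a
# "not primary" error triggers re-discovery of the newly elected PRIMARY
# and a single retry of the write. All names here are illustrative.

class NotPrimaryError(Exception):
    """Raised when a write reaches a node that is no longer PRIMARY."""

def write_with_retry(do_write, find_primary):
    """Attempt a write; on failover, reconnect and retry once."""
    primary = find_primary()
    try:
        return do_write(primary)
    except NotPrimaryError:
        # An election happened; ask the replica set who the new PRIMARY is.
        primary = find_primary()
        return do_write(primary)

# Tiny demo: the first write hits the old, now-demoted primary and fails;
# the retry reaches the newly elected primary and succeeds.
state = {"primary": "node-A", "failed_over": False}

def find_primary():
    return state["primary"]

def do_write(node):
    if node == "node-A" and not state["failed_over"]:
        state["failed_over"] = True
        state["primary"] = "node-B"   # election picks a new PRIMARY
        raise NotPrimaryError(node)
    return f"written via {node}"

assert write_with_retry(do_write, find_primary) == "written via node-B"
```

From the application's point of view this looks like the write simply took a little longer, which is the "may hang for a few seconds but should continue running" behaviour described above.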