MongoDB vs Cassandra: single point of failure

Posted on 2025-01-10 21:54:49


In the Cassandra vs MongoDB debate, it is said that because MongoDB has a master-slave architecture, it has a single point of failure (the master): when the master fails, the slave nodes take time to elect a new master, hence a window of downtime.

With Cassandra we don't have this problem, since all nodes are equal.
But then Cassandra too has a mechanism wherein nodes use a gossip protocol to keep themselves updated. In the gossip protocol, a minimum number of nodes need to take part. Suppose one of the participating nodes goes down; then a new node needs to replace it. But it would take time to spawn a replacement node, and this is a situation similar to a master failure in MongoDB.

So what is the difference between the two as far as single points of failure are concerned?


Comments (2)

神妖 2025-01-17 21:54:49


Your assumptions about Cassandra are not correct, so allow me to explain.

Gossip does not require multiple nodes in order to work. It is possible to have a single-node cluster and gossip will still work, so this statement is incorrect:

In the gossip protocol, a minimum number of nodes need to take part.

As a best practice, we recommend 3 replicas in each data centre (a replication factor of 3), so you need a minimum of 3 nodes in each data centre. With a replication factor of 3, your application can survive a node outage at consistency levels of ONE, LOCAL_ONE, or the recommended LOCAL_QUORUM, so these statements are incorrect too:

Suppose one of the participating nodes goes down; then a new node needs to replace it. But it would take time to spawn a replacement node, and this is a situation similar to a master failure in MongoDB.
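To make the replication-factor point concrete, here is a minimal sketch using the DataStax Python driver (cassandra-driver). The contact points, keyspace, table, and data-centre name "dc1" are illustrative assumptions, not part of the original answer; with RF=3, a LOCAL_QUORUM read only needs 2 of the 3 replicas, so one node can be down without any downtime window.

```python
# Minimal sketch, assuming a 3-node cluster in data centre "dc1".
# Host addresses, keyspace and table names are hypothetical.
import uuid

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

# Several contact points, so no single node is required just to connect.
cluster = Cluster(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
session = cluster.connect()

# Replication factor 3 in data centre "dc1".
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.users (id uuid PRIMARY KEY, name text)
""")

# LOCAL_QUORUM with RF=3 needs 2 replicas to respond, so this read
# still succeeds while one of the three nodes is down.
query = SimpleStatement(
    "SELECT name FROM demo.users WHERE id = %s",
    consistency_level=ConsistencyLevel.LOCAL_QUORUM,
)
for row in session.execute(query, (uuid.uuid4(),)):
    print(row.name)
```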

The only ways to introduce a single point of failure into your Cassandra cluster are:

  • deploying multiple instances on a single physical host (not recommended)
  • using shared storage (e.g. SAN, NAS, NFS) for all nodes (not recommended)

As a side note, a friendly warning: other users may vote to close your question, because comparisons are usually frowned upon since the answers are often opinion-based. Cheers!

酷炫老祖宗 2025-01-17 21:54:49


Your assumptions about MongoDB are also not correct.

Master-slave replication was removed in MongoDB version 4.0 (the current version is 5.0); nowadays it uses Replica Sets.

Typically you connect to a Replica Set rather than to a single Replica Set member. When the current PRIMARY goes down, a new PRIMARY will be elected; your application will reconnect automatically and retry write operations against the new PRIMARY. It may hang for a few seconds, but it should continue running.
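As a rough illustration of that failover behaviour, here is a minimal PyMongo sketch. It assumes a three-member replica set named "rs0"; the host names, database, and collection are hypothetical.

```python
# Minimal sketch, assuming a replica set "rs0" with three hypothetical members.
from pymongo import MongoClient

# Listing several members lets the driver discover the replica set and
# fail over to the newly elected PRIMARY automatically.
client = MongoClient(
    "mongodb://db1.example.com:27017,db2.example.com:27017,db3.example.com:27017"
    "/?replicaSet=rs0&retryWrites=true&w=majority"
)

collection = client.demo.users

# With retryWrites=true, this insert is retried against the new PRIMARY if an
# election happens mid-operation, so the application typically just pauses for
# a few seconds instead of failing.
collection.insert_one({"name": "alice"})
```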
