After some experimentation I've discovered the following:
- Mnesia considers the network to be partitioned if two nodes disconnect and then reconnect without an Mnesia restart in between. This is true even if no Mnesia read/write operations occur while the nodes are disconnected.
- Mnesia itself must be restarted to clear the partitioned-network event; you cannot force_load_table your way out of it once the network is partitioned.
- Only Mnesia needs to be restarted to clear the event. You don't need to restart the entire node.
- Mnesia resolves the partitioning by having the newly restarted Mnesia node overwrite its table data with data from another Mnesia node (the startup table-load algorithm).
- Generally nodes copy tables from the node that has been up the longest (this was the behaviour I saw; I haven't verified that it is explicitly coded for rather than a side effect of something else). If you disconnect a node from a cluster, make writes in both partitions (the disconnected node and its old peers), shut down all nodes, and start them back up with the disconnected node first, the disconnected node is treated as the master and its data overwrites that of all the other nodes. There is no table comparison, checksumming, or quorum behaviour.
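The partitioned-network event described above can be observed programmatically. A minimal sketch, assuming Mnesia is already running on the local node: subscribe to Mnesia system events and wait for the inconsistent_database event that Mnesia emits after a disconnect/reconnect.

```erlang
%% Sketch: detecting the partitioned-network condition.
%% After a disconnect and reconnect between replica nodes, Mnesia
%% delivers an inconsistent_database system event; only restarting
%% Mnesia (not the whole node) clears this state.
{ok, _Node} = mnesia:subscribe(system),
receive
    {mnesia_system_event,
     {inconsistent_database, running_partitioned_network, Peer}} ->
        error_logger:warning_msg("Partitioned from ~p~n", [Peer])
end.
```

In practice you would handle this event in a gen_server rather than a bare receive, and decide there which partition's data to keep.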
So to answer my question: you can perform semi-online recovery by executing mnesia:stop(), mnesia:start() on the nodes in the partition whose data you decide to discard (which I'll call the losing partition). The mnesia:start() call causes the node to contact the nodes on the other side of the partition. If the losing partition contains more than one node, you may want to set the master nodes for table loading to nodes in the winning partition - otherwise I think there is a chance it will load tables from another node in the losing partition and thus return to the partitioned-network state.
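The recovery procedure above can be sketched as a single Erlang function. The function name recover/1 is illustrative; the Mnesia calls are real, and the 30-second table-load timeout is an arbitrary choice.

```erlang
%% Sketch of semi-online recovery. Run on each node in the losing
%% partition; WinningNode is any node on the winning side.
recover(WinningNode) ->
    %% Force subsequent table loads to come from the winning partition,
    %% never from another node in the losing one.
    ok = mnesia:set_master_nodes([WinningNode]),
    %% Restarting Mnesia clears the partitioned-network event and
    %% triggers the startup table-load, discarding this node's writes.
    stopped = mnesia:stop(),
    ok = mnesia:start(),
    mnesia:wait_for_tables(mnesia:system_info(tables), 30000).
```

Note that set_master_nodes/1 affects all tables; use set_master_nodes/2 if only some tables should be forced to load from the winning side.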
Unfortunately Mnesia provides no support for merging/reconciling table contents during the startup table-load phase, nor any way to re-enter the table-load phase once started.
A merge phase would be particularly suitable for ejabberd, as the node would still have user connections and would therefore know which user records it owns and should be most up to date for (assuming one user connection per cluster). If a merge phase existed, the node could filter the user-data tables, save all records for connected users, load tables as usual, and then write the saved records back to the Mnesia cluster.
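Such a merge phase could be approximated by hand around a restart. This is a hypothetical sketch, not an existing Mnesia hook: the table name user_data and the function merge_user_tables/2 are invented for illustration, and it assumes each saved record's key is owned exclusively by this node.

```erlang
%% Hypothetical manual "merge phase": preserve records for locally
%% connected users across the table reload, then write them back.
merge_user_tables(ConnectedUsers, WinningNode) ->
    %% 1. Save the records this node is authoritative for.
    Saved = lists:append(
              [mnesia:dirty_read({user_data, U}) || U <- ConnectedUsers]),
    %% 2. Reload all tables from the winning partition as usual.
    ok = mnesia:set_master_nodes([WinningNode]),
    stopped = mnesia:stop(),
    ok = mnesia:start(),
    mnesia:wait_for_tables([user_data], 30000),
    %% 3. Write the saved records back over the freshly loaded data.
    lists:foreach(fun mnesia:dirty_write/1, Saved),
    ok.
```

This only resolves conflicts for users connected to this node; records written in the losing partition for users connected elsewhere are still lost.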
Sara's answer is great; also have a look at the article about CAP. The Mnesia developers sacrificed P for CA. If you need P, you should decide which part of CAP you want to sacrifice and then choose a different storage system accordingly, for example CouchDB (sacrifices C) or Scalaris (sacrifices A).
It works like this. Imagine a sky full of birds. Take pictures until you have captured all the birds. Place the pictures on the table and map them over each other, so that you see every bird exactly once. Do you see every bird? OK. Then you know the system was stable at that point in time. Record what all the birds sound like (the messages) and take some more pictures. Then repeat.
If you have a node split, go back to the latest common stable snapshot and try** to replay what happened after that. :)
It's better described in "Distributed Snapshots: Determining Global States of Distributed Systems" by K. Mani Chandy and Leslie Lamport.
** I think there is a problem deciding whose clock to follow when trying to replay what happened.