MongoDB fails to resync a stale replica set member

Posted on 2025-01-24 13:19:50

I have a MongoDB (version 4.2) replica set with 3 nodes - primary, secondary, arbiter.
The primary occupies close to 250 GB of disk space, and the oplog size is 15 GB.

The secondary was down for a few hours. I tried recovering it by restarting, but it stayed in RECOVERING forever.

I tried an initial sync by deleting the files on the data path; it took 15 hours, the data path grew to 140 GB, and then the sync failed.
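
For reference, this is roughly what that forced initial sync looks like; the dbpath (/data/db), port (27017), and replica set name (rs0) below are assumed placeholders, not values from the question:

    # Cleanly stop the stale member (run against the secondary)
    mongo --port 27017 admin --eval 'db.shutdownServer()'
    # Wipe its data files so it has nothing left to resume from
    rm -rf /data/db/*
    # On restart the empty member begins an automatic initial sync from the primary
    mongod --replSet rs0 --dbpath /data/db --port 27017 \
        --logpath /var/log/mongodb/mongod.log --fork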

I then tried to copy the data files from the primary and seed the secondary node with them,
following https://www.mongodb.com/docs/v4.2/tutorial/resync-replica-set-member/.
This did not work either - the member went stale again.
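
For reference, a sketch of that copy-and-seed procedure with assumed hosts and paths (primary-host, /data/db, rs0 are placeholders, not values from the question). One likely reason for going stale again: the seeded files must represent a point in time still covered by the primary's oplog, and with a 15 GB oplog a multi-hour copy of ~250 GB can easily fall outside that window.

    # Stop the stale secondary and clear its data files
    mongo --port 27017 admin --eval 'db.shutdownServer()'
    rm -rf /data/db/*
    # Copy a consistent snapshot of the primary's data files
    # (taken from a cleanly shut-down primary or a filesystem snapshot)
    rsync -av primary-host:/data/db/ /data/db/
    # Restart the secondary with its usual replica set options
    mongod --replSet rs0 --dbpath /data/db --port 27017 \
        --logpath /var/log/mongodb/mongod.log --fork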

In the latest docs (5.0) they mention using a new member _id; does that apply to 4.2 as well?
Changing the member _id throws an error, because the IP and port are the same as those of the node I am trying to recover.
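
For context, the 5.0-style re-add boils down to something like the following in the mongo shell on the primary; rs.remove() and rs.add() exist in 4.2 as well, but the host string and the new _id value here are assumptions:

    // Remove the stale member from the config first, then re-add the
    // same host under an _id never used before in this replica set.
    rs.remove("secondary-host:27017")                 // assumed host:port
    rs.add({ _id: 3, host: "secondary-host:27017" })  // 3 = assumed unused _id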

That method was also unsuccessful. I am now planning to recover the node using a different data path and port, so that the primary treats it as a new node; once the secondary is up, I will change the port back to the one I want and restart. Will that work?
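
One detail about the "change the port back" step: a member's address is stored as a host string in the replica set config, so restarting mongod on the old port is not enough on its own; the config entry has to be updated too. A sketch in the mongo shell on the primary (the member index and ports are assumptions):

    // After the secondary has caught up and been restarted on the old port:
    cfg = rs.conf()
    cfg.members[2].host = "secondary-host:27017"   // index 2 is an assumption
    rs.reconfig(cfg)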

Please provide any other suggestions for recovering a replica set node with a large amount of data (around 250 GB).

Comments (1)

暖风昔人 2025-01-31 13:19:50

  1. Shut down the primary.
  2. Copy the data files from the primary node and place them in a new db path (other than the recovering node's db path).
  3. Change the log path.
  4. Start the mongo service on a different port (other than the one used by the recovering node).
  5. Start the primary.
  6. Add the node to the replica set using rs.add("IP:new port") on the primary.

This worked - I could see the secondary node coming up successfully.
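
As an end-to-end sketch of the steps above (every hostname, path, and port here - primary-host, secondary-host, /data/db_seed, 27018, rs0 - is an assumed placeholder, not a value from the answer):

    # 1. Shut down the primary
    mongo --host primary-host --port 27017 admin --eval 'db.shutdownServer()'
    # 2. Copy the primary's data files into a fresh dbpath on the recovering host
    rsync -av primary-host:/data/db/ /data/db_seed/
    # 3./4. Start mongod on the seeded path with a new log path and a new port
    mongod --replSet rs0 --dbpath /data/db_seed \
        --logpath /var/log/mongodb/mongod_seed.log \
        --port 27018 --fork
    # 5. Start the primary again (on primary-host)
    mongod --replSet rs0 --dbpath /data/db --port 27017 \
        --logpath /var/log/mongodb/mongod.log --fork
    # 6. On the primary, add the seeded member to the replica set
    mongo --host primary-host --port 27017 --eval 'rs.add("secondary-host:27018")'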
