Cleanly shutting down a sharded MongoDB cluster

I currently have a MongoDB setup with a mongos server, a config server, and 2 shards of 3 mongod (master-slave) servers each. I would like to ensure that when I shut them down, they are shut down cleanly, so as not to lose any data that is queued or in flight while the server is determining which shard to write to, etc.

What is the current best practice for shutting down a cluster of MongoDB servers?

In which order should things be shut down, and when should I issue fsync, write locks, etc.?

I'd like to write a script to automate this to facilitate backups, new code pushes, and anything else that otherwise requires the database to be in a consistent state.

爱已欠费 2024-10-12 18:08:43

These best practices are still being worked out.

With your setup, here's how I would do the server maintenance.

Backups

Find a non-primary in each replica set. Perform an fsync & lock. Copy, tar, backup. Unlock the DB.

You should be able to do this successfully on a replica set. If you're really worried, you can do fsync & lock and then a shutdown.
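
A minimal mongo-shell sketch of that lock/copy/unlock cycle, run while connected directly to the non-primary member. The file copy itself happens outside the shell, and the dbpath location shown is only an example:

    // Connect directly to a non-primary member of the replica set.
    // Flush data files to disk and block further writes:
    db.adminCommand({ fsync: 1, lock: 1 });

    // ... copy and tar the contents of the dbpath (e.g. /data/db) from the
    // filesystem while the node is locked ...

    // Release the lock once the copy is done:
    db.fsyncUnlock();

    // Or, if you'd rather back up from a stopped node, shut it down cleanly
    // instead of unlocking:
    // db.adminCommand({ shutdown: 1 });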

Compaction

You probably want to compact your data at some point. The easiest way to do this is again to do an fsync & lock and then run db.repairDatabase(). The repair command will basically do a "defrag / compaction" for you. As above, this can also be done with a shutdown.
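
A hedged sketch of that compaction step on a non-primary member; the database name is made up, and repairDatabase blocks the node and needs extra free disk space while it rewrites the files:

    // Connect directly to the non-primary mongod you want to compact.
    // Pick the database to rebuild (name here is illustrative):
    var target = db.getSiblingDB("myDatabase");

    // repairDatabase rewrites the data files, which defragments and reclaims space:
    printjson(target.repairDatabase());

    // The offline equivalent is a clean shutdown followed by `mongod --repair`.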

Code pushes

Ideally, there is very little that needs to be consistent with regards to a code push. At the worst, you'll need to manage index creation / deletion. But this really needs to be managed separately as you don't want devs just randomly adding indexes on a production DB.
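
If an index change does have to accompany a code push, it is worth scripting it explicitly rather than leaving it to application code. A sketch, with made-up database, collection, and field names:

    // Run through mongos (or against the primary of each replica set).
    var coll = db.getSiblingDB("myDatabase").getCollection("users");  // illustrative names

    // Build the index deliberately as part of the deployment step;
    // background:true keeps the node responsive while the index builds:
    coll.ensureIndex({ email: 1 }, { background: true });

    // Removing an index is just as explicit:
    // coll.dropIndex({ email: 1 });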

Monitoring

This is a way more complex topic, but you'll probably want to watch for things like "who's the master", "what's the write throughput on each node", "how much RAM am I using", "how much data is shifting between nodes". There are limited tools for doing this right now, so expect to roll your own.
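
The raw pieces for a home-rolled monitor are already in the shell; a sketch of the sort of calls such a script might poll (which serverStatus fields matter to you is a judgment call):

    // "Who's the master?" -- ask any member of a replica set:
    printjson(db.isMaster());          // or rs.status() for the whole set

    // Per-node throughput and memory from serverStatus():
    var s = db.serverStatus();
    printjson(s.opcounters);           // cumulative inserts/queries/updates/deletes
    printjson(s.mem);                  // resident / virtual / mapped memory

    // Chunk distribution and migrations -- run against mongos:
    db.printShardingStatus();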
