How to take pg_basebackup periodically without losing any WAL files? How to pause WAL archiving?

Published 2025-01-18 07:23:08


Environment: PostgreSQL 13.x Docker container.

I took a pg_basebackup and have configured PostgreSQL 13.x with archive_mode = on. It is working as expected.

I see that taking pg_basebackup periodically is recommended. How can I rotate the base backups weekly or daily?

Example: if a new pg_basebackup runs every Saturday night, should we consider stopping/pausing WAL archiving for that duration?

#Locations:
pg_basebackup : /db-backup/basebackup
archive_command: /db-backup/wal_files

So I want to move the archive directory every Saturday:

mv /db-backup /db-backup-old

While performing these steps, should I pause the WAL archiving process? As per the docs (section 24.3.1, Setting Up WAL Archiving), we can stop/pause it by setting

archive_command = ''

Is this the right approach? If so, should we reload the configuration, or is there a way to update this setting on the fly?
Note: using Postgres-docker container.
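For reference, archive_command is a reloadable setting, so no restart is needed, and while it is set to the empty string (with archive_mode still on) the server keeps accumulating completed WAL segments in pg_wal until a command is provided again, so nothing is lost in the meantime. A minimal sketch of pause/rotate/resume (the `-U postgres` superuser connection from inside the container is an assumption):

```shell
# Pause archiving (reloadable; no restart needed). While archive_command
# is '', completed WAL segments accumulate in pg_wal instead of being
# recycled, so none are lost while we rotate.
psql -U postgres -c "ALTER SYSTEM SET archive_command = ''"
psql -U postgres -c "SELECT pg_reload_conf()"

# ... rotate the backup directory here, e.g.: mv /db-backup /db-backup-old ...

# Resume archiving; the queued segments are then archived by the restored command.
psql -U postgres -c "ALTER SYSTEM RESET archive_command"
psql -U postgres -c "SELECT pg_reload_conf()"
```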

What I am trying to achieve is:

If some data gets written to the DB during backup rotation, it should end up either in the new base backup or in the new WAL-files directory.

Please correct me if these concerns are unfounded.


Comments (2)

柳絮泡泡 2025-01-25 07:23:08


You can try pg_basebackup with the following options:

pg_basebackup --checkpoint=fast --format=tar --wal-method=stream -D "$BACKUP_DIR/$FNAME.bak"

Where --checkpoint=fast sets checkpoint mode to fast (immediate), so all transactions are flushed to disk, and --wal-method=stream streams the write-ahead log data while the backup is being taken. This method opens a second connection to the server and streams the write-ahead log in parallel with the backup, so it requires two replication connections, not just one. As long as the client can keep up with the write-ahead log data, this method requires no extra write-ahead logs to be saved on the source server.
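A weekly rotation wrapper around that command could look like the sketch below. The `BACKUP_ROOT` default, the date-stamped name, and the keep-4-backups pruning are assumptions for illustration, and the pg_basebackup call is guarded behind `PGHOST` so the script can be dry-run without a server:

```shell
set -eu
# Stand-in default so the sketch can be dry-run; use /db-backup/basebackup in real life.
BACKUP_ROOT="${BACKUP_ROOT:-$(mktemp -d)}"
FNAME="base-$(date +%Y-%m-%d)"
BACKUP_DIR="$BACKUP_ROOT/$FNAME.bak"
mkdir -p "$BACKUP_DIR"

# Only call pg_basebackup when a server is actually configured and the
# binary exists; otherwise just report what would happen.
if [ -n "${PGHOST:-}" ] && command -v pg_basebackup >/dev/null 2>&1; then
    pg_basebackup --checkpoint=fast --format=tar --wal-method=stream -D "$BACKUP_DIR"
else
    echo "dry run: would back up into $BACKUP_DIR"
fi

# Prune: keep only the 4 most recent weekly backups (assumed retention).
ls -1dt "$BACKUP_ROOT"/base-*.bak | tail -n +5 | xargs -r rm -rf
```

Because --wal-method=stream makes each base backup self-contained, pruning an old backup here does not depend on the separately archived WAL files.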

我不咬妳我踢妳 2025-01-25 07:23:08


What I have worked out so far:

 1. Create a base backup, which gets saved to /backup_location/basebackups/
 2. Create /backup_location/basebackups/wal-`date`
 3. cp -r /postgres/wal_archive_location/*  /backup_location/basebackups/wal-`date`/

This will keep some duplicate WAL files (e.g., today's backup and tomorrow's backup might contain the same WAL files, since we used the cp command).

But no data will be lost. And this works.
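Steps 2 and 3 above can be sketched as one script. The temp-dir defaults and the fake segment file are only there so the sketch can be dry-run anywhere; step 1 (the pg_basebackup itself) is omitted:

```shell
set -eu
# Stand-in defaults for a dry run; in real use:
#   BACKUP_ROOT=/backup_location  WAL_ARCHIVE=/postgres/wal_archive_location
BACKUP_ROOT="${BACKUP_ROOT:-$(mktemp -d)}"
WAL_ARCHIVE="${WAL_ARCHIVE:-$(mktemp -d)}"
touch "$WAL_ARCHIVE/000000010000000000000001"   # fake WAL segment for the dry run

# Step 2: a date-stamped directory next to the base backups.
WAL_DEST="$BACKUP_ROOT/basebackups/wal-$(date +%F)"
mkdir -p "$WAL_DEST"

# Step 3: cp may re-copy segments already saved in yesterday's snapshot
# (the duplication noted above), but it never loses one.
cp -r "$WAL_ARCHIVE/." "$WAL_DEST/"
```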
