Best method for incremental backups of PostgreSQL

Posted 2024-10-30 10:20:47

I am currently using pg_dump piped to gzip piped to split. But the problem with this is that all output files are always changed. So checksum-based backup always copies all data.

Are there any other good ways to perform an incremental backup of a PostgreSQL database, where a full database can be restored from the backup data?

For instance, if pg_dump could make everything absolutely ordered, so all changes are applied only at the end of the dump, or similar.
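For reference, the pipeline described in the question looks roughly like this (the database name and chunk size are placeholders):

```shell
# Full logical dump, compressed, then split into fixed-size chunks.
# Any reordering of rows between dumps shifts every following byte,
# so each chunk's checksum changes and a checksum-based backup
# re-copies all of them.
pg_dump mydb | gzip | split -b 100M - mydb.sql.gz.part-
```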

Comments (3)

深府石板幽径 2024-11-06 10:20:47

Update: Check out Barman for an easier way to set up WAL archiving for backup.

You can use PostgreSQL's continuous WAL archiving method. First, set wal_level=archive, then take a full filesystem-level backup (between issuing the pg_start_backup() and pg_stop_backup() commands), and then just keep copying the newer WAL files by configuring the archive_command option.
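A minimal sketch of that setup, assuming an archive directory of /mnt/wal_archive (all paths and the backup label are placeholders; note that on PostgreSQL 9.6+ the wal_level value is spelled replica, and on v15+ the functions were renamed pg_backup_start()/pg_backup_stop()):

```shell
# postgresql.conf:
#   wal_level = archive
#   archive_mode = on
#   archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'

# One-time full filesystem-level base backup, bracketed by the
# start/stop markers so the copy is usable even while writes continue:
psql -c "SELECT pg_start_backup('weekly_base');"
rsync -a /var/lib/postgresql/data/ /mnt/backup/base/
psql -c "SELECT pg_stop_backup();"

# From here on, PostgreSQL invokes archive_command for each completed
# WAL segment, so the "incremental" part is just those archived files.
```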

Advantages:

  • Incremental, the WAL archives include everything necessary to restore the current state of the database
  • Almost no overhead, copying WAL files is cheap
  • You can restore the database at any point in time (this feature is called PITR, or point-in-time recovery)

Disadvantages:

  • More complicated to set up than pg_dump
  • The full backup will be much larger than a pg_dump because all internal table structures and indexes are included
  • Does not work well for write-heavy databases, since recovery will take a long time.

There are some tools such as pitrtools and omnipitr that can simplify setting up and restoring these configurations. But I haven't used them myself.
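For completeness, a point-in-time restore from such an archive looks roughly like this (pre-v12 recovery.conf syntax; the target time and paths are made up for illustration):

```shell
# Restore the base backup, then tell the server how to fetch archived
# WAL and where to stop replaying:
rsync -a /mnt/backup/base/ /var/lib/postgresql/data/
cat > /var/lib/postgresql/data/recovery.conf <<'EOF'
restore_command = 'cp /mnt/wal_archive/%f "%p"'
recovery_target_time = '2024-10-29 12:00:00'
EOF
# Starting the server now replays WAL up to the target time (PITR).
```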

秋千易 2024-11-06 10:20:47

Also check out http://www.pgbackrest.org

pgBackRest is another backup tool for PostgreSQL which is worth evaluating, as it supports:

  • parallel backup (tested to scale almost linearly up to 32 cores, but it can probably go much farther)
  • compressed-at-rest backups
  • incremental and differential (compressed!) backups
  • streaming compression (data is compressed only once at the source, then transferred across the network and stored)
  • parallel, delta restore (the ability to update an older copy to the latest)
  • full support for tablespaces
  • backup rotation and archive expiration
  • the ability to resume backups which failed for some reason
  • and more

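A minimal pgBackRest setup might look like this (the stanza name `main` and all paths are assumptions; see the project documentation for real deployments):

```shell
# /etc/pgbackrest/pgbackrest.conf:
#   [main]
#   pg1-path=/var/lib/postgresql/data
#   [global]
#   repo1-path=/var/lib/pgbackrest
#   repo1-retention-full=2

pgbackrest --stanza=main stanza-create        # initialize the repository
pgbackrest --stanza=main --type=full backup   # full backup
pgbackrest --stanza=main --type=incr backup   # incremental backup
pgbackrest --stanza=main --type=diff backup   # differential backup
pgbackrest --stanza=main --delta restore      # delta restore over an old copy
```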
狼性发作 2024-11-06 10:20:47

Another method is to backup to plain text and use rdiff to create incremental diffs.
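With librsync's rdiff tool, that could look like this (file names are placeholders):

```shell
# Take a plain-text dump, then store only a binary delta against the
# previous dump instead of the whole new file.
pg_dump --format=plain mydb > new.sql
rdiff signature old.sql old.sig          # fingerprint of the old dump
rdiff delta old.sig new.sql incr.delta   # small delta: old -> new

# To restore, rebuild the new dump from the old one plus the delta:
rdiff patch old.sql incr.delta restored.sql
```

Note that the caveat from the question still applies: if pg_dump reorders rows between runs, the deltas can end up nearly as large as a full dump.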
