Backing up a Postgres table with pg_dump: is there a workaround to include the --jobs option?

Posted 2025-01-13 04:13:01


I am using Postgres 12 and have a database that's 1TB in size. I am performing a big delete on a table that is 140+GB in size.

I am testing the process and am looking to do a pg_dump of the table and its contents; however, running it as is takes approximately 33 minutes:

pg_dump -d titan -t public.play > /backup/playBackup.sql

I know that pg_dump includes a --jobs option which, given that I have a 32-core machine, I could really utilise, but that option applies to backing up a whole database rather than a single table within it.

Is there a quicker way of backing up the table I need?


Comments (1)

厌味 2025-01-20 04:13:01


The --jobs option, which works only with a directory-format dump, won't help you with a single table, because a single table is dumped by a single process.
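For contrast, a whole-database parallel dump would look something like the sketch below (the worker count of 8 and the /backup/titan.dir path are only illustrative; --jobs requires the directory output format, selected with -Fd):

```shell
# Hypothetical whole-database parallel dump/restore, wrapped in functions
# for clarity. -Fd selects the directory format that --jobs requires.
dump_titan_parallel() {
    pg_dump -Fd --jobs=8 -f /backup/titan.dir titan    # 8 worker processes
}

# pg_restore can replay a directory-format dump in parallel too
restore_titan_parallel() {
    pg_restore -d titan --jobs=8 /backup/titan.dir
}
```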

You could of course start a couple of parallel COPY statements:

-- split the table into five slices by id (assumes an integer id column);
-- the target paths are on the database server, which must be able to write there
COPY (SELECT * FROM public.play WHERE id % 5 = 0) TO '/path/titan0.csv' (FORMAT 'csv');
COPY (SELECT * FROM public.play WHERE id % 5 = 1) TO '/path/titan1.csv' (FORMAT 'csv');
COPY (SELECT * FROM public.play WHERE id % 5 = 2) TO '/path/titan2.csv' (FORMAT 'csv');
COPY (SELECT * FROM public.play WHERE id % 5 = 3) TO '/path/titan3.csv' (FORMAT 'csv');
COPY (SELECT * FROM public.play WHERE id % 5 = 4) TO '/path/titan4.csv' (FORMAT 'csv');

If you start these statements at the same time, you have a chance to get synchronized sequential scans and get done with a single read of the table. Then you can load those files in parallel.
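Launching them at the same time can be sketched in shell, one backgrounded psql session per slice (assumptions: the table is public.play in the titan database as in the question, it has an integer id column, the connection needs no password prompt, and the server can write to /path):

```shell
# Run the five COPY slices concurrently, one psql session each.
# wait returns only after every background dump has finished.
run_parallel_copies() {
    for i in 0 1 2 3 4; do
        psql -d titan -c "COPY (SELECT * FROM public.play WHERE id % 5 = $i) TO '/path/titan$i.csv' (FORMAT 'csv')" &
    done
    wait
}
```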

If you need the table structure too, run these:

pg_dump --section=pre-data -t public.play titan
pg_dump --section=post-data -t public.play titan

First restore pre-data, then the data, then post-data.
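That restore sequence might be scripted as below (a sketch; it assumes the two pg_dump --section runs above were redirected into pre.sql and post.sql, the table is public.play, and the CSV slices from the COPY step are at /path/titan0.csv through /path/titan4.csv):

```shell
# Restore in three phases: table definition, data (loaded in parallel),
# then indexes/constraints. Filenames pre.sql and post.sql are assumptions.
restore_play() {
    psql -d titan -f pre.sql        # 1. pre-data: bare table, no indexes yet
    for i in 0 1 2 3 4; do          # 2. load the five CSV slices in parallel
        psql -d titan -c "COPY public.play FROM '/path/titan$i.csv' (FORMAT 'csv')" &
    done
    wait
    psql -d titan -f post.sql       # 3. post-data: indexes, constraints, triggers
}
```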
