PostgreSQL database size (tablespace size) much bigger than the sum of its relations

Posted 2024-10-13 03:10:24


Hello all,

I see a very big difference between the actual database size (on the HDD, as reported by a pg_database_size() call) and the size calculated by summing up the total relation sizes retrieved by pg_total_relation_size().

The first is 62G and the second is 16G (which is exactly the amount of data deleted from the biggest table).

Here is a simplified query that shows that difference on my system:

select current_database(),
       pg_size_pretty( sum(total_relation_raw_size)::bigint ) as calculated_database_size,
       pg_size_pretty( pg_database_size(current_database()) ) as database_size   
  from (select pg_total_relation_size(relid) as total_relation_raw_size
          from pg_stat_all_tables -- this also includes system tables shared between databases
         where schemaname != 'pg_toast' 
       ) as stats;

It seems like there is some dangling data in there. This situation appeared after we dumped and VACUUM FULLed lots of unused data from that DB.

P.S.: I suppose it was database corruption of some sort... The only way to recover from this situation was to switch to the hot-standby database...
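Before concluding corruption, it can help to see which relations actually hold the on-disk space. A sketch (not from the original post) that lists the largest relations, including TOAST tables and indexes, for comparison against the pg_database_size() figure:

```sql
-- List the 20 largest relations by total size (heap + indexes + TOAST);
-- relkind: r = table, i = index, t = TOAST table, m = materialized view.
SELECT c.oid::regclass AS relation,
       c.relkind,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
  FROM pg_class c
 WHERE c.relkind IN ('r', 'i', 't', 'm')
 ORDER BY pg_total_relation_size(c.oid) DESC
 LIMIT 20;
```

If a relation near the top is pg_largeobject, the answers below about LOBs and vacuumlo are the likely explanation.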


Comments (4)

对你再特殊 2024-10-20 03:10:24


LOBs are a very valid concern as BobG writes, since they are not deleted when the rows of your application table (containing the OIDs) get deleted.

These will NOT be deleted by the VACUUM process automatically, only once you have run vacuumlo on them.

Vacuumlo will delete all of the unreferenced LOBs from the database.

Example call:

vacuumlo -U postgres -W -v <database_name>

(I only included the -v to make vacuumlo a bit more verbose so that you see how many LOBs it removes)

After vacuumlo has deleted the LOBs, you can run VACUUM FULL (or let the autovacuum process run).
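To see whether large objects account for the missing space, you can check the size of the pg_largeobject catalog before and after running vacuumlo. A sketch, not from the original answer:

```sql
-- pg_largeobject stores the LOB data pages; pg_largeobject_metadata
-- has one row per large object, so count(*) gives the number of LOBs.
SELECT pg_size_pretty(
         pg_total_relation_size('pg_catalog.pg_largeobject')
       ) AS lob_data_size,
       (SELECT count(*) FROM pg_largeobject_metadata) AS lob_count;
```

A large lob_data_size combined with a lob_count far above what the application references is the signature of orphaned LOBs.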

小忆控 2024-10-20 03:10:24


As this situation appeared, after we dumped and full vacuumed lots of unused data from that DB.

I had a similar experience: a 3GB db with lots of dynamic data that grew to 20GB over a month or so.
Manually deleting / vacuuming the problematic tables didn't seem to have any effect..
Then we just ran a final

VACUUM FULL ANALYZE 

on the WHOLE DB ... and it dropped to half the size.

It took 4 hours, so be careful with that.
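Before committing to an hours-long exclusive lock, it is worth checking whether dead tuples are actually the problem; a plain VACUUM marks space reusable but does not shrink the files, while VACUUM FULL rewrites them. A sketch for inspecting dead-tuple counts:

```sql
-- Tables with many dead tuples relative to live ones are the bloat
-- candidates; last_autovacuum shows whether autovacuum is keeping up.
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_autovacuum
  FROM pg_stat_user_tables
 ORDER BY n_dead_tup DESC
 LIMIT 10;
```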

久光 2024-10-20 03:10:24


Do you have unused LOBs?

If you have something like this:

CREATE TABLE bigobjects (
    id BIGINT NOT NULL PRIMARY KEY,
    filename VARCHAR(255) NOT NULL,
    filecontents OID NOT NULL
);

followed by:

\lo_import '/tmp/bigfile'
11357
INSERT INTO bigobjects VALUES (1, 'bigfile', 11357);
TRUNCATE TABLE bigobjects;

You'll still have the LOB (id 11357) in the database.

You can check the pg_catalog.pg_largeobject system catalog table for all the large objects in your database (recommend SELECT DISTINCT loid FROM pg_catalog.pg_largeobject unless you want to see all your LOB data as octal.)

If you clean out all your unused LOBs and do a VACUUM FULL, you should see a hefty reduction in storage. I just tried this on a personal dev database I've been using and saw a reduction in size from 200MB down to 10MB (as reported by pg_database_size(current_database()).)
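For the single-column example above, the orphaned LOBs can also be found manually with an anti-join; note that vacuumlo does this across every OID column in the database automatically, so this sketch (using the hypothetical bigobjects.filecontents column from the example) is only for inspection:

```sql
-- LOBs present in the catalog but not referenced by bigobjects;
-- each orphaned_loid could then be removed with lo_unlink(oid).
SELECT m.oid AS orphaned_loid
  FROM pg_largeobject_metadata m
  LEFT JOIN bigobjects b ON b.filecontents = m.oid
 WHERE b.filecontents IS NULL;
```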

几度春秋 2024-10-20 03:10:24


Your query is specifically screening out pg_toast tables, which can be big. See if removing that where schemaname != 'pg_toast' filter gets you a more accurate answer.
