Overcoming the "log file sync" problem through design?

Posted 2024-08-12 22:11:58

Advice/suggestions needed for a bit of application design.

I have an application which uses 2 tables: a staging table, which many separate processes write to, and a final table. Once a 'group' of processes has finished, another job comes along and aggregates the results into the final table, then deletes that 'group' from the staging table.

The problem I'm having is that when the staging table is being cleared out, lots of redo is generated and I'm seeing a lot of 'log file sync' waits in the database. The database is shared with many other applications, and this is causing some issues.

When applying the aggregate, the rows are reduced to about 1 row in the final table for every 20 rows in the staging table.

I'm thinking of getting around this by creating a table for each 'group' rather than having a single 'staging' table. Once a group is done, its table can just be dropped, which should result in much less redo.
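A rough sketch of what I mean (the table names, group id, and columns here are just illustrative):

```sql
-- Each group gets its own work table, created with the same shape as
-- the old staging table; the loader processes insert into it.
CREATE TABLE staging_grp_42 AS
  SELECT * FROM staging WHERE 1 = 0;

-- ... many processes INSERT INTO staging_grp_42 ...

-- The aggregation job rolls the group up into the final table...
INSERT INTO final_results (grp_id, some_key, total_value)
SELECT 42, some_key, SUM(some_value)
FROM   staging_grp_42
GROUP  BY some_key;
COMMIT;

-- ...then the whole work table is dropped instead of DELETEing rows,
-- which should generate far less redo (PURGE skips the recycle bin).
DROP TABLE staging_grp_42 PURGE;
```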

I only have SE, so partitioned tables aren't an option. Faster disks for the redo probably aren't an option in the short term either.

Is this a bad idea? Any better solutions to be offered?

Thanks.

Comments (4)

夜无邪 2024-08-19 22:11:58

Would it be possible to solve the problem by having your process do a logical delete (i.e. set a DELETE_FLAG column in the table to 'Y') and then having a nightly process that truncates the table (potentially writing any non-deleted rows to a separate table before the truncate and then copying them back after the table is truncated)?

Are you certain that the source of the log file sync waits is that your disks can't keep up with the I/O? That's certainly possible, of course, but there are other possible causes of excessive log file sync waits including excessive commits. There is an excellent article on tuning log file sync events on the Pythian blog.
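A minimal sketch of that logical-delete pattern; the DELETE_FLAG column comes from the suggestion above, while the grp_id column and the staging_keep table are illustrative assumptions:

```sql
-- Aggregation job: mark the processed group instead of deleting it.
UPDATE staging
SET    delete_flag = 'Y'
WHERE  grp_id = 42;
COMMIT;

-- Nightly housekeeping job:
-- 1) save the rows that are still live,
CREATE TABLE staging_keep AS
  SELECT * FROM staging WHERE delete_flag = 'N';

-- 2) truncate the staging table (DDL, so it avoids the row-by-row DELETE redo),
TRUNCATE TABLE staging;

-- 3) put the live rows back and clean up.
INSERT INTO staging SELECT * FROM staging_keep;
COMMIT;
DROP TABLE staging_keep PURGE;
```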

情深已缘浅 2024-08-19 22:11:58

The most common cause of excessive log file syncs is too frequent commits, which are often deliberately coded in a mistaken attempt to reduce system load due to locking. You should commit only when your business transaction is complete.
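To illustrate the point, the contrast looks roughly like this; the table, columns, and group id are placeholders carried over from the question's description:

```sql
-- Anti-pattern: row-by-row processing with a commit per row. Each COMMIT
-- makes the session wait on 'log file sync'.
BEGIN
  FOR r IN (SELECT ROWID AS rid FROM staging WHERE grp_id = 42) LOOP
    DELETE FROM staging WHERE ROWID = r.rid;
    COMMIT;   -- one log file sync wait per iteration
  END LOOP;
END;
/

-- Better: treat aggregate-and-clean-up as one business transaction.
BEGIN
  INSERT INTO final_results (grp_id, some_key, total_value)
  SELECT grp_id, some_key, SUM(some_value)
  FROM   staging
  WHERE  grp_id = 42
  GROUP  BY grp_id, some_key;

  DELETE FROM staging WHERE grp_id = 42;

  COMMIT;   -- a single log file sync when the whole unit of work is done
END;
/
```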

彩扇题诗 2024-08-19 22:11:58

Loading each group into a separate table sounds like a fine plan to reduce redo. You can truncate the individual group table following each aggregation.

Another (but I think probably worse) option is to create a new staging table containing the groups that haven't been aggregated, then drop the original and rename the new table to replace it.
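A rough sketch of that swap; the names and the "not yet aggregated" predicate are placeholders, and note that grants, views and other objects referencing the old table would need attention after the drop/rename:

```sql
-- Build a replacement table containing only the groups still to be processed.
CREATE TABLE staging_new AS
  SELECT * FROM staging WHERE grp_id > 42;

-- Swap it in: the drop and rename are DDL, so no row-level delete redo.
DROP TABLE staging PURGE;
RENAME staging_new TO staging;
```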

滥情哥ㄟ 2024-08-19 22:11:58

I prefer Justin's suggestion ("logical delete"), but another option to consider might be a partitioned table, if you have the EE licence. The aggregation process could drop a partition instead of deleting the rows.
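On EE with the Partitioning option, that could look roughly like the following; the column, group ids, and partition names are just illustrative:

```sql
-- Staging table list-partitioned by group.
CREATE TABLE staging (
  grp_id     NUMBER,
  some_key   VARCHAR2(30),
  some_value NUMBER
)
PARTITION BY LIST (grp_id) (
  PARTITION p_grp_41 VALUES (41),
  PARTITION p_grp_42 VALUES (42)
);

-- After aggregating group 42, remove its data as a DDL operation,
-- avoiding the redo generated by deleting the rows individually.
ALTER TABLE staging DROP PARTITION p_grp_42;
```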
