How to schedule SQL for later execution at the database level

Published 2024-10-21 19:46:49

---------Specification---------
Database: PostgreSQL
Language: PHP

---------Description---------
I want to create a table to store the transaction log of the database. I only want to store brief information.

I think that during heavy concurrent execution, adding data (transaction-log rows from all tables) to a single log table will become a performance bottleneck.

So I thought of a solution: why not add the transaction-log SQL to a queue that executes automatically when there is NO heavy pressure on the database (see the sketch after the definitions below).

---------Question---------
Is there any such facility available in PostgreSQL? Or how can I achieve similar functionality using a PHP cron job or any other method? Note: execution during LOW pressure on the DB is necessary.

---------Thanks in advance---------

EDIT:
Definitions
Heavy pressure/heavy concurrent execution: about 500 or more queries per second on more than 10 tables concurrently.
NO heavy pressure: about 50 or fewer queries per second on fewer than 5 tables concurrently.
Transaction log table: if anything is edited/inserted/deleted in any table, its details must be INSERTED into the transaction log table.
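
For illustration, here is a minimal PHP sketch of the queue idea, under stated assumptions: a staging table named `transaction_log_queue` with the same columns as `transaction_log` (both names are hypothetical), PDO with invented connection credentials, the thresholds from the definitions above, and PostgreSQL 9.1+ for the writable CTE. It is meant to run periodically from cron, not as a definitive design.

```php
<?php
// Hypothetical cron worker: the application writes log rows into an
// unindexed staging table (transaction_log_queue) so that inserts stay
// cheap; this script drains it into the permanent transaction_log table
// only when throughput looks low.

$pdo = new PDO('pgsql:host=localhost;dbname=app', 'user', 'secret');

// Estimate committed transactions per second by sampling the
// pg_stat_database counter twice over a short window.
$sql = "SELECT xact_commit FROM pg_stat_database
        WHERE datname = current_database()";
$before = (int) $pdo->query($sql)->fetchColumn();
sleep(5);
$after = (int) $pdo->query($sql)->fetchColumn();
$perSecond = ($after - $before) / 5;

// "NO heavy pressure" as defined above: about 50 queries/sec or fewer.
if ($perSecond <= 50) {
    // Move the queued rows in one statement; the writable CTE
    // (PostgreSQL 9.1+) avoids losing rows that arrive mid-copy.
    $pdo->exec("WITH moved AS (
                    DELETE FROM transaction_log_queue RETURNING *
                )
                INSERT INTO transaction_log SELECT * FROM moved");
}
```

The enqueue side would then be a plain INSERT into the staging table from application code or a trigger, which stays fast because the staging table carries no indexes or constraints.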


Comments (1)

柠栀 2024-10-28 19:46:49

I think that during heavy concurrent execution, adding data (transaction-log rows from all tables) to a single log table will become a performance bottleneck.

Don't assume. Test.
Especially when it comes to performance: premature optimization is a bad thing.

Please also define "heavy usage". How many inserts per second do you expect?

So I thought of a solution: why not add the transaction-log SQL to a queue that executes automatically when there is NO heavy pressure on the database

Define "no heavy pressure"? How do you find out?

All in all, I would recommend simply inserting the data and tuning PostgreSQL so that it can cope with the load.

You could move the data to a separate hard disk so that IO for the regular operations is not affected by this. In general, insert speed is limited by IO, so get yourself a fast RAID 10 system.
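
If the log table really should live on its own disk, tablespaces are PostgreSQL's mechanism for that; a sketch, where the mount point and names are made up:

```sql
-- Put the transaction log on a dedicated disk via a tablespace.
-- Path and tablespace name are hypothetical.
CREATE TABLESPACE log_disk LOCATION '/mnt/raid10/pg_log';
ALTER TABLE transaction_log SET TABLESPACE log_disk;
```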

You will probably also need to tune the checkpoint segments and WAL writer.
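
As an illustrative postgresql.conf fragment only (the right values depend on the hardware and must be benchmarked, as argued above; note that checkpoint_segments was replaced by max_wal_size in PostgreSQL 9.5):

```
# postgresql.conf -- placeholder values, benchmark before adopting
checkpoint_segments = 32            # pre-9.5; use max_wal_size on 9.5+
checkpoint_completion_target = 0.9  # spread checkpoint IO over time
wal_buffers = 16MB                  # shared memory for not-yet-written WAL
```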

But if you are not talking about something like 1000 inserts per second, you probably won't have to do much to make this work (fast hard disk/RAID system assumed).
