Non-exclusive RESERVED locks in SQLite?

Published 2024-07-09 03:32:25


I've been looking into improving SQLite performance for my site, especially with regard to transactions. In essence what I'm looking for is a way to defer database writes in a process so that they can all be done at once. However, while I'm accumulating update queries, I would like other processes to be able to both read from and write to the database, and only lock the file for writing once a commit is issued in a process.

In looking at the documentation, it seems as though once an update command is issued in a transaction the process gets a RESERVED lock, which (if I remember correctly) means that any other process that attempts to either add an update query to its own transaction or commit the transaction is unable to do so, and therefore blocks until the transaction commits on the process with the lock.

I'm sure there are very good data integrity reasons against this particular feature. All I can say is that in my case there's no danger in performing these updates simultaneously.

One solution is that in each process I could accumulate the text of the queries I wish to invoke in an array, then loop over it once I'm ready to write, but I'm wondering whether a SQLite transaction can be made to do this for me automatically.
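The accumulate-then-replay idea can be sketched in a few lines. This is a hypothetical helper in Python's `sqlite3` (the name `DeferredWriter` is made up for illustration); it queues `(sql, params)` pairs and runs them all inside a single transaction, so locks are held only during the short flush:

```python
import sqlite3

class DeferredWriter:
    """Accumulate (sql, params) pairs and flush them in one transaction."""
    def __init__(self, conn):
        self.conn = conn
        self.pending = []

    def defer(self, sql, params=()):
        self.pending.append((sql, params))

    def flush(self):
        # One BEGIN ... COMMIT: the RESERVED/EXCLUSIVE locks are held only
        # for this short window instead of once per query.
        with self.conn:  # commits on success, rolls back on exception
            for sql, params in self.pending:
                self.conn.execute(sql, params)
        self.pending.clear()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hits (page TEXT, n INTEGER)")
w = DeferredWriter(conn)
for page in ("home", "about", "home"):
    w.defer("INSERT INTO hits VALUES (?, 1)", (page,))
w.flush()
print(conn.execute("SELECT COUNT(*) FROM hits").fetchone()[0])  # 3
```

SQLite itself has no built-in "defer my statements" mode, so something like this queue (or the staging-database approach suggested in the answers) has to live in application code.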

update: What I mean when I say "doing all my updates at once" is essentially using transactions in SQLite to only get an EXCLUSIVE lock and write to disk once per process, rather than once per query. This leads to 100x speedups using SQLite.

I've done some basic testing and it seems that once you have multiple processes adding queries to their transactions, as soon as you hit an update query that process attempts to get the RESERVED lock. Since only one process can have the reserved lock, this means that any other processes attempting to get the lock will block until the process with the lock finishes the transaction.

I will admit that this problem might be a premature optimization as I have yet to encounter any performance penalties, but I have run some simple tests, and 100 users each creating and running a transaction with 100 queries takes about 4 seconds in PHP on my machine.


Comments (2)

昔日梦未散 2024-07-16 03:32:25


SQLite supports ATTACH to attach one database to another database. Perhaps you can accumulate your data in a separate database, and when you are ready to merge the accumulated rows, attach the separate database, copy the rows in a single statement, and detach.
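A minimal sketch of this suggestion in Python's `sqlite3` (file names are made up for illustration): rows pile up in a private staging database with no contention on the main file, then one ATTACH plus a single `INSERT ... SELECT` merges them.

```python
import sqlite3, tempfile, os

d = tempfile.mkdtemp()
main_path = os.path.join(d, "main.db")
stage_path = os.path.join(d, "staging.db")

# Main database that other processes keep reading and writing.
main = sqlite3.connect(main_path)
main.execute("CREATE TABLE log (msg TEXT)")
main.commit()

# Accumulate rows in a private staging database: no lock contention on main.
stage = sqlite3.connect(stage_path)
stage.execute("CREATE TABLE log (msg TEXT)")
stage.executemany("INSERT INTO log VALUES (?)", [("a",), ("b",)])
stage.commit()
stage.close()

# Merge: attach, copy in a single statement, then detach.
main.execute("ATTACH DATABASE ? AS staging", (stage_path,))
with main:  # one transaction: copy and clear the staging rows together
    main.execute("INSERT INTO log SELECT msg FROM staging.log")
    main.execute("DELETE FROM staging.log")
main.execute("DETACH DATABASE staging")
print(main.execute("SELECT COUNT(*) FROM log").fetchone()[0])  # 2
```

The main database's write locks are only held for the duration of the merge transaction, which is the effect the question is after.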

Edit: a similar suggestion to the OP was made on a mailing list thread at sqlite-users with some follow-up discussion.

爱的故事 2024-07-16 03:32:25


Better than attaching a database is to just create a temporary table (CREATE TEMPORARY TABLE ...).

And take a look at the new WAL journal mode, which does exactly what you are trying to do manually, and it allows simultaneous writes and reads (though not simultaneous writes).

PRAGMA journal_mode=WAL;
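A short sketch of WAL mode in Python's `sqlite3`: with the pragma enabled, a reader proceeds without blocking even while another connection holds an open write transaction, seeing the last committed snapshot.

```python
import sqlite3, tempfile, os

# WAL requires a real file (it does not apply to ":memory:" databases).
path = os.path.join(tempfile.mkdtemp(), "wal.db")

conn = sqlite3.connect(path, isolation_level=None)
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]  # -> "wal"
conn.execute("CREATE TABLE t (v TEXT)")
conn.execute("INSERT INTO t VALUES ('old')")

# The writer holds an open, uncommitted write transaction...
conn.execute("BEGIN")
conn.execute("UPDATE t SET v = 'new'")

# ...yet a reader still proceeds without blocking (timeout=0 would make any
# blocking an immediate error) and sees the last committed snapshot.
reader = sqlite3.connect(path, timeout=0)
snapshot = reader.execute("SELECT v FROM t").fetchone()[0]

conn.execute("COMMIT")
print(mode, snapshot)  # wal old
```

In rollback-journal mode that read would eventually block during the writer's EXCLUSIVE phase; in WAL mode writers never block readers, only other writers.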
