Python / SQLite - database locked despite large timeout

I'm sure I'm missing something pretty obvious, but I can't for the life of me stop my pysqlite scripts crashing out with a "database is locked" error. I have two scripts, one to load data into the database and one to read data out, but both will frequently, and instantly, crash depending on what the other is doing with the database at any given time. I've got the timeout on both scripts set to 30 seconds:

cx = sqlite.connect("database.sql", timeout=30.0)

And I think I can see some evidence of the timeouts in that I get what appears to be a timing stamp (e.g. 0.12343827e-06 0.1 - and how do I stop that being printed?) dumped occasionally into the middle of my Curses-formatted output screen, but no delay ever gets remotely near the 30-second timeout; still, one or the other keeps crashing again and again from this. I'm running RHEL 5.4 on a 64-bit 4-CPU HS21 IBM blade, and have heard some mention of issues with multi-threading, though I'm not sure whether that's relevant. The packages in use are sqlite-3.3.6-5 and python-sqlite-1.1.7-1.2.1, and upgrading to newer versions outside of Red Hat's official provisions is not a great option for me - possible, but not desirable given the environment in general.

I had autocommit=1 on previously in both scripts, but have since disabled it on both; I am now cx.commit()ing in the inserting script and not committing in the select script. Ultimately, as only one script ever actually makes any modifications, I don't really see why this locking should ever happen. I have noticed that it gets significantly worse as the database grows. It was recently at 13 MB with 3 equally sized tables, which was about 1 day's worth of data. Creating a new file improved this significantly, which seems understandable, but ultimately the timeout just doesn't seem to be obeyed.

Any pointers very much appreciated.

EDIT: since asking I have been able to restructure my code slightly, and I now use a signal to periodically write between 0 and 150 updates in one transaction every 5 seconds. This has significantly reduced the occurrences of the locking, to less than one an hour as opposed to once every minute or so. I guess I could go further by ensuring that the times I write data are offset by a few seconds from when I read data in the other script, but fundamentally I'm working around the issue as I perceive it, making the timeout unnecessary, which still doesn't seem right. Ta.
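Roughly, the batching now looks like this (simplified; the readings table and the pending buffer are illustrative, not my actual schema, and I'm showing the stdlib sqlite3 API rather than the old python-sqlite one):

import signal
import sqlite3

cx = sqlite3.connect("database.sql", timeout=30.0)
pending = []  # (value, id) pairs queued up between flushes

def flush(signum, frame):
    # Write every queued update in one transaction, so the writer takes
    # the database lock once per 5-second window instead of per update.
    if pending:
        cx.executemany("UPDATE readings SET value = ? WHERE id = ?", pending)
        cx.commit()  # a single commit releases the write lock promptly
        del pending[:]
    signal.alarm(5)  # re-arm the timer

signal.signal(signal.SIGALRM, flush)
signal.alarm(5)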

4 Answers

∝单色的世界 2024-09-04 19:35:06

In early versions of pysqlite, the timeout parameter to sqlite.connect is apparently interpreted as milliseconds. So your timeout=30.0 should be timeout=30000.
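If your version behaves that way, the fix is simply to pass the value in milliseconds (worth verifying against your pysqlite docs):

cx = sqlite.connect("database.sql", timeout=30000)  # 30,000 ms = 30 s, if timeout is read as milliseconds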

青柠芒果 2024-09-04 19:35:06

SQLite uses database-level locking for every write (update/insert/delete/...). IMHO, this lock is held until the transaction ends. AFAIK, it is a single lock held across threads/processes.

So I'd try explicitly ending both the transaction and the connection in the writing script, explicitly committing even in the reading script, and then debugging the concurrency issues from there.
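A rough sketch of what I mean, using the stdlib sqlite3 module and an illustrative readings table:

import sqlite3

# Writer: commit and close promptly so the write lock is released.
cx = sqlite3.connect("database.sql", timeout=30.0)
cx.execute("INSERT INTO readings (value) VALUES (?)", (42,))
cx.commit()  # ends the write transaction
cx.close()

# Reader: commit even after SELECTs, so a lingering read transaction
# doesn't hold a shared lock that blocks the writer.
cx = sqlite3.connect("database.sql", timeout=30.0)
rows = cx.execute("SELECT value FROM readings").fetchall()
cx.commit()
cx.close()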

泅渡 2024-09-04 19:35:06

SQLite is simply not optimized for write-heavy workloads, nor does it pretend to be (though it doesn't mind writing quite a lot in one transaction). It sounds to me like you might be getting to the point where you need to switch to another database such as MySQL, PostgreSQL, Oracle or DB2. Some of those options are indeed expensive, but for some workloads that's what you need. (Note also that write-heavy workloads tend to be better served by a dedicated database server solution, even though that pushes up deployment costs and complexity. Some things just cost.)

魔法少女 2024-09-04 19:35:06

Your sqlite3 might be compiled without HAVE_USLEEP, which makes concurrent threads wait whole seconds before retrying. https://beets.io/blog/sqlite-nightmare.html
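If you can get onto a newer SQLite (3.6.23 or later - your 3.3.6 predates this PRAGMA), you can inspect a build's compile-time options, which on builds that record it will include HAVE_USLEEP:

import sqlite3

cx = sqlite3.connect(":memory:")
options = [row[0] for row in cx.execute("PRAGMA compile_options")]
cx.close()
# If HAVE_USLEEP is absent, the busy handler sleeps in whole-second
# steps, which easily blows through a short timeout.
print("HAVE_USLEEP" in options)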
