SQLite/Firebird: does either of them support multiple concurrent write access?
Question: I currently store ASP.net application data in XML files.
Now the problem is I have asynchronous operations, which means I ran into the problem of simultaneous write access to an XML file...
Now, I'm considering moving to an embedded database to solve the issue.
I'm currently considering SQLite and embedded Firebird.
I'm not sure, however, whether SQLite or Firebird can handle concurrent write access from multiple writers.
And I certainly don't want the same problem again.
Does anybody know?
SQLite is certainly better known, but which one is better, SQLite or Firebird? I tend to say Firebird, but I don't really know.
No MS-Access or MS-SQL-Express recommendations please, I'm a sane person.
5 Answers
I will choose Firebird, for many reasons and for this too.
Maybe you can also check this.
SQLite can be configured to gracefully handle simultaneous writes in most situations. What happens is that when one thread or process begins a write to the db, the file is locked. When a second write is attempted and encounters the lock, it backs off for a short period before attempting the write again, until it succeeds or times out. The timeout is configurable, but otherwise all this happens without the application code having to do anything special except enabling the option.
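A minimal sketch of enabling that timeout, assuming the Microsoft.Data.Sqlite provider and a placeholder database file named app.db (the provider and file name are illustrative, not from the original answer):

    using Microsoft.Data.Sqlite;

    // Open (or create) the database file; "app.db" is a placeholder name.
    using var connection = new SqliteConnection("Data Source=app.db");
    connection.Open();

    // Ask SQLite to keep retrying for up to 5000 ms when another writer holds
    // the lock, instead of failing immediately with "database is locked".
    using var pragma = connection.CreateCommand();
    pragma.CommandText = "PRAGMA busy_timeout = 5000;";
    pragma.ExecuteNonQuery();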
All this works very well and without any difficulty, except in two circumstances:
1. If an application does a great many writes, say a thousand inserts, all in one transaction, then the database will be locked up for a significant period and can cause problems for any other application attempting to write. The solution is to break up such large writes into separate transactions, so other applications can get access to the database (see the sketch after this list).
2. If the database is shared by different processes running on different machines, sharing a network-mounted disk. Many operating systems have bugs in network-mounted disks that make file locking unreliable. There is no answer to this. If you need to share a db on a network-mounted disk, you need another database engine such as MySQL.
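A rough sketch of the batching idea from point 1, again assuming Microsoft.Data.Sqlite and a hypothetical items table with a single name column; committing after each chunk releases the write lock so other applications can get in between chunks:

    using System;
    using System.Collections.Generic;
    using Microsoft.Data.Sqlite;

    static class BatchWriter
    {
        // Insert a large number of rows in chunks, one transaction per chunk,
        // so the database file is never locked for the whole run.
        public static void InsertInBatches(SqliteConnection connection,
                                           IReadOnlyList<string> names,
                                           int batchSize = 100)
        {
            for (int start = 0; start < names.Count; start += batchSize)
            {
                using var tx = connection.BeginTransaction();
                using var cmd = connection.CreateCommand();
                cmd.Transaction = tx;
                cmd.CommandText = "INSERT INTO items (name) VALUES ($name)";
                var p = cmd.Parameters.Add("$name", SqliteType.Text);

                int end = Math.Min(start + batchSize, names.Count);
                for (int i = start; i < end; i++)
                {
                    p.Value = names[i];
                    cmd.ExecuteNonQuery();
                }

                tx.Commit(); // write lock released here, before the next chunk
            }
        }
    }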
I do not have any experience with Firebird. I have used SQLite in situations like this for many applications over several years.
Have you looked into Berkeley DB with the SQLite API for SQL support?
It sounds like SQLite will be a good fit. We use SQLite in a number of production apps; it supports transactions (in fact, it prefers them), which go a long way toward handling concurrency.
See also: transactional sqlite? in C#
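As a small illustration of that point (the table and column names are made up, and the Microsoft.Data.Sqlite provider is assumed), a single logical write wrapped in an explicit transaction might look like this:

    using Microsoft.Data.Sqlite;

    using var connection = new SqliteConnection("Data Source=app.db");
    connection.Open();

    // Group related statements into one transaction: SQLite takes the write
    // lock once, applies everything atomically, and releases it on Commit.
    using (var tx = connection.BeginTransaction())
    {
        var cmd = connection.CreateCommand();
        cmd.Transaction = tx;
        cmd.CommandText = "UPDATE orders SET status = 'shipped' WHERE id = $id";
        cmd.Parameters.AddWithValue("$id", 42);
        cmd.ExecuteNonQuery();

        tx.Commit();
    }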
I would add #3 to the list from ravenspoint above: if you have a large call-center or order-processing center, say, where dozens of people might be hitting the SAVE button at the same time, even if each is updating or inserting just one record, you can run into problems using the busy timeout approach.
For scenario #3, a true SQL engine that can serialize is ideal; less ideal but serviceable is a dbms that can do byte-range record locking of a shared file. But be aware that even a byte-range record lock will be inadequate for a large number of concurrent writes when new records are appended to the end of the file like a caboose on the end of a freight train, so that multiple processes are trying at the same time to set a lock on the same byte range. On the other hand, a byte-range record locking scheme coupled with a hashed-key sparse file approach (e.g. the old Revelation/OpenInsight database for LANs) will be far superior to ISAM for this scenario.