Firebird backup/restore is frustrating. Is there a way to avoid it?

Posted on 2024-11-11 18:06:41


I am using Firebird, but lately the database has been growing really badly. There are a lot of DELETE statements running, as well as UPDATEs/INSERTs, and the database file size grows very fast. After deleting tons of records the database size does not decrease, and even worse, I have the feeling that queries are actually getting a bit slower. To fix this, a daily backup/restore process has been put in place, but because of the time it takes to complete, I can say that using Firebird is really frustrating.

  • Any ideas on workarounds or solutions for this will be welcome.

  • Also, I am considering switching to InterBase, because I heard from a friend that it does not have this issue. Is that so?


Comments (3)

我的痛♀有谁懂 2024-11-18 18:06:41


We have a lot of huge databases on Firebird in production but have never had an issue with database growth. Yes, every time a record is deleted or updated, its old version is kept in the file. But sooner or later the garbage collector will sweep it away. Once the two processes balance each other, the database file will grow only by the size of new data and indices.

As a general precaution against enormous database growth, try to make your transactions as short as possible. In our applications we use one READ ONLY transaction for reading all the data; that transaction stays open for the whole application lifetime. For every batch of INSERT/UPDATE/DELETE statements we use short separate transactions, as sketched below.
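A minimal isql-style sketch of that pattern; the orders table and its columns are hypothetical, and in a real application each transaction would be managed through your driver's API rather than in one script:

    -- Long-lived reader: a READ ONLY READ COMMITTED transaction does not
    -- hold back garbage collection the way a long read-write one would.
    SET TRANSACTION READ ONLY ISOLATION LEVEL READ COMMITTED;
    SELECT COUNT(*) FROM orders;   -- application reads happen here

    -- Writers: one short transaction per batch, committed immediately.
    SET TRANSACTION READ WRITE WAIT ISOLATION LEVEL READ COMMITTED;
    UPDATE orders SET status = 'ARCHIVED' WHERE created_at < '2010-01-01';
    COMMIT;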

Slow database operations can also be the result of stale index statistics. Here you can find an example of how to recalculate statistics for all indices: http://www.firebirdfaq.org/faq167/
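In the same spirit as the linked FAQ, a hedged sketch of recalculating statistics; IDX_ORDERS_CUSTOMER is a hypothetical index name:

    -- Recalculate statistics for one index:
    SET STATISTICS INDEX IDX_ORDERS_CUSTOMER;

    -- Or generate a script covering every user index, then run the
    -- generated lines (for example in isql):
    SELECT 'SET STATISTICS INDEX ' || TRIM(RDB$INDEX_NAME) || ';'
    FROM RDB$INDICES
    WHERE RDB$SYSTEM_FLAG IS DISTINCT FROM 1;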

我很OK 2024-11-18 18:06:41


Check if you have unfinished transactions in your applications. If a transaction is started but neither committed nor rolled back, the database will keep a separate record version for every transaction after the oldest active one.

You can check the database statistics (with gstat or an external tool); they show the oldest transaction and the next transaction. If the difference between those numbers keeps growing, you have a stuck-transaction problem.
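A sketch of that check in SQL, assuming Firebird 2.1 or later (which exposes these counters in the monitoring tables); gstat -h reports the same numbers on the database header page:

    -- A steadily growing gap between the next transaction and the oldest
    -- active transaction points to a stuck transaction.
    SELECT MON$OLDEST_TRANSACTION,
           MON$OLDEST_ACTIVE,
           MON$NEXT_TRANSACTION,
           MON$NEXT_TRANSACTION - MON$OLDEST_ACTIVE AS gap
    FROM MON$DATABASE;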

There are also monitoring tools that can check the situation; one I have used is Sinatica Monitor for Firebird.

Edit: also, the database file never shrinks automatically. Parts of it get marked as unused (after a sweep operation) and will be reused. http://www.firebirdfaq.org/faq41/
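For reference, a manual sweep can be started with the standard gfix tool; the path and credentials below are placeholders:

    # Reclaims old record versions and marks their pages as reusable.
    # The file keeps its size; only a gbak backup/restore cycle shrinks it.
    gfix -sweep -user SYSDBA -password masterkey /data/mydb.fdb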

微凉徒眸意 2024-11-18 18:06:41


The space occupied by deleted records will be reused as soon as it has been garbage collected by Firebird.
If GC is not happening (transaction problems?), the DB will keep growing until GC can do its job.

Also, there is a problem when you do a massive delete in a table (e.g. millions of records): the next SELECT on that table will "trigger" the garbage collection, and performance will drop until GC finishes. The only way to work around this is to do the massive deletes at a time when the server is not heavily used, and to run a sweep after that, making sure there are no stuck transactions. A sketch of this routine follows.
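A minimal sketch of that off-hours routine; audit_log and its column are hypothetical:

    -- Off-hours: delete in one short, committed transaction...
    DELETE FROM audit_log WHERE logged_at < '2010-01-01';
    COMMIT;

    -- ...then pay the garbage-collection cost now, while load is low,
    -- instead of letting the first daytime query pay it.
    SELECT COUNT(*) FROM audit_log;

After that, run a sweep (gfix -sweep, as shown in the previous answer) once you have confirmed there are no stuck transactions.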

Also, keep in mind that if you use "standard" tables to hold temporary data (i.e. info that is inserted and deleted many times), you can end up with a corrupted database in some circumstances. I strongly suggest you start using the Global Temporary Tables feature.
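A minimal example of that feature, available in Firebird 2.1 and later; the table and column names are hypothetical:

    -- Rows are private to each transaction and discarded on COMMIT or
    -- ROLLBACK, so they never leave long-lived garbage in regular tables.
    CREATE GLOBAL TEMPORARY TABLE scratch_import (
        id      INTEGER NOT NULL,
        payload VARCHAR(200)
    ) ON COMMIT DELETE ROWS;

Use ON COMMIT PRESERVE ROWS instead if the data should survive until the connection is closed.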
