What is a good solution for storing an object change history?

Posted 2025-01-02 13:39:49


I need to keep track of changes made to objects in the database.

The trivial implementation would be a mirror table that gets records inserted into it by triggers, either inside the database or within the application. But that affects performance, the mirror table grows huge over time, and it essentially doubles maintenance work whenever the original table has to be altered (the mirror table needs to reflect that change).

Since my biggest requirement here is to have minimal effect on database and application performance, my current preference is to dump changes into syslog-ng over UDP and store them in plain text files.

After all, the changelog is not something that will be accessed frequently, so it's even OK to have it archived over time. But obviously, with such a setup, actually accessing that data is quite tricky.

So I guess my question is: is there already a system that at least partially suits my needs? The perfect fit would be a UDP-accessible, schema-less, append-only database system that can archive data automatically (or at least with minimal configuration needed for that), or whose insert performance degrades very slowly. MongoDB? CouchDB? YourDB?
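
To make the idea concrete, here is roughly what I have in mind; a minimal Python sketch that assumes syslog-ng is listening for UDP datagrams on 127.0.0.1:514, with the object and field names being placeholders:

# Minimal sketch: ship one JSON line per change over UDP (fire-and-forget).
# Assumes syslog-ng listens on 127.0.0.1:514; object/field names are placeholders.
import json
import socket
from datetime import datetime, timezone

def log_change(obj_type, obj_id, field, old_value, new_value,
               host="127.0.0.1", port=514):
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "type": obj_type,
        "id": obj_id,
        "field": field,
        "old": old_value,
        "new": new_value,
    }
    # No connection setup and no waiting for an ack, so the cost on the
    # application's write path stays minimal (at the price of possible loss).
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(json.dumps(record).encode("utf-8"), (host, port))

log_change("car", 42, "color", "red", "blue")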


Comments (6)

百思不得你姐 2025-01-09 13:39:49


Well, there are lots of ways to approach this. I am most familiar with MongoDB, so I would lean in that direction. In general, I think it will meet your needs for performance, and using a replica set with reads coming off the slaves would likely be the approach to take. However, versioning is not built in. You can see one approach to versioning with Mongoid::Versioning here:

Mongoid::Versioning - how to check previous versions?

The other solutions you mentioned may have better native support, but I can't speak to that. Hopefully this at least gives you some pointers on the MongoDB side of things.
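
For a rough illustration of what that could look like outside Ruby (this is not Mongoid::Versioning itself), each change can simply be appended as a document to a separate history collection; the database and collection names, and the mongod on localhost, are assumptions:

# Append-only history collection, one document per change (illustrative names).
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["app"]

def record_change(obj_type, obj_id, changes, user=None):
    # Only ever insert; existing history documents are never updated or deleted.
    db.history.insert_one({
        "ts": datetime.now(timezone.utc),
        "type": obj_type,
        "obj_id": obj_id,
        "changes": changes,   # e.g. {"color": {"old": "red", "new": "blue"}}
        "user": user,
    })

record_change("car", 42, {"color": {"old": "red", "new": "blue"}}, user="alice")

# Reads of the history can be pointed at a secondary so they do not compete
# with writes on the primary, e.g.:
# MongoClient("mongodb://localhost:27017", readPreference="secondaryPreferred")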

网名女生简单气质 2025-01-09 13:39:49


Have a look at mongoid-history.

It tracks the history of changes (what, when, by whom) along with the version. It also comes with configuration options.

追我者格杀勿论 2025-01-09 13:39:49


RavenDB has this feature natively (but as a NoSQL DB it might still be too young for production needs - up to you, of course):

http://ravendb.net/docs/server/bundles/versioning

http://www.slideshare.net/jwoglamott/battle-of-nosql-stars-amazons-sdb-vs-mongodb-vs-couchdb-vs-ravendb

If you want to go with MongoDB, two implementation strategies are suggested in this thread:

Strategy 1 (embed the history in the document) will not impact your write performance, or your read performance if you tweak your code to avoid returning the history when it is not needed; however, you have the 16 MB limit per document (which may or may not be a blocker for you). Strategy 2 (write the history to a separate collection) plainly requires two operations. I agree with what is said there: these (or a combination of them) are the strategies available in MongoDB.

CouchDB uses an MVCC approach internally (which you could leverage, as suggested here), but on SO this kind of approach is debated, since compaction discards old revisions and they cannot be relied on as a long-term history. There is a question on this topic, and the proposed solution is similar to the embedded strategy described above for MongoDB (so you should pick whichever you prefer).
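
As a small illustration of strategy 1, here is a pymongo sketch that embeds the change history inside the document itself; the cars collection, its fields, and the local mongod are assumptions, and the 16 MB document limit mentioned above still applies:

# Strategy 1 sketch: push each change onto an embedded "history" array.
from datetime import datetime, timezone
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["app"]

def update_with_history(car_id, field, new_value):
    doc = db.cars.find_one({"_id": car_id}, {field: 1})
    old_value = doc.get(field) if doc else None
    db.cars.update_one(
        {"_id": car_id},
        {
            "$set": {field: new_value},
            # Exclude "history" via a projection on normal reads so the hot
            # path does not pay for the growing array.
            "$push": {"history": {
                "ts": datetime.now(timezone.utc),
                "field": field,
                "old": old_value,
                "new": new_value,
            }},
        },
    )

update_with_history(42, "color", "blue")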

南薇 2025-01-09 13:39:49


For simple purposes (MySQL!), just use an AFTER UPDATE trigger on the tables you'd like to keep a record of.

For example, for a table cars with the fields

carId (primary key)
color
manufacturer
model

create a table 'cars_history' (or a similar name) with the fields:
carId
field
old_value
new_value

and add an AFTER UPDATE trigger like this:

delimiter //

CREATE TRIGGER trigger_changes
AFTER UPDATE ON cars
FOR EACH ROW
BEGIN
  -- one history row per changed field
  IF OLD.manufacturer <> NEW.manufacturer THEN
    INSERT INTO cars_history
      (carId, field, old_value, new_value)
    VALUES
      (OLD.carId, 'manufacturer', OLD.manufacturer, NEW.manufacturer);
  -- MySQL's one-word ELSEIF needs only a single END IF; use separate
  -- IF ... END IF blocks instead if several fields can change in one UPDATE
  ELSEIF OLD.color <> NEW.color THEN
    ...
  END IF;
END;//
delimiter ;

untested, so may contain syntax errors :) hope that helps anyway!

故事和酒 2025-01-09 13:39:49


What about SQLite? Each DB is a self-contained file that you can easily rename and move when archiving. If the file is renamed or moved, a new one is created automatically on the next insert.

The only issue with SQLite is concurrent writes, which need to lock the file for writing. It can do about 60 transactions per second, but you can do thousands of inserts in a single transaction (see the doc).
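
To illustrate the batching point, here is a short sqlite3 sketch; the file name and table schema are assumptions:

# Group many history inserts into a single transaction so the per-transaction
# fsync cost is paid once instead of per row.
import sqlite3

conn = sqlite3.connect("changelog.sqlite")
conn.execute("""
    CREATE TABLE IF NOT EXISTS history (
        ts      TEXT DEFAULT CURRENT_TIMESTAMP,
        obj_id  INTEGER,
        field   TEXT,
        old_val TEXT,
        new_val TEXT
    )
""")

changes = [(42, "color", "red", "blue"), (42, "model", "A", "B")]
with conn:  # one transaction for the whole batch; commits on success
    conn.executemany(
        "INSERT INTO history (obj_id, field, old_val, new_val) VALUES (?, ?, ?, ?)",
        changes,
    )
conn.close()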

◇流星雨 2025-01-09 13:39:49


I wonder if this is the type of solution you're looking for:
http://www.tonymarston.net/php-mysql/auditlog.html

It appears to be a very simple, elegant solution with a small data footprint and I would expect it to have minimal impact on insert times as well.
