How should I persist an entity and publish its events transactionally?
In this simple example, we load an entity from a repository, ask it to perform an operation and then insert the result (a new state) through the repository again.
const doSomething = (personDao: PersonRepository) => (personId: PersonId) => {
  const person = personDao.findBy(personId)
  const newState = person.doSomething()
  personDao.insert(newState)
  return newState
}
However, if person.doSomething() not only returns its new state but also a domain event, how am I supposed to persist the state and publish the event transactionally?
const doSomething = (eventPublisher: EventPublisher) => (personDao: PersonRepository) => (personId: PersonId) => {
  const person = personDao.findBy(personId)
  const [event, newState] = person.doSomething()
  personDao.insert(newState)
  eventPublisher.publish(event)
  return newState
}
That is, if personDao.insert() goes through and eventPublisher.publish() does not, the insertion should be rolled back. Making sure events are published is essential, since other applications will need them.
Possible solution

PersonRepository should persist the person, and also the events in a separate table. Something (?) should then poll the events table for new events and send them to a message broker.
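A minimal sketch of that outbox idea, using in-memory arrays as stand-ins for the two tables (all names here are hypothetical; with a real RDBMS, insertWithOutbox would run inside a single BEGIN/COMMIT transaction):

```typescript
// Hypothetical in-memory stand-ins for the persons table and the outbox table.
type PersonRow = { id: string; state: string }
type EventRow = { seq: number; payload: string; dispatched: boolean }

const personsTable: PersonRow[] = []
const eventsTable: EventRow[] = []

// The repository writes the new state AND the event in one unit of work;
// in a real database this would be one atomic transaction.
function insertWithOutbox(person: PersonRow, eventPayload: string): void {
  personsTable.push(person)
  eventsTable.push({ seq: eventsTable.length, payload: eventPayload, dispatched: false })
}

// The "something (?)" that polls: it picks up undispatched events, hands them
// to the broker, and marks them dispatched only after the send succeeds.
function pollOutbox(publish: (payload: string) => void): number {
  let sent = 0
  for (const row of eventsTable) {
    if (!row.dispatched) {
      publish(row.payload)   // e.g. send to Kafka/RabbitMQ
      row.dispatched = true  // marked only after a successful publish
      sent++
    }
  }
  return sent
}
```

Note the resulting delivery guarantee is at-least-once: if the process crashes between the publish and the dispatched update, the event is sent again on the next poll, so consumers must be idempotent.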
I'm not sure how feasible this idea is.
And what if I am using a database that does not support transactions, such as Cassandra? Or MongoDB, in which case I would have to store the events inside the Person document? I am not using either; it is just a thought I had.
Thanks.
1 Answer
There are two broad solutions to this:
First is the transactional outbox pattern, in which you persist the domain events and the updated entity in one atomic transaction. The events can then be picked up by polling or by change data capture and fed to a message broker.
Second is the event sourcing approach, in which you persist only the events (domain events, but also, potentially, events which are "implementation details" of your representation). The events can then be used to reconstruct the entity; as an optimization, another process can poll (or change-data-capture) the events and update a snapshot of the entity. In that case, reconstructing an entity means loading the latest snapshot (which includes a version) and then applying the events recorded after that version to the snapshot.
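The snapshot-plus-replay reconstruction described above could be sketched like this (the event and snapshot shapes are assumptions for illustration, not part of the question):

```typescript
// Hypothetical shapes: each event carries the version it produces, and a
// snapshot records the entity state as of some version.
type DomainEvent = { version: number; apply: (state: string[]) => string[] }
type Snapshot = { version: number; state: string[] }

// Rebuild the entity: start from the latest snapshot, then replay only the
// events recorded after that snapshot's version, in version order.
function reconstruct(snapshot: Snapshot, events: DomainEvent[]): string[] {
  return events
    .filter(e => e.version > snapshot.version)
    .sort((a, b) => a.version - b.version)
    .reduce((state, e) => e.apply(state), snapshot.state)
}
```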
Because your events and your snapshots/entities are different data models, for datastores which are table-oriented you will almost certainly want those in different tables; for datastores which are more stream-oriented, you will likewise want them in separate streams. Accordingly, the suitability of a transactional outbox approach rests on the ability to atomically write to multiple tables/streams. Event sourcing doesn't have that requirement.
I would characterize the approach of writing an entity and the events together as event sourcing (where the events happen to include the snapshot). If going that route, I would then have a consumer (fed by polling or by change-data-capture) demultiplex the paired events and snapshot.