How to do transactions that operate on multiple distributed maps in a distributed cache product
By distributed cache product I mean something like Coherence or Hazelcast. I will use Hazelcast as the example.
Suppose I have an object that keeps state in a number of maps:
class DataState {
Map<ID, Dog> dogs = Hazelcast.getMap("dog");
Map<ID, Owner> owners = Hazelcast.getMap("owner");
public void associate(Dog dog, Owner owner) {
/* ... put in maps and set up references */
}
}
Note that the associate() function needs to be transactional because it modifies multiple maps. Since dogs and owners are somehow associated, it may be that the data is in an inconsistent state until the method completes. Now if another class reads from the distributed memory, it has no idea that a transaction is happening and may see data inconsistently.
class DataStateClient {
Map<ID, Dog> dogs = Hazelcast.getMap("dog");
Map<ID, Owner> owners = Hazelcast.getMap("owner");
public void doSomething() {
// oops, owner2 is associated with dog1 but
// dog1 is not yet in the map!
}
}
Now, Hazelcast has distributed locks to solve something like this, but what are the performance implications? Suppose that doSomething() is expensive (for example, copying both maps locally) in which case it may not be adequate to lock out multiple clients.
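For illustration, a plain lock-based version might look roughly like the sketch below. It reuses the placeholder ID/Dog/Owner types from the snippets above, assumes getId() accessors on them, and assumes Hazelcast's older static API in which Hazelcast.getLock(...) returns a distributed java.util.concurrent.locks.Lock shared by everyone who asks for the same name:

class LockedDataState {
    Map<ID, Dog> dogs = Hazelcast.getMap("dog");
    Map<ID, Owner> owners = Hazelcast.getMap("owner");
    // one cluster-wide lock guarding both maps
    Lock mapsLock = Hazelcast.getLock("dog-owner-lock");

    public void associate(Dog dog, Owner owner) {
        mapsLock.lock();   // excludes every other holder of the same lock
        try {
            owners.put(owner.getId(), owner);
            dogs.put(dog.getId(), dog);
        } finally {
            mapsLock.unlock();
        }
    }
}

class LockedDataStateClient {
    Map<ID, Dog> dogs = Hazelcast.getMap("dog");
    Map<ID, Owner> owners = Hazelcast.getMap("owner");
    Lock mapsLock = Hazelcast.getLock("dog-owner-lock"); // same named lock as the writer

    public void doSomething() {
        mapsLock.lock();   // a consistent view, but other readers are shut out too
        try {
            // copy both maps locally, etc.
        } finally {
            mapsLock.unlock();
        }
    }
}

Every reader has to take the same lock around its whole read to get a consistent view, so one expensive doSomething() blocks all writers and all other readers, which is exactly the concern above.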
Is there a standard solution to this distributed synchronization problem?
If you want to serialize write access (mutual exclusion), a distributed lock is the way to go. If you were using Cacheonix, your example could perform better with Cacheonix read/write locks: readers would get concurrent read access and wouldn't have to wait for a single server to finish, as they would if a simple mutex were used:
Writer:
...
Readers:
...
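The Writer and Readers snippets above were not preserved, but the pattern being described would look roughly like this. It is only a sketch: the Cacheonix.getInstance().getCluster().getReadWriteLock() call chain and the variable names (dataState, client, dog, owner) are assumptions for illustration, not verified Cacheonix API; the point is simply that the writer takes the write half and readers take the read half of one cluster-wide ReadWriteLock:

// Writer: the exclusive half, so no reader can observe the maps mid-update
ReadWriteLock rwLock = Cacheonix.getInstance().getCluster().getReadWriteLock();
Lock writeLock = rwLock.writeLock();
writeLock.lock();
try {
    dataState.associate(dog, owner);
} finally {
    writeLock.unlock();
}

// Readers: many may hold the read half at once, so an expensive
// doSomething() delays writers but not other readers
Lock readLock = rwLock.readLock();
readLock.lock();
try {
    client.doSomething();   // e.g. copy both maps locally under a consistent view
} finally {
    readLock.unlock();
}

A writer still has to wait for all current readers to release the read lock, so very long reads delay updates, but readers no longer serialize behind one another as they would with a plain mutex.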