Hibernate 2nd level cache ObjectNotFoundException with high numbers of concurrent transactions

Posted 2024-09-07 02:38:43


We have a Java application that uses MySQL, Hibernate (3.5.1-Final) and EHcache(1.2.3) for our 2nd level cache.

Our hibernate.properties sets the connection isolation level to read committed (2):

# 2-Read committed isolation 
hibernate.connection.isolation=2

Under a high number of concurrent transactions, we're seeing an issue where certain collections (DB associations) throw an ObjectNotFoundException when loaded, and the 2nd level cache appears to be returning a stale copy of that collection.

We have many different types of transactions that access this collection (only reading) and only a couple that will add/delete items from it.

We don't see this issue under single transaction load or even moderate transaction load (10 - 20 concurrent connections).

For example, we have a Character entity:

@Entity
@Table(name = "CHARACTERS")
@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
public class Character extends AbstractCharacter implements Serializable {
...
    @Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
    @OneToMany(mappedBy = "character", cascade = CascadeType.ALL, fetch = FetchType.LAZY)
    private Set<CharacterItem> items;
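For context, the owning side of this association would look roughly like the following. The CharacterItem mapping isn't shown in the question, so the class body, field, and column names here are illustrative assumptions:

```java
// Assumed owning side of the Character.items association (not shown in the
// original post); table, column, and field names are illustrative guesses.
@Entity
@Table(name = "CHARACTER_ITEMS")
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
public class CharacterItem implements Serializable {
    @Id
    private Long id;

    // Owning side: maps back to the mappedBy = "character" attribute
    // on Character.items.
    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "CHARACTER_ID")
    private Character character;
...
```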

We are properly maintaining the object graph when deleting entities by removing them from the collection that they're contained in and calling session.delete().

    character.getItems().remove(characterItem);
    session.delete(characterItem); 

We've tried changing the items collection's CacheConcurrencyStrategy from:

@Cache(usage = CacheConcurrencyStrategy.NONSTRICT_READ_WRITE)
private Set<CharacterItem> items;

To

@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
private Set<CharacterItem> items;

With no luck.

We don't use database locks; instead, we use optimistic concurrency control to catch and retry conflicting transactions.
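That catch-and-retry pattern can be sketched roughly as follows. This is a self-contained illustration, not our actual code: ConflictException stands in for Hibernate's StaleObjectStateException, and the per-attempt session/transaction plumbing is omitted.

```java
import java.util.concurrent.Callable;

// Rough sketch of an optimistic catch-and-retry loop. In the real
// application the caught exception would be Hibernate's
// StaleObjectStateException and each attempt would open its own
// session and transaction; both are elided here.
public class OptimisticRetry {
    public static class ConflictException extends RuntimeException {}

    // Run the transactional work, retrying on version conflicts up to
    // maxAttempts times; rethrow the last conflict if all attempts fail.
    public static <T> T withRetry(int maxAttempts, Callable<T> work) throws Exception {
        ConflictException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return work.call();
            } catch (ConflictException e) {
                last = e; // another transaction won the race; retry
            }
        }
        throw last;
    }
}
```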

The only 2 solutions we can see at this point are:

  1. Try to catch the ObjectNotFoundException and intelligently evict the collection (although there doesn't seem to be enough context in the exception)

  2. Use the @NotFound(action=NotFoundAction.IGNORE) annotation on the items collection, which will swallow the ObjectNotFoundException rather than throw it (but we have concerns about how this interacts with the 2nd level cache and whether it ensures we're looking at the proper data).
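Option 1 could be sketched like the snippet below, assuming the owning Character's id is available at the catch site. The package name and collection role name are assumptions; SessionFactory.evict and SessionFactory.evictCollection are real Hibernate 3.x methods, but whether this recovers cleanly in all cases is exactly what's uncertain.

```java
// Sketch of option 1: on a stale cached collection, evict the 2nd level
// cache entries and retry the read. The "com.example" package and the
// collection role name are assumptions for illustration.
try {
    character.getItems().size(); // force lazy initialization
} catch (ObjectNotFoundException e) {
    // Evict the cached collection for this owner and the item region...
    sessionFactory.evictCollection("com.example.Character.items", character.getId());
    sessionFactory.evict(CharacterItem.class);
    // ...then reload the Character in a fresh session/transaction and retry.
}
```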

I wish there was a @NotFound(action=NotFoundAction.EVICT_2ND_LEVEL_CACHE_RELOAD) where it would evict that object from the cache and attempt to reload the collection.

We could also try changing the FetchType from LAZY to EAGER, but I want to understand the problem and choose the solution that best ensures the data in our transactions stays consistent under high concurrency.


Comments (1)

烟凡古楼 2024-09-14 02:38:43

Maybe you should try session.evict(characterItem) instead of session.delete?
