JPA database deadlock when persisting an object with a persisted reference
I have been struggling with this problem for a few days and I can't quite find a solution that I am happy with. I can probably work around it via various degrees of indirection, but this seems like the sort of thing I should be able to do and while I've found a few other people who might have my problem, none of them seem to have the exact problem or there are no answers provided.
While I've done this successfully using Hibernate before, I haven't been able to get it to work with JPA, which is what I am focusing on doing now.
Preliminary information:
- Guice 3.0 application using the persist extension, hibernate (w/ annotations), and c3p0 (with up to 5 connections). We are injecting the EntityManager from the provider, and I can confirm that the EntityManager object remains the same as does the transaction object throughout this process.
- Designed as part of a web application, current problems are only occurring in automated integration tests that use JUnit and dbUnit. Access is against a singleton object (.in(Scopes.SINGLETON)), but that Singleton object only depends on the injector and @Transactional to be thread safe.
- In the database there is a files table which is mapped to an annotated FileContainer and a mimetype table that is mapped to an object MIMEType. In the application when we create a new file we retrieve the MIMEType object first as a NamedQuery and persist it.
- Database is PostgreSQL 8.4, using the postgresql-8.4-702.jdbc3.jar.
- The system is moderately picky about the circumstances under which it fails: I need to spawn a separate thread and perform the access from there.
The mapping for the mimetype field in the FileContainer object is:
@ManyToOne(fetch=FetchType.EAGER, cascade={})
@JoinColumn(name="mimetype_id", referencedColumnName="id", nullable=false, updatable=true)
The MIMEType object is annotated as follows:
@Entity
@Table(name="mimetypes")
@org.hibernate.annotations.Immutable
@Cacheable(true)
@NamedQuery(name="mt.ext", query="from MIMEType where extension = :ext",
hints= {@QueryHint(name="org.hibernate.fetchSize", value="1"),
@QueryHint(name="org.hibernate.readOnly", value="true")})
All above annotations are the javax.persistence versions.
The `addFile` method is annotated with `@com.google.inject.persist.Transactional`. Changing this to use an injected EntityManager in the class does not appear to change the outcome.
The process for creating the object goes like this:
- Grab the EntityManager and the FileDAO from the injector. These are both coming from a provider.
- Grab the MIMEType object we will be using from the database by calling em.createNamedQuery("mt.ext", MIMEType.class).setParameter("ext", extension).getSingleResult();
- Create the `FileContainer` object and populate it, including a call to `setMimetype(MIMEType)` with the object we just retrieved.
- Call `dao.save(fileContainer)`, which sets a basic field in the `FileContainer` object and then calls `em.persist(Object)`. Pulling this out of the DAO and using the `EntityManager` injected earlier does not change the outcome.
At this point, as soon as `EntityManager.flush()` occurs (wherever that happens, at the end of the transaction or before it), the INSERT happens and the code deadlocks.
When I check `pg_stat_activity` and compare it against `pg_locks`, I see that the insert statement is being blocked by an "idle in transaction" connection and that it is in a waiting state. Removing the insert of the MIMEType (and setting the column to allow for nulls) allows the code to proceed normally, and checking the values of the MIMEType indicates that it was retrieved correctly.
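For reference, that check can be done with a query along these lines, held here as a constant for use over JDBC. This is a sketch: the column names `procpid` and `current_query` are the PostgreSQL 8.4-era names (later versions renamed them to `pid` and `query`).

```java
// Diagnostic sketch for PostgreSQL 8.4: join pg_locks to pg_stat_activity so
// that ungranted (waiting) locks appear next to the query each backend is
// running; the blocker shows up as "<IDLE> in transaction" in current_query.
class LockDiagnostics {
    static final String WAITING_LOCKS_SQL =
        "SELECT a.procpid, a.current_query, l.locktype, l.mode, l.granted "
      + "FROM pg_locks l JOIN pg_stat_activity a ON l.pid = a.procpid "
      + "ORDER BY l.granted, a.procpid";
}
```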
Any thoughts or ideas are appreciated.
EDIT:
This is the hierarchy, hopefully it clarifies the order of operations and what is going on:
- Testing Harness (Initializes Injector and uses a runner to inject it into the specific test classes)
- Test Case
- Thread Started
- Enter transactional block on the singleton object (initialized with the same injector) from within the thread; confirmed that the `EntityManager` objects are identical in all of the DAO objects and where the fault is happening. The `EntityManager` object is consistently acquired with an `injector.getInstance(EntityManager.class)` call or by acquiring the `Provider` object.
- Go through the above process, all in the same thread.
- Freeze on `flush()`
The code does not appear to have anything thread-local that is being shared between threads, and everything is being initialized with the injector, which is what is baffling to me.
The only thing I'm picking up on from what you've said here that seems likely to be the problem is this:
Guice-Persist units of work and transactions are scoped to a thread using `ThreadLocal`. I can definitely see something like that happening if you are doing something on a separate thread (though I'm not sure how exactly). Could you explain in more detail when and how this separate thread is being used? From inside the `@Transactional` method? Is it using an `EntityManager` that was injected on the original thread?

Edit: Hmm... I don't see anything wrong with what you're doing. Seems like the work and the transaction are all being done on one thread. Not sure what's going on there.
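The thread scoping described here can be illustrated with plain Java. This is a sketch of the mechanism only, not Guice-Persist's actual implementation: a value bound in a `ThreadLocal` on one thread is simply absent on any other thread.

```java
import java.util.concurrent.atomic.AtomicReference;

// Minimal illustration of why ThreadLocal-scoped state (like a Guice-Persist
// unit of work) does not carry over to a spawned thread.
class ThreadLocalScopeDemo {
    private static final ThreadLocal<String> unitOfWork = new ThreadLocal<>();

    static String[] run() throws InterruptedException {
        unitOfWork.set("tx-on-main-thread"); // "begin" on the calling thread
        AtomicReference<String> seenOnWorker = new AtomicReference<>("unset");
        Thread worker = new Thread(() ->
            // The worker thread gets its own, empty slot.
            seenOnWorker.set(String.valueOf(unitOfWork.get())));
        worker.start();
        worker.join();
        return new String[] { unitOfWork.get(), seenOnWorker.get() };
    }
}
```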
I found the problem.
Elsewhere in the code I had an unrelated but ongoing transaction that was being kicked off in an `@Before` method. For some reason the `EntityManager` was blocking on that transaction completing. This wasn't a problem when everything was in the same thread and for certain read-only operations, but when the work got split into a separate thread and tried to write, it started waiting on that other transaction, which wasn't finishing in time and touched many of the same tables.