Spring JPA ObjectOptimisticLockingFailureException without using batch inserts/updates
To give you some context about the issue I am facing: this is a customer table in a Postgres database, and its status is updated by an EventHandler that picks up events from a single SQS queue. This error comes up:
ObjectOptimisticLockingFailureException
Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1; statement executed: update customer set created_by=?, lock_id=?, modifiedat=?, modified_by=?, app_id=?, client=?, comments=?, customer_id=?, decision=?, source=? where id=? and lock_id=?; nested exception is org.hibernate.StaleStateException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1; statement executed: update customer set created_by=?, lock_id=?, modifiedat=?, modified_by=?, application_id=?, client_app=?, comments=?, customer_id=?, decision=?, source=? where id=? and lock_id=?
Now this error suggests that a batch update is happening, but nowhere in this function am I doing batch updates. The code where this insert/update happens is:
public Customer updateOrCreateCustomer(int customerId, String applicationId, String status) {
    Customer customer = customerRepository.findByCustomerId(customerId);
    if (customer == null) {
        customer = new Customer();
        customer.setCustomerId(customerId);
        customer.setApplicationId(applicationId);
        customer.setStatus(status);
        log.info("Creating Customer with Customer Id - {} Application Id - {}", customerId, applicationId);
    } else {
        customer.setStatus(status);
        log.info("Updating Customer with Customer Id - {} Application Id - {}", customerId, applicationId);
    }
    return customerRepository.save(customer);
}
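For completeness, the Customer entity maps lock_id as an optimistic-locking version column. A trimmed sketch (not the full class; field names are inferred from the SQL in the error message):

// Trimmed sketch of the entity mapping, not the actual class; the point is the
// @Version field backing lock_id, which Hibernate checks in every UPDATE.
@Entity
@Table(name = "customer")
public class Customer {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Version
    @Column(name = "lock_id")
    private Long lockId;

    @Column(name = "customer_id")
    private Integer customerId;

    @Column(name = "application_id")
    private String applicationId;

    private String status;

    // audit fields (created_by, modified_by, modifiedat, ...) and
    // getters/setters omitted
}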
Also, in my application.yml I have set the batch_size property of JPA to 50, but that is used by a different API where I need to do batch inserts:
jpa:
  hibernate:
    ddl-auto: none
  open-in-view: false
  properties:
    generate_statistics: false
    hibernate:
      order_inserts: true
      jdbc:
        batch_size: 50
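That bulk-insert path looks roughly like this (an illustrative sketch only; the real method name and call site are different):

// Illustrative sketch of the separate bulk-import API that relies on batch_size.
@Transactional
public void importCustomers(List<Customer> customers) {
    // With hibernate.jdbc.batch_size=50 and order_inserts=true, Hibernate groups
    // these INSERTs into JDBC batches when the persistence context is flushed.
    customerRepository.saveAll(customers);
}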
I don't know why JPA is doing batch updates in the updateOrCreateCustomer function. My guess is that many requests arrive at the same time, so JPA sees that batch_size is set and automatically combines all of these queries into one batch to optimize the inserts/updates. Please help.
Comments (1)
I think that if your object is modified in another thread, and you then try to commit that same object from a different thread at the same time, you may get an ObjectOptimisticLockingFailureException.
The solution I know of is to raise the isolation level so that the items are committed one by one and there is no synchronization error.
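In Spring, the isolation level can be raised on the transaction that performs the update, for example like this (a rough sketch of the idea, not the poster's actual code; whether SERIALIZABLE fully removes the error depends on how the concurrent SQS events are processed):

import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

// Sketch: run the update in a transaction with a stricter isolation level so
// concurrent updates to the same customer row are serialized by the database.
@Transactional(isolation = Isolation.SERIALIZABLE)
public Customer updateOrCreateCustomer(int customerId, String applicationId, String status) {
    Customer customer = customerRepository.findByCustomerId(customerId);
    if (customer == null) {
        customer = new Customer();
        customer.setCustomerId(customerId);
        customer.setApplicationId(applicationId);
    }
    customer.setStatus(status);
    return customerRepository.save(customer);
}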