How can I use JdbcTemplate in a multithreaded environment?
I'm trying to use Spring JdbcTemplate with Spring's SimpleAsyncTaskExecutor so that concurrent connections can be made to the database and the whole data set can be inserted into the related table in less time than in a single-threaded environment.
I'm using the following configuration, however it doesn't speed up my application.
The only clue I could find is that the bean "campaignProductDBWriter" is constructed only once, whereas I'm expecting 10 separate instances to be created because I set "throttle-limit" to 10 in the tasklet.
What am I doing wrong? Any help or suggestions will be greatly appreciated.
Regards,
<bean id="dataSourceProduct"
class="org.springframework.jdbc.datasource.DriverManagerDataSource"
p:driverClassName="${jdbc.driverClassName}" p:url="${jdbc.url.product}"
p:username="${jdbc.username.product}" p:password="${jdbc.password.product}"
/>
<bean id="jdbcTemplateProduct" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="dataSourceProduct" />
</bean>
<bean id="simpleTaskExecutor" class="org.springframework.core.task.SimpleAsyncTaskExecutor" >
<property name="concurrencyLimit" value="-1" />
</bean>
<batch:job id="sampleJob" restartable="true" incrementer="dynamicJobParameters">
<batch:step id="mapMZList">
<batch:tasklet allow-start-if-complete="true" task-executor="simpleTaskExecutor" throttle-limit="10">
<batch:chunk reader="campaignProductItemReader" processor="campaignProductProcessor" writer="campaignProductDBWriter" commit-interval="5000"/>
</batch:tasklet>
</batch:step>
</batch:job>
<bean id="campaignProductDBWriter" class="com.falcon.cc.job.step.CampaignProductWriter">
<property name="jdbcTemplate" ref="jdbcTemplateProduct" />
</bean>
<bean id="campaignProductItemReader" class="com.falcon.cc.job.step.FlatFileSynchronizedItemReader" scope="step">
<property name="resource" value="file:#{jobParameters['input.TEST_FILE.path']}"/>
<property name="lineMapper">
<bean class="org.springframework.batch.item.file.mapping.DefaultLineMapper">
<property name="lineTokenizer">
<bean class="org.springframework.batch.item.file.transform.DelimitedLineTokenizer">
<property name="delimiter" value=";"/>
<property name="names" value="approvalStatus,validFrom,validTo"/>
</bean>
</property>
<property name="fieldSetMapper">
<bean class="com.falcon.cc.mapper.CampaignProductFieldSetMapper" />
</property>
</bean>
</property>
</bean>
2 Answers
This is not a problem with your Spring config, or with how you're using JdbcTemplate, which is just a thin, stateless wrapper around the JDBC API.
The most obvious likelihood is that your bottleneck is your database, not your code. It's entirely possible that running multiple concurrent operations against the database is no faster than doing them one at a time.
There could be several reasons for this, such as database locking, or just a lack of raw I/O performance.
When considering using multithreading to improve performance, you have to be sure where your bottlenecks are. If your code isn't the bottleneck, then making it multithreaded isn't going to make things any faster.
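As an illustration of the "thin, stateless wrapper" point, here is a minimal sketch (not part of the original post) that shares one JdbcTemplate instance across ten threads; a single, correctly configured instance is safe to use concurrently, so the lack of speed-up is not caused by having only one writer or template. The H2 driver, in-memory URL, and table/column names are placeholders chosen for the example.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

public class SharedJdbcTemplateDemo {

    public static void main(String[] args) throws InterruptedException {
        // Placeholder driver and URL; any JDBC-accessible database would do.
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("org.h2.Driver");
        dataSource.setUrl("jdbc:h2:mem:demo;DB_CLOSE_DELAY=-1");

        // One JdbcTemplate for the whole application; it holds no per-call state.
        JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);
        jdbcTemplate.execute("CREATE TABLE campaign_product (id INT, status VARCHAR(20))");

        ExecutorService pool = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 100; i++) {
            final int id = i;
            // Concurrent calls on the same instance are safe; whether they are
            // actually faster depends on the database, not on JdbcTemplate itself.
            pool.submit(() -> jdbcTemplate.update(
                    "INSERT INTO campaign_product VALUES (?, ?)", id, "APPROVED"));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);

        Integer rows = jdbcTemplate.queryForObject(
                "SELECT COUNT(*) FROM campaign_product", Integer.class);
        System.out.println("Inserted rows: " + rows);
    }
}

If a run like this takes roughly as long as inserting the same rows sequentially, the limiting factor is the database side (connections, locking, disk I/O) rather than the Spring wiring.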
When Spring's context is initialized, it creates all instances declared in the context.
<bean id="campaignProductDBWriter" class="com.falcon.cc.job.step.CampaignProductWriter">
<property name="jdbcTemplate" ref="jdbcTemplateProduct" />
</bean>
This code will result in Spring creating one instance of CampaignProductWriter, which will be a singleton (by default the scope is singleton). In order to get a new instance of your bean, its scope has to be prototype.
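A minimal sketch of that scope difference (the configuration file name "batch-config.xml" is hypothetical, and the sketch assumes a context containing the bean definitions from the question): resolving the same bean id twice shows why only one writer is ever constructed under the default singleton scope.

import org.springframework.context.support.ClassPathXmlApplicationContext;

public class BeanScopeDemo {

    public static void main(String[] args) {
        // "batch-config.xml" is a placeholder for the configuration shown above.
        ClassPathXmlApplicationContext ctx =
                new ClassPathXmlApplicationContext("batch-config.xml");

        // Default (singleton) scope: every lookup returns the same instance,
        // which is why campaignProductDBWriter is constructed only once.
        Object w1 = ctx.getBean("campaignProductDBWriter");
        Object w2 = ctx.getBean("campaignProductDBWriter");
        System.out.println("same instance: " + (w1 == w2)); // prints true

        // With <bean ... scope="prototype"> each getBean() call would create a
        // fresh instance instead, and w1 == w2 would print false.

        ctx.close();
    }
}

Note that the question's campaignProductItemReader already uses scope="step", which similarly yields a fresh instance for each step execution rather than one application-wide singleton.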