How to increase the default timeout in the Cassandra Java driver using DriverConfigLoader?
Small question regarding a Spring WebFlux reactive Cassandra application, please.
On a Spring Boot 2.6.4 setup with WebFlux and reactive Cassandra, I am using the app to insert some data into Cassandra tables.
Things work fine until the load gets higher, at which point I see an issue (stack trace attached).
The thing is, after reading some documentation, I thought this piece of code would help solve the issue:
@Bean
public DriverConfigLoaderBuilderCustomizer defaultProfile() {
    return builder -> builder
            .withInt(DefaultDriverOption.METADATA_SCHEMA_REQUEST_TIMEOUT, 5000)
            .build();
}
Here is my Cassandra config; note that I am using CqlSession.builder():
protected CqlSession cqlLocalSession() {
    return CqlSession.builder()
            .addContactEndPoint(new DefaultEndPoint(new InetSocketAddress(contactPoints, port)))
            .withKeyspace(keyspace)
            .withLocalDatacenter(datacenter)
            .withAuthCredentials(username, passPhrase)
            .build();
}
Unfortunately, the DriverConfigLoaderBuilderCustomizer does not seem to work, as I am still seeing "Query timed out after PT2S" (I would have at least expected to see PT5S).
May I ask what the issue is, and how to properly increase the timeout, please?
Thank you
org.springframework.data.cassandra.CassandraUncategorizedException: ReactivePreparedStatementCallback; CQL [INSERT INTO mutable (x,y,z) VALUES (?,?,?)]; Query timed out after PT2S; nested exception is com.datastax.oss.driver.api.core.DriverTimeoutException: Query timed out after PT2S
at org.springframework.data.cassandra.core.cql.CassandraExceptionTranslator.translate(CassandraExceptionTranslator.java:162) ~[spring-data-cassandra-3.3.2.jar!/:3.3.2]
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
*__checkpoint ⇢ Handler myHandler [DispatcherHandler]
Original Stack Trace:
at org.springframework.data.cassandra.core.cql.CassandraExceptionTranslator.translate(CassandraExceptionTranslator.java:162) ~[spring-data-cassandra-3.3.2.jar!/:3.3.2]
at org.springframework.data.cassandra.core.cql.ReactiveCassandraAccessor.translate(ReactiveCassandraAccessor.java:149) ~[spring-data-cassandra-3.3.2.jar!/:3.3.2]
at org.springframework.data.cassandra.core.cql.ReactiveCqlTemplate.lambda$translateException$15(ReactiveCqlTemplate.java:820) ~[spring-data-cassandra-3.3.2.jar!/:3.3.2]
at reactor.core.publisher.Flux.lambda$onErrorMap$28(Flux.java:6911) ~[reactor-core-3.4.15.jar!/:3.4.15]
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:94) ~[reactor-core-3.4.15.jar!/:3.4.15]
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:96) ~[spring-cloud-sleuth-instrumentation-3.1.1.jar!/:3.1.1]
at reactor.core.publisher.MonoFlatMapMany$FlatMapManyMain.onError(MonoFlatMapMany.java:204) ~[reactor-core-3.4.15.jar!/:3.4.15]
at reactor.core.publisher.FluxContextWrite$ContextWriteSubscriber.onError(FluxContextWrite.java:121) ~[reactor-core-3.4.15.jar!/:3.4.15]
at org.springframework.cloud.sleuth.instrument.reactor.ScopePassingSpanSubscriber.onError(ScopePassingSpanSubscriber.java:96) ~[spring-cloud-sleuth-instrumentation-3.1.1.jar!/:3.1.1]
at reactor.core.publisher.MonoCompletionStage.lambda$subscribe$0(MonoCompletionStage.java:80) ~[reactor-core-3.4.15.jar!/:3.4.15]
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[na:na]
at com.datastax.oss.driver.internal.core.cql.CqlPrepareAsyncProcessor.lambda$process$1(CqlPrepareAsyncProcessor.java:71) ~[java-driver-core-4.13.0.jar!/:na]
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:859) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:837) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) ~[na:na]
at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2088) ~[na:na]
at com.datastax.oss.driver.internal.core.cql.CqlPrepareHandler.setFinalError(CqlPrepareHandler.java:320) ~[java-driver-core-4.13.0.jar!/:na]
at com.datastax.oss.driver.internal.core.cql.CqlPrepareHandler.lambda$scheduleTimeout$1(CqlPrepareHandler.java:163) ~[java-driver-core-4.13.0.jar!/:na]
at io.netty.util.HashedWheelTimer$HashedWheelTimeout.run(HashedWheelTimer.java:715) ~[netty-common-4.1.74.Final.jar!/:4.1.74.Final]
at io.netty.util.concurrent.ImmediateExecutor.execute(ImmediateExecutor.java:34) ~[netty-common-4.1.74.Final.jar!/:4.1.74.Final]
at io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:703) ~[netty-common-4.1.74.Final.jar!/:4.1.74.Final]
at io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:790) ~[netty-common-4.1.74.Final.jar!/:4.1.74.Final]
at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:503) ~[netty-common-4.1.74.Final.jar!/:4.1.74.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.74.Final.jar!/:4.1.74.Final]
at java.base/java.lang.Thread.run(Thread.java:829) ~[na:na]
Caused by: com.datastax.oss.driver.api.core.DriverTimeoutException: Query timed out after PT2S
at com.datastax.oss.driver.internal.core.cql.CqlPrepareHandler.lambda$scheduleTimeout$1(CqlPrepareHandler.java:163) ~[java-driver-core-4.13.0.jar!/:na]
at io.netty.util.HashedWheelTimer$HashedWheelTimeout.run(HashedWheelTimer.java:715) ~[netty-common-4.1.74.Final.jar!/:4.1.74.Final]
at io.netty.util.concurrent.ImmediateExecutor.execute(ImmediateExecutor.java:34) ~[netty-common-4.1.74.Final.jar!/:4.1.74.Final]
at io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:703) ~[netty-common-4.1.74.Final.jar!/:4.1.74.Final]
at io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:790) ~[netty-common-4.1.74.Final.jar!/:4.1.74.Final]
at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:503) ~[netty-common-4.1.74.Final.jar!/:4.1.74.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.74.Final.jar!/:4.1.74.Final]
at java.base/java.lang.Thread.run(Thread.java:829) ~[na:na]
3 Answers
You have configured the wrong option on the driver. METADATA_SCHEMA_REQUEST_TIMEOUT is the timeout for the requests that retrieve the schema.
The default request timeout for the Java driver is basic.request.timeout, which is 2 seconds out of the box (see the reference configuration). The equivalent typed driver option for basic.request.timeout is REQUEST_TIMEOUT (reference: TypedDriverOption.java).
I should point out that increasing the request timeout is almost never the correct way of handling the issue you're facing, because you're just hiding the problem without actually addressing the root cause.
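That said, if you still want to raise it, here is a minimal sketch of a customizer that targets the request timeout instead, assuming the CqlSession comes from Spring Boot's auto-configuration so the customizer is actually applied (class and bean names are illustrative):

import java.time.Duration;

import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import org.springframework.boot.autoconfigure.cassandra.DriverConfigLoaderBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RequestTimeoutConfig {

    // Raise basic.request.timeout (DefaultDriverOption.REQUEST_TIMEOUT) from its 2 s default to 5 s.
    @Bean
    public DriverConfigLoaderBuilderCustomizer requestTimeoutCustomizer() {
        return builder -> builder.withDuration(DefaultDriverOption.REQUEST_TIMEOUT, Duration.ofSeconds(5));
    }
}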
As you already pointed out at the start of your post, the problem arises when the load is higher. The reason you're getting DriverTimeoutException is that the coordinators do not respond in time, and it's not difficult to conclude that this is due to the nodes being overloaded.
Writes in Cassandra are very fast because the mutations are appended to the end of the commitlog -- there are no disk seeks. If the disk used for the commitlog is slow or cannot keep up with the writes, you need to look at:
- making sure the commitlog is on a separate disk/volume, so it's not competing for the same IO as the reads
You can also fix the issue by setting DefaultDriverOption.METADATA_SCHEMA_ENABLED to false, for example in a Kotlin reactive Spring Data Cassandra setup.
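A minimal sketch of that override, written in Java for consistency with the rest of the post (the same DefaultDriverOption constant is used from Kotlin); class and bean names are illustrative:

import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import org.springframework.boot.autoconfigure.cassandra.DriverConfigLoaderBuilderCustomizer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SchemaMetadataConfig {

    // Disable schema metadata (advanced.metadata.schema.enabled = false)
    // so the driver stops issuing schema refresh queries.
    @Bean
    public DriverConfigLoaderBuilderCustomizer disableSchemaMetadata() {
        return builder -> builder.withBoolean(DefaultDriverOption.METADATA_SCHEMA_ENABLED, false);
    }
}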
If you get the error while creating the schema, you can configure advanced.metadata.schema.request-timeout; if the error comes when executing some non-schema queries, you can configure basic.request.timeout.
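For example, here is a sketch of applying both settings programmatically through a DriverConfigLoader when building the CqlSession yourself, as in the question's config (method and parameter names are illustrative, and the 5-second values are just examples):

import java.net.InetSocketAddress;
import java.time.Duration;

import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.config.DefaultDriverOption;
import com.datastax.oss.driver.api.core.config.DriverConfigLoader;
import com.datastax.oss.driver.internal.core.metadata.DefaultEndPoint;

public class TimeoutConfigExample {

    // Builds a CqlSession with both timeouts raised:
    //   basic.request.timeout                    -> regular (non-schema) requests
    //   advanced.metadata.schema.request-timeout -> schema refresh requests
    static CqlSession buildSession(String host, int port, String keyspace,
                                   String datacenter, String username, String password) {
        DriverConfigLoader loader = DriverConfigLoader.programmaticBuilder()
                .withDuration(DefaultDriverOption.REQUEST_TIMEOUT, Duration.ofSeconds(5))
                .withDuration(DefaultDriverOption.METADATA_SCHEMA_REQUEST_TIMEOUT, Duration.ofSeconds(5))
                .build();

        return CqlSession.builder()
                .withConfigLoader(loader)
                .addContactEndPoint(new DefaultEndPoint(new InetSocketAddress(host, port)))
                .withKeyspace(keyspace)
                .withLocalDatacenter(datacenter)
                .withAuthCredentials(username, password)
                .build();
    }
}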