I am using a Cassandra cluster with 4 nodes. 2 nodes have much more resources than the other two in terms of CPU cores and RAM.
Right now I am using the DCAwareRoundRobin load balancing policy. I believe that with this policy, all nodes receive the same number of requests. Because of this, the 2 smaller nodes are NOT performing well, which results in high IO and CPU usage on the smaller nodes.
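For context, this is roughly how the policy is wired up on my side (a minimal sketch against the 2.x Java driver; the contact point and data center name are placeholders):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;

public class ClusterFactory {
    public static Session connect() {
        Cluster cluster = Cluster.builder()
                .addContactPoint("10.0.0.1")                    // placeholder contact point
                // DCAwareRoundRobinPolicy round-robins over the nodes of the local DC,
                // so every node gets roughly the same share of coordinator traffic.
                .withLoadBalancingPolicy(new DCAwareRoundRobinPolicy("DC1"))  // placeholder DC name
                .build();
        return cluster.connect();
    }
}
```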
I want to distribute the traffic from the Java application to the Cassandra cluster in the ratio of available resources.
For example: small node 1 - 20%, small node 2 - 20%, large node 1 - 30%, large node 2 - 30% of queries.
I need your suggestion on any method or approach I can use to distribute the traffic in this manner.
I understand that I can use LatencyAwarePolicy [1]. My worry is that when a node is taken out of the query plan due to a threshold breach, the remaining nodes might see a ripple effect.
[1] https://docs.datastax.com/en/drivers/java/2.2/com/datastax/driver/core/policies/LatencyAwarePolicy.html
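For reference, wrapping the current policy in LatencyAwarePolicy would look roughly like this (a sketch against the 2.x driver; the threshold and retry values are made-up examples, and the exclusion behaviour they control is exactly what I am worried about):

```java
import java.util.concurrent.TimeUnit;

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.LatencyAwarePolicy;

public class LatencyAwareExample {
    public static Cluster build() {
        // LatencyAwarePolicy tracks a per-node latency average and temporarily
        // excludes nodes that are slower than the fastest node by more than the
        // exclusion threshold; the excluded node's traffic shifts to the rest.
        LatencyAwarePolicy policy = LatencyAwarePolicy
                .builder(new DCAwareRoundRobinPolicy("DC1"))   // placeholder DC name
                .withExclusionThreshold(2.0)                   // example: exclude nodes > 2x slower
                .withRetryPeriod(2, TimeUnit.MINUTES)          // example: retry an excluded node after 2 min
                .build();

        return Cluster.builder()
                .addContactPoint("10.0.0.1")                   // placeholder contact point
                .withLoadBalancingPolicy(policy)
                .build();
    }
}
```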
In short, the approach you have is an anti-pattern and should be avoided at all costs.
TL;DR: Make the nodes the same.
During request processing, the query will reach not a single node but all the nodes responsible for the partition you are accessing, based on the replication factor, no matter what consistency level you use. If you have RF=3, 3 of your 4 nodes will be hit for each write or read request. You do not have, and should not have, control over the request distribution; "the medicine would be worse than the sickness".
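To illustrate the replication point (a sketch; the keyspace, DC name, and contact point are placeholders): with a keyspace defined like the one below, every partition lives on 3 of the 4 nodes, and those 3 replicas do the work for requests on that partition regardless of which node the driver picks as coordinator.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class KeyspaceExample {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("10.0.0.1")   // placeholder contact point
                .build();
        Session session = cluster.connect();

        // RF=3 in a 4-node DC: each partition is stored on 3 of the 4 nodes,
        // so uneven node sizing still shows up on the replicas no matter how
        // the coordinator traffic is balanced by the driver.
        session.execute(
                "CREATE KEYSPACE IF NOT EXISTS my_app "                              // placeholder keyspace
              + "WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3}");

        cluster.close();
    }
}
```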