How to implement a client authentication solution with NoSQL (Cassandra)?

Posted on 2024-12-13 00:39:36

I am currently thinking about how to implement authentication for a web application with a NoSQL solution. The problem I encounter here is that most NoSQL solutions (e.g. Cassandra, MongoDB) can have delayed writes. For example, we write on node A, but it is not guaranteed that the write appears on node B at the same time. This follows logically from the approach behind these NoSQL solutions.

Now one idea would be to do no secondary reads (so everything goes through the master). This would probably work in MongoDB (where you actually have a master) but not in Cassandra (where all nodes are equal). But our application runs at several independent locations all over the world, so we need multi-master capability.

At the moment I am not aware of a solution with Cassandra where I could update data and be sure that subsequent reads (against any of the nodes) see the change. So how could one build authentication on top of these NoSQL solutions when the authentication request (read) may arrive at several nodes in parallel?

Thanks for your help!

Comments (1)

祁梦 2024-12-20 00:39:36

With respect to Apache Cassandra:

The ConsistencyLevel is an enum that controls both read and write behavior based on your schema definition. The different consistency levels have different meanings, depending on whether you're doing a write or a read operation. Note that if W + R > ReplicationFactor, where W is the number of nodes to block for on write and R the number to block for on reads, you will have strongly consistent behavior; that is, readers will always see the most recent write. Of these, the most interesting option is to do QUORUM reads and writes, which gives you consistency while still allowing availability in the face of node failures up to half of ReplicationFactor. Of course, if latency is more important than consistency, you can use lower values for either or both.

This is managed on the application side. To your question specifically, it comes down to how you design your Cassandra implementation, the replication factor across the Cassandra nodes, and how your application behaves on reads and writes.
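To make the W + R > ReplicationFactor rule concrete, here is a minimal sketch using the DataStax Python driver (cassandra-driver), assuming a replication factor of 3. The `auth` keyspace, `users` table, and contact point are assumptions for illustration, not something from the original question:

```python
# Minimal sketch: QUORUM writes + QUORUM reads so that W + R > ReplicationFactor.
# Assumes a keyspace "auth" with RF = 3 and a table users(username, password_hash).
from cassandra.cluster import Cluster
from cassandra import ConsistencyLevel
from cassandra.query import SimpleStatement

cluster = Cluster(["127.0.0.1"])      # contact point(s) of your cluster
session = cluster.connect("auth")     # hypothetical keyspace

# Write the credential record at QUORUM (blocks for N/2 + 1 replicas).
insert = SimpleStatement(
    "INSERT INTO users (username, password_hash) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.QUORUM,
)
session.execute(insert, ("alice", "bcrypt-hash-goes-here"))

# Read it back at QUORUM; with RF = 3 this means 2 + 2 > 3, so the read
# is guaranteed to see the latest write.
select = SimpleStatement(
    "SELECT password_hash FROM users WHERE username = %s",
    consistency_level=ConsistencyLevel.QUORUM,
)
row = session.execute(select, ("alice",)).one()
```

With RF = 3, QUORUM on both sides means two replicas must acknowledge each operation, which is exactly the read-your-writes property an authentication lookup needs.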

Write

  • ANY: Ensure that the write has been written to at least 1 node, including HintedHandoff recipients.
  • ONE: Ensure that the write has been written to at least 1 replica's commit log and memory table before responding to the client.
  • QUORUM: Ensure that the write has been written to N / 2 + 1 replicas before responding to the client.
  • LOCAL_QUORUM: Ensure that the write has been written to ReplicationFactor / 2 + 1 nodes within the local datacenter (requires NetworkTopologyStrategy).
  • EACH_QUORUM: Ensure that the write has been written to ReplicationFactor / 2 + 1 nodes in each datacenter (requires NetworkTopologyStrategy).
  • ALL: Ensure that the write is written to all N replicas before responding to the client. Any unresponsive replicas will fail the operation.
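Since the question mentions several independent locations around the world, the multi-datacenter write levels are the relevant ones. The following is only a sketch under assumed datacenter names (dc_eu, dc_us) and the same hypothetical auth.users table; it uses an EACH_QUORUM write so that a later LOCAL_QUORUM read in any datacenter sees the new credentials:

```python
# Sketch only: hypothetical datacenter names and keyspace/table.
# With NetworkTopologyStrategy, EACH_QUORUM blocks for a quorum of replicas
# in every datacenter, so a later LOCAL_QUORUM read anywhere sees the record.
from cassandra import ConsistencyLevel
from cassandra.query import SimpleStatement

# One-time schema setup (run once, e.g. session.execute(CREATE_KEYSPACE)).
CREATE_KEYSPACE = """
CREATE KEYSPACE IF NOT EXISTS auth
WITH replication = {'class': 'NetworkTopologyStrategy', 'dc_eu': 3, 'dc_us': 3}
"""

def write_credentials(session, username, password_hash):
    # Block until a quorum of replicas in *each* datacenter has acknowledged.
    stmt = SimpleStatement(
        "INSERT INTO auth.users (username, password_hash) VALUES (%s, %s)",
        consistency_level=ConsistencyLevel.EACH_QUORUM,
    )
    session.execute(stmt, (username, password_hash))
```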

Read

  • ANY: Not supported. You probably want ONE instead.
  • ONE: Will return the record returned by the first replica to respond. A consistency check is always done in a background thread to fix any consistency issues when ConsistencyLevel.ONE is used. This means subsequent calls will have correct data even if the initial read gets an older value. (This is called ReadRepair)
  • QUORUM: Will query all replicas and return the record with the most recent timestamp once it has at least a majority of replicas (N / 2 + 1) reported. Again, the remaining replicas will be checked in the background.
  • LOCAL_QUORUM: Returns the record with the most recent timestamp once a majority of replicas within the local datacenter have replied.
  • EACH_QUORUM: Returns the record with the most recent timestamp once a majority of replicas within each datacenter have replied.
  • ALL: Will query all replicas and return the record with the most recent timestamp once all replicas have replied. Any unresponsive replicas will fail the operation.
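And the corresponding read path for the authentication request itself, again only a sketch: a LOCAL_QUORUM read in whichever datacenter receives the login, paired with the EACH_QUORUM write above (or plain QUORUM in a single-datacenter setup). The bcrypt check and the table layout are assumptions:

```python
# Sketch of the authentication (read) path. Assumes writes were done at
# EACH_QUORUM, so a LOCAL_QUORUM read returns the most recent password hash.
import bcrypt
from cassandra import ConsistencyLevel
from cassandra.query import SimpleStatement

def authenticate(session, username, password):
    stmt = SimpleStatement(
        "SELECT password_hash FROM auth.users WHERE username = %s",
        consistency_level=ConsistencyLevel.LOCAL_QUORUM,
    )
    row = session.execute(stmt, (username,)).one()
    if row is None:
        return False
    # Compare the submitted password against the stored bcrypt hash.
    return bcrypt.checkpw(password.encode("utf-8"),
                          row.password_hash.encode("utf-8"))
```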