Scaling strategy for a service tier on .NET

Posted 2024-07-08 11:34:01


I'm working on a web product that is composed of many modular web applications. To the end user, it appears to be one single application, though various components are broken out into their own apps.

Part of the reasoning for this is so that it can be easily scaled horizontally across multiple application servers.

To facilitate easier horizontal scaling of the data tier, we are planning on using a web service layer in front of the database. This layer could be scaled out to N machines, and each instance of it would handle caching individually.

The idea is that an application would make a call to the service tier load balancer, which would assign the call to a service instance; that instance would then return the data from its cache, or connect to the database and query for it. This seems like the easiest forward-looking solution for scaling out without having to heavily modify application code.

[Databases x N]
         |
         \/
[Service tier x N machines]
         |
         \/
[Application tier x N machines]
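
To make the per-instance caching concrete, here is a minimal C# sketch of the cache-or-database lookup a single service instance would perform. The CachingProductService and IDbQuery names, the JSON payload, and the five-minute expiry are illustrative assumptions, not part of the original design.

using System;
using System.Collections.Concurrent;

// Stand-in for whatever data-access call the real service would make.
public interface IDbQuery
{
    string LoadProductJson(int productId);
}

public class CachingProductService
{
    private readonly IDbQuery _db;

    // Each service instance keeps its own local cache, as in the plan above.
    private readonly ConcurrentDictionary<int, Tuple<string, DateTime>> _cache =
        new ConcurrentDictionary<int, Tuple<string, DateTime>>();

    public CachingProductService(IDbQuery db)
    {
        _db = db;
    }

    public string GetProduct(int productId)
    {
        // Serve from this instance's cache while the entry is still fresh.
        Tuple<string, DateTime> entry;
        if (_cache.TryGetValue(productId, out entry) && entry.Item2 > DateTime.UtcNow)
            return entry.Item1;

        // Otherwise query the database and repopulate the local cache.
        string json = _db.LoadProductJson(productId);
        _cache[productId] = Tuple.Create(json, DateTime.UtcNow.AddMinutes(5));
        return json;
    }
}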

Some questions come up, though. I'd like to persist user sessions at the service level, so that each application only has to authenticate with a token; however, I'm uncertain how I'd maintain session data across all the service machines without introducing a single point of failure.
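
For the token idea itself, a minimal sketch of a session service that issues opaque tokens might look like the following. Keeping the sessions in an in-process dictionary is exactly what stops working (or silos sessions per machine) once the service tier is scaled out, which is the single-point-of-failure concern above. The type and member names here are illustrative assumptions.

using System;
using System.Collections.Concurrent;

public class UserSession
{
    public int UserId;
    public DateTime IssuedAt;
}

public class SessionService
{
    // In-process storage: fine on one box, a problem across N boxes.
    private readonly ConcurrentDictionary<string, UserSession> _sessions =
        new ConcurrentDictionary<string, UserSession>();

    // Called once after the user's credentials have been verified.
    public string SignIn(int userId)
    {
        string token = Guid.NewGuid().ToString("N"); // opaque token handed to the application
        _sessions[token] = new UserSession { UserId = userId, IssuedAt = DateTime.UtcNow };
        return token;
    }

    // Called by each application on every request that carries the token.
    public UserSession Authenticate(string token)
    {
        UserSession session;
        return _sessions.TryGetValue(token, out session) ? session : null;
    }
}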

Any ideas on how to pull this off? Any other ideas on the architecture? Has anyone else worked on designing a site that could potentially handle millions of hits per day?

EDIT: Not even an idea? :(


Comments (1)

夜巴黎 2024-07-15 11:34:01


You've described the perfect use case for a distributed caching mechanism such as memcached (http://www.danga.com/memcached) or the upcoming MS Velocity project (http://code.msdn.microsoft.com/velocity).

In the situation you describe where you have an increasing number of Service Tier instances each doing their own local caching, the usefulness of your cache decreases with each new box because each individual instance must retrieve the same data from the database to populate its local cache even if the same data was just accessed by another Service Tier instance. With memcached or Velocity the caching mechanism will intelligently combine unused RAM across all your servers into a single cache for all the Service Tier installations to share. That way only the first Service Tier instance to access a piece of data will need to use the database, and subsequent accesses by other Service Tier instances will pull the same data from cache.
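
As a rough illustration of that read-through pattern against a shared cache: the ISharedCache interface below is a hypothetical stand-in for a memcached or Velocity client (both expose key/value get and put operations), not a real client API, and the key format and expiry are assumptions.

using System;

// Hypothetical stand-in for a memcached/Velocity client.
public interface ISharedCache
{
    string Get(string key);                         // returns null on a miss
    void Put(string key, string value, TimeSpan ttl);
}

public class SharedCacheProductService
{
    private readonly ISharedCache _cache;
    private readonly Func<int, string> _loadFromDatabase; // stands in for the real data access

    public SharedCacheProductService(ISharedCache cache, Func<int, string> loadFromDatabase)
    {
        _cache = cache;
        _loadFromDatabase = loadFromDatabase;
    }

    public string GetProduct(int productId)
    {
        string key = "product:" + productId;

        // A value placed in the shared cache by any Service Tier instance is
        // visible to all of the others, so only the first reader touches the database.
        string cached = _cache.Get(key);
        if (cached != null)
            return cached;

        string json = _loadFromDatabase(productId);
        _cache.Put(key, json, TimeSpan.FromMinutes(5));
        return json;
    }
}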

Having this in place also answers the user session question, as you could easily use this same cache to store state values for user sessions and all of your Service Tier instances would have access to this same information.
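
Building on the same hypothetical ISharedCache stand-in from the sketch above, session data could be kept under the auth token as the key, so any Service Tier instance can validate a token issued by another. The key prefix, the 30-minute expiry, and the assumption that the session is serialized to a string are all illustrative.

using System;

public class SharedSessionStore
{
    private readonly ISharedCache _cache; // same hypothetical interface as the sketch above

    public SharedSessionStore(ISharedCache cache)
    {
        _cache = cache;
    }

    public void Save(string token, string serializedSession)
    {
        // Expiry policy is an assumption; 30 minutes here.
        _cache.Put("session:" + token, serializedSession, TimeSpan.FromMinutes(30));
    }

    public string Load(string token)
    {
        return _cache.Get("session:" + token); // null if expired or never issued
    }
}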

Hope this helps!

Adam
