Design decision - scaling out a web-based application architecture

Published 2024-09-05 16:29:09 · 584 characters · 10 views · 0 comments


This question is about a design decision. I am currently working on a web project that will have 40K users to start with and is expected to grow to 50M users within a couple of months (not concurrent users, though). I would like an architecture that can be scaled out easily without much effort.

To explain, I would like to use a trivial scenario. Let's say the User entities and services such as CreateUser, AuthenticateUser, etc., are simple method calls for the page controllers. But once traffic increases, authenticating users (or similar services related to User entities), for example, has to be moved out to a different internal server to spread the load. At the same time, using RPC calls over the network when the user count is 40K would be overkill.

My proposal is to use IPC initially and, when we need to scale out, switch internally to TCP-based RPC calls so that it can scale out easily. For example, I am referring to System.IO.Pipes.NamedPipeServerStream to start with, moving to a TcpListener later on.

If we have a proper design that encapsulates the above approach, it would be easy for us to scale services out onto multiple network servers while avoiding network calls when the user count is small.

Is this the best approach? Any suggestions would be great.

Note: Database scaling is definitely a second-phase optimization, so we already have an architectural design in place to easily partition the data when traffic increases. The primary bottleneck over that period will be the application servers.
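The "encapsulate the transport" idea in the question can be sketched as a service interface with two interchangeable implementations selected by configuration. The project itself is .NET; this is a hypothetical Python sketch (all names, the wire format, and the config keys are invented for illustration):

```python
# Hypothetical sketch: page controllers depend only on AuthService, so the
# switch from in-process calls to network RPC is a configuration change.
from abc import ABC, abstractmethod

class AuthService(ABC):
    @abstractmethod
    def authenticate(self, username: str, password: str) -> bool: ...

class InProcessAuthService(AuthService):
    """Direct method call -- used while the user count is small."""
    def __init__(self, users: dict):
        self._users = users

    def authenticate(self, username, password):
        return self._users.get(username) == password

class RemoteAuthService(AuthService):
    """Same contract, but forwards over TCP once we need to scale out."""
    def __init__(self, host: str, port: int):
        self._addr = (host, port)

    def authenticate(self, username, password):
        import json, socket
        with socket.create_connection(self._addr, timeout=2) as sock:
            sock.sendall(json.dumps({"u": username, "p": password}).encode())
            return json.loads(sock.recv(1024)) is True

def make_auth_service(config: dict) -> AuthService:
    # The single switch point: flip config when the app server becomes
    # the bottleneck; no controller code changes.
    if config.get("remote"):
        return RemoteAuthService(config["host"], config["port"])
    return InProcessAuthService(config["users"])
```

The same shape works whether the remote side is named pipes, raw TCP, or a web service, which is what makes the later migration cheap.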


Comments (3)

沐歌 2024-09-12 16:29:09


If what you're planning to do is (if I read it right) farm out the authentication and authorization work to a central server, then I think you're going to run into scalability problems if you try to do it with named pipes or even low-level TCP sockets. There's no reason why you can't reach these internal servers over regular web services or even TCP-channel-based WCF services.

The reason I would go this route is that invoking stateless web services (ASMX or WCF) allows you to put your "auth and auth" (authentication and authorization) server, as well as your user-management server (CreateUser, etc.), on a farm. So, as hits to these services increase, you can scale out the number of servers responding to these calls without having to change the client code.
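Why statelessness is what makes the farm work: if each request carries everything needed to handle it, any server behind the load balancer can respond, and adding servers requires no client changes. A minimal Python sketch of the idea (the shared secret and token scheme are invented for illustration, not what ASMX/WCF does internally):

```python
# Hypothetical sketch: stateless auth tokens. Every server in the farm
# holds the same secret, so no per-server session state is needed.
import hashlib
import hmac

SECRET = b"shared-farm-secret"  # deployed identically to every farm node

def issue_token(username: str) -> str:
    """Sign the username; the signature travels with the request."""
    sig = hmac.new(SECRET, username.encode(), hashlib.sha256).hexdigest()
    return f"{username}:{sig}"

def verify_token(token: str) -> bool:
    # No session store lookup: any server holding SECRET can verify,
    # which is exactly what lets the auth farm scale out horizontally.
    username, _, sig = token.partition(":")
    expected = hmac.new(SECRET, username.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

Contrast this with sticky sessions or in-memory state, where each added server only helps the clients pinned to it.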

顾忌 2024-09-12 16:29:09


In one of my projects in the travel industry (around 1M hits per day), we had a separate auth server farm. There were around four load-balanced servers at that time. Our business layer called the authentication web service (ASMX), passing the user credentials and getting XML results back. If the number of users increases, we can scale out the auth farm further. IMHO, using web services over HTTP (on an intranet) gives more of a performance benefit than raw TCP.
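The client side the answer describes (post credentials, read an XML result) is a thin wrapper. A hedged Python sketch of just the result-parsing half, with an invented XML shape since the actual ASMX contract isn't shown:

```python
# Hypothetical sketch: parse an auth web service's XML response.
# The element names here are invented; the real ASMX contract would
# define its own schema.
import xml.etree.ElementTree as ET

def parse_auth_result(xml_body: str) -> bool:
    """Return True if the service reported a successful authentication."""
    # Expected shape (assumed): <AuthResult><Success>true</Success></AuthResult>
    root = ET.fromstring(xml_body)
    return root.findtext("Success", default="false").strip().lower() == "true"
```

Keeping parsing in one place like this means the business layer never sees the transport, matching the farm setup described above.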

淡看悲欢离合 2024-09-12 16:29:09


In my experience there's always a tension between "you don't need it now, so don't waste effort on it" and "here be dragons".

Your scaling strategy is, when the need arises, to use particular remoting techniques to offload pieces of work to other hosts. Sounds like it might work. [In passing, another approach is just to have many parallel instances of the same thing, so keeping everything local - my instinct is that this might be better. But let's stick with your plan for now ...]

My general recommendation is to attack risk early. So in this case you are intending in the future to use a remoting technology to offload some work. Adding this new (to your system) technology will have (at least) two impacts:

  1. New failure modes
  2. Increased latency

Oh, and there's the (admittedly unlikely) possibility that the remoting strategy doesn't work! You may not see the scaling benefits you expect. Performance is notoriously unintuitive.

So I see risk here, and I want to address it now. I would say do the remoting immediately, even though it's not yet necessary. You will then bake the increased latency into all your performance testing, and the new failure modes into all your resilience testing. You are doing this while the pressure is off and the user population is low. You can also do some testing of the actual scalability.
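The "bake latency and failure modes into testing" advice can be exercised even before real remoting exists, by wrapping service calls in a fault injector during tests. A hypothetical Python sketch (the wrapper and its parameters are invented for illustration):

```python
# Hypothetical sketch: make local calls behave like remote ones in tests,
# so resilience and performance suites see latency and failures early.
import random
import time

def with_faults(call, latency_s=0.05, failure_rate=0.1, rng=None):
    """Wrap a service call with remote-like latency and random failures."""
    rng = rng or random.Random()

    def wrapped(*args, **kwargs):
        time.sleep(latency_s)            # impact 2: increased latency
        if rng.random() < failure_rate:  # impact 1: new failure modes
            raise ConnectionError("simulated remote failure")
        return call(*args, **kwargs)

    return wrapped
```

Running the existing test suite against wrapped services forces callers to grow timeouts, retries, and fallbacks while the user population is still small.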
