Is there a way to write a custom load balancer in ASP.NET?

Posted 2024-12-29 15:10:31


I have been working on implementing a (1) server, (n) worker role servers setup via WCF. One server handles requests, passes the work on via WCF to an available Worker Role, which processes the request, and passes the data back to the server. It's a bit too complicated for what I am able to handle. I'm trying to simplify the solution, and thought I could remove the WCF component by creating (n) servers, each capable of handling both the worker role service and the web server role. Assume each worker server is capable of only (1) worker role.

So that way, when a request comes in, some round-robin style handler picks an available server and forwards the HTTP request to that server/worker. That server/worker does the work, and returns the data directly to the requester.

Is this a feasible approach? I realize that for some advanced developers, the WCF solution would pose no problem. I'm asking if there is a simpler, more hacked-together approach that would allow me, with my limited ability, to create a single-server solution, then replicate and load balance multiple copies of that solution?

Any suggestions mucho apprecianado!
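For what it's worth, the round-robin selection the question describes is only a few lines. This is a minimal sketch (Python purely for illustration; the worker addresses are placeholders, and a real forwarder would also need health checks, as the answers below point out):

```python
import itertools

# Hypothetical worker pool; these addresses are placeholders.
WORKERS = [
    "http://worker1:8080",
    "http://worker2:8080",
    "http://worker3:8080",
]

_rotation = itertools.cycle(WORKERS)

def pick_worker():
    """Round-robin: each call returns the next worker in the pool."""
    return next(_rotation)
```

The hard parts, as the answers note, are everything this sketch omits: detecting dead servers, tracking per-server load, and retrying failed requests.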


Answers (3)

安人多梦 2025-01-05 15:10:31

What you describe is precisely what Load Balancers are built for - they attempt to distribute incoming requests across a pool of available servers.

Many hosting companies offer hosting plans involving multiple servers which you can choose to load-balance incoming requests against.

For example: Rackspace offer load balancing as an optional feature of some of their hosting plans.

If you host a site with more than one web-role instance in Microsoft's Azure cloud, your site is automatically load balanced for you. You can also build your site so that it is dynamically load-balanced across multiple geographical regions, so that requests originating from, for example, Asia are routed to a datacenter in Asia, reducing latency and inter-datacenter bandwidth.

Also, consider introducing queueing/message-bus between your front-end website and your back-end batch/intensive workload processing. This way you can independently scale the front and back ends of your system.

Having said all of the above, don't over-engineer your solution - focus on building a stable, solid, reliable, efficient system, then monitor and measure its performance and tune it where appropriate. Otherwise, you could spend valuable time and effort implementing features/tweaks that don't actually benefit the site or the user!


Update 2012-01-31 based on OP's comments:

If you want to make your worker roles perform one task at a time and only go back for another piece of work when they're no longer busy, I suggest you invert your architecture:

(Diagram: https://i.sstatic.net/Ms0SF.png — the front-end queues incoming work and workers pull items from it.)

Instead of having some front-end server try to work out which of the workers is "busy" and distribute work accordingly, consider having your front-end server queue incoming messages into an "incoming" queue.

Workers "pull" a new work-item from the front-end, perform whatever work is required and then inform the front-end that the work is complete and ask for another work-item.

The beauty with this approach is that it can scale in a linear fashion and can be HIGHLY resilient.

When a worker "pulls" a new work item, the front-end timestamps the message and moves it to a secondary "pending" queue. Workers inform the front-end when they're done with a work-item; the front-end moves completed items to a "completed" queue (or deletes them if it doesn't care).

The front-end can then run a periodic scan of the "pending" queue, looking for messages that have been waiting too long and can return those messages to the "incoming" queue if they've been pending for too long.
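The three-queue scheme described above can be sketched in a few lines. This is a toy in-memory model (Python for illustration only; the class and method names are my own, and a real system would use a durable store or a message bus such as Azure Service Bus rather than in-process structures):

```python
import time
from collections import deque

class WorkQueue:
    """Toy model of the incoming/pending/completed queues described above."""

    def __init__(self, timeout_seconds=60):
        self.incoming = deque()   # work waiting to be pulled by a worker
        self.pending = {}         # item_id -> (item, timestamp when pulled)
        self.completed = []       # finished items (or delete, if you don't care)
        self.timeout = timeout_seconds

    def enqueue(self, item_id, item):
        self.incoming.append((item_id, item))

    def pull(self):
        """A worker pulls the next item; it moves to 'pending' with a timestamp."""
        if not self.incoming:
            return None
        item_id, item = self.incoming.popleft()
        self.pending[item_id] = (item, time.time())
        return item_id, item

    def complete(self, item_id):
        """Worker reports the item done; move it to 'completed'."""
        item, _ = self.pending.pop(item_id)
        self.completed.append((item_id, item))

    def requeue_stale(self, now=None):
        """Periodic scan: return items that have waited too long to 'incoming'."""
        now = time.time() if now is None else now
        stale = [item_id for item_id, (_, ts) in self.pending.items()
                 if now - ts > self.timeout]
        for item_id in stale:
            item, _ = self.pending.pop(item_id)
            self.incoming.append((item_id, item))
        return len(stale)
```

The `requeue_stale` scan is what gives the scheme its resilience: a crashed worker's item simply reappears in the incoming queue after the timeout.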

Queues can be A LOT of fun :) However, building such a queueing system can be complex and making it truly reliable can be time consuming and costly.

Luckily, you can take advantage of some very proficient message-bus implementations that'll provide you with 90% of what you'll need to make this happen! My favorite is Microsoft's cloud-based Azure Message Bus which provides you a pretty bullet-proof durable-messaging, pub-sub and queueing infrastructure that's ideally suited to your scenario.

HTH.

憧憬巴黎街头的黎明 2025-01-05 15:10:31


I would not recommend this. Balancing seems simple to most developers at first ("hey I'll just keep track of each request and forward the next one to the next server in line, etc") but in reality it is complicated if it's not completely trivial. You need to think about maintaining load quotas per server, handling servers that go down, etc.

If you're already running Server 2008 it's probably cheaper and easier (and far more performant) to use the NLB features of the OS instead of coming up with your own. This for example is a good walkthrough of setting up an NLB cluster.

Ultimately of course the approach is up to you, but I think using the right tools for the job is always a good idea. Re-inventing round robin IP clustering in a WCF service seems like a waste of time if you have that baked into the OS already.

Good luck :)

岁吢 2025-01-05 15:10:31


I would suggest that you take a completely different approach.

Namely, instead of pushing a job to a particular server, let each server in the farm poll for jobs as they are available.

The primary reason for this is your requirement that each server is "able to serve only 1 client at a time".


So, to set things up:

  1. A request comes in.
  2. The request is logged to a Work table.
  3. One of your work servers makes a request for the next job.
  4. It is assigned to that server.
  5. Once the work is complete, the server marks it as done.

Now, this gives you some options. First off it's trivial to reassign... Just clear out what server the job is currently assigned to. You might have a monitor watching this table and, if a job is taking "too long" then you can simply allow a different server to grab it.

Further, you can add or remove worker servers at will, without having to notify some type of controlling server that a machine is now online or offline.


For a little more robustness you could have each worker server check in with your database to indicate that it is ready for work. The servers would then check in once every few seconds to see if anything has been assigned.

A SQL job could be executing every so often that assigns the work. It could also be responsible for reassigning in the event a machine has taken too long to process it.
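A rough sketch of this work-table approach (SQLite in-memory here purely for illustration; the table and column names are made up, and note that a real multi-server setup needs an atomic claim — e.g. an `UPDATE ... OUTPUT` in SQL Server — where this single-process sketch does a simple select-then-update):

```python
import sqlite3
import time

# Illustrative schema: assigned_to is NULL while a job is unclaimed.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE work (
    id          INTEGER PRIMARY KEY,
    payload     TEXT,
    assigned_to TEXT,
    assigned_at REAL,
    done        INTEGER DEFAULT 0)""")

def add_job(payload):
    """Step 2: log the request to the Work table."""
    conn.execute("INSERT INTO work (payload) VALUES (?)", (payload,))

def claim_next(server_name):
    """Steps 3-4: a worker polls for the next unassigned job and claims it."""
    row = conn.execute(
        """SELECT id, payload FROM work
           WHERE assigned_to IS NULL AND done = 0
           ORDER BY id LIMIT 1""").fetchone()
    if row is None:
        return None
    conn.execute(
        "UPDATE work SET assigned_to = ?, assigned_at = ? WHERE id = ?",
        (server_name, time.time(), row[0]))
    return row

def mark_done(job_id):
    """Step 5: the server marks the job as done."""
    conn.execute("UPDATE work SET done = 1 WHERE id = ?", (job_id,))

def reassign_stale(max_age_seconds, now=None):
    """Clear assignments that took too long so another server can grab them."""
    now = time.time() if now is None else now
    cur = conn.execute(
        """UPDATE work SET assigned_to = NULL, assigned_at = NULL
           WHERE done = 0 AND assigned_to IS NOT NULL
             AND ? - assigned_at > ?""",
        (now, max_age_seconds))
    return cur.rowcount
```

The `reassign_stale` step is the "monitor watching this table" described above: clearing the assignment is all it takes to let a different server pick the job up.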
