Architecture question: centralized server vs. localized server approach

Posted 2024-08-06 01:38:13

My question is related to the architecture of the application I am working on right now. Currently, we install a server locally on each box; that server gets data from the client, does some processing on it, and generates output, and a receipt is printed based on the output data. That output data is then stored in a centralized database via an hourly upload from the local servers on the client boxes.
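For illustration, here is a minimal sketch of what that hourly upload step could look like. The table layout, column names, and the use of sqlite3 on both ends are assumptions made to keep the example self-contained and runnable, not details of our actual system:

```python
import sqlite3
import time
from datetime import datetime, timezone

# Hypothetical hourly sync job: push locally stored shipment/receipt rows to
# the central database, then mark them as uploaded. sqlite3 stands in for both
# databases so the sketch is self-contained; a real deployment would use the
# actual local store and an RDBMS driver for the centralized database.

LOCAL_DB = "local_station.db"        # assumed local store on the client box
CENTRAL_DB = "central_warehouse.db"  # stand-in for the centralized database


def sync_once() -> int:
    local = sqlite3.connect(LOCAL_DB)
    central = sqlite3.connect(CENTRAL_DB)
    try:
        local.execute(
            """CREATE TABLE IF NOT EXISTS shipments (
                   id TEXT PRIMARY KEY, station TEXT, label_data TEXT,
                   created_at TEXT, uploaded INTEGER DEFAULT 0)"""
        )
        central.execute(
            """CREATE TABLE IF NOT EXISTS shipments (
                   id TEXT PRIMARY KEY, station TEXT, label_data TEXT,
                   created_at TEXT, uploaded_at TEXT)"""
        )

        rows = local.execute(
            "SELECT id, station, label_data, created_at FROM shipments "
            "WHERE uploaded = 0"
        ).fetchall()

        now = datetime.now(timezone.utc).isoformat()
        central.executemany(
            "INSERT OR REPLACE INTO shipments VALUES (?, ?, ?, ?, ?)",
            [(*row, now) for row in rows],
        )
        central.commit()

        # Mark rows as uploaded only after the central write commits, so a
        # failed upload is simply retried on the next hourly run.
        local.executemany(
            "UPDATE shipments SET uploaded = 1 WHERE id = ?",
            [(row[0],) for row in rows],
        )
        local.commit()
        return len(rows)
    finally:
        local.close()
        central.close()


if __name__ == "__main__":
    while True:
        print(f"uploaded {sync_once()} row(s)")
        time.sleep(3600)  # hourly, as described above
```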

My concern is whether it is good practice to install a server locally on each client box, or whether a centralized server is the better approach. When I asked, it was suggested that if we used a centralized server, latency, speed, and bandwidth would all become considerations, since each and every client request would hit that server, increasing execution time, consuming bandwidth, and badly affecting latency.

Note:

The application's line of business is shipping and supply chain logistics: it generates all the routing, rating, and other label-related information needed to ship a package from source to destination. For example, Apple and Dell ship millions and millions of packages, and this server does all the work of generating the label, routing, and rating details... Hope this makes the picture clearer :)

Here the client processes millions and millions of transactions, so the rate of requests hitting the server is very high.

Thanks.

Comments (4)

木槿暧夏七纪年 2024-08-13 01:38:13

It depends on what sort of system you have and what your requirements are.

One of the advantages of a centralised server model is that you can scale the number of clients and the number of servers independently to make the most of your hardware, and it also allows for redundancy in the event one of your servers falls over. For instance, Web services in an SOA environment are suited to this model. This does come with an increase in latency, so if you have real-time systems with SLAs that require responses in the couple-of-milliseconds range, this probably isn't the way to go.

Since it appears that you are after really fast response times, what you have now is probably quite a reasonable solution.

The syncing of the data back to the database on a schedule could be done differently if you were looking for a way to make that closer to real time; perhaps a message queue would work, as sketched below. This would probably make things a little simpler as well.
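To make the message-queue idea concrete, here is a minimal in-process sketch of the pattern; the queue, function names, and payload fields are hypothetical stand-ins, and a real deployment would use an external broker (e.g. RabbitMQ or Kafka) with the consumer running on the central side:

```python
import json
import queue
import threading
import time

# In-process stand-in for a message broker, just to illustrate the pattern:
# each completed transaction is published immediately and a consumer persists
# it centrally, instead of batching hourly uploads.

shipment_events: "queue.Queue[str]" = queue.Queue()


def publish_shipment(shipment_id: str, label_data: dict) -> None:
    """Called by the local server right after it prints a receipt."""
    shipment_events.put(json.dumps({"id": shipment_id, "label": label_data}))


def central_consumer() -> None:
    """Drains events and persists them centrally (here it just prints)."""
    while True:
        event = json.loads(shipment_events.get())
        # Replace this print with an INSERT into the centralized database.
        print("central store received:", event["id"])
        shipment_events.task_done()


threading.Thread(target=central_consumer, daemon=True).start()

# Example: the local server publishes as each package is processed.
for i in range(3):
    publish_shipment(f"PKG-{i:04d}", {"route": "A->B", "rate_usd": 12.50})
    time.sleep(0.1)

shipment_events.join()  # wait until the consumer has processed everything
```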

伊面 2024-08-13 01:38:13

Client-server environments (web included) have advantages and disadvantages, so the context of your application is critical. In your scenario, you have distributed servers, so the workload is balanced. However, you have a nightmare in terms of maintaining each server (software, operations, reliability, etc.). A centralized server provides better maintenance/monitoring/etc., but also carries an increased workload.

The answer for your situation depends greatly on the needs of your application. While millions of transactions sound like a lot, well-designed applications can handle that load quite reasonably. However, you may be sending a substantial amount of data in those transactional requests, which might make that process onerous and unreliable. Again, application context is very important.

Based on the notes you've supplied, it sounds as if there is some local server processing that handles real-time transactions, while processed/summarized data loads to a central DB happen asynchronously on a schedule. That's certainly not a poor approach, although it does increase environmental complexity.
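As a rough, hypothetical sketch of what such a scheduled, summarized load might look like (the field names and the hour/destination roll-up are assumptions, not details taken from the question):

```python
from collections import defaultdict
from datetime import datetime
from typing import Iterable

# Instead of shipping every raw transaction to the central database, the local
# server rolls transactions up (here by hour and destination) and uploads only
# the summary rows on its schedule.


def summarize(transactions: Iterable[dict]) -> list[dict]:
    buckets: dict[tuple[str, str], dict] = defaultdict(
        lambda: {"packages": 0, "total_rate_usd": 0.0}
    )
    for tx in transactions:
        hour = datetime.fromisoformat(tx["created_at"]).strftime("%Y-%m-%dT%H:00")
        key = (hour, tx["destination"])
        buckets[key]["packages"] += 1
        buckets[key]["total_rate_usd"] += tx["rate_usd"]
    return [
        {"hour": hour, "destination": dest, **totals}
        for (hour, dest), totals in buckets.items()
    ]


if __name__ == "__main__":
    sample = [
        {"created_at": "2009-08-06T10:15:00", "destination": "DFW", "rate_usd": 8.20},
        {"created_at": "2009-08-06T10:40:00", "destination": "DFW", "rate_usd": 9.10},
        {"created_at": "2009-08-06T11:05:00", "destination": "SFO", "rate_usd": 12.00},
    ]
    for row in summarize(sample):
        print(row)  # rows like these would be inserted into the central DB
```

Uploading rows like these instead of every raw transaction keeps the scheduled transfer small even at millions of transactions per day.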

I will gladly edit my response if you can supply greater detail about your application.

Hope this helps.

梦幻的味道 2024-08-13 01:38:13

Both approaches can work successfully.

The drawback of a store-and-forward system is that the central location will not have up-to-date information about what's going on at a shipping station. The technical drawbacks of a more fully connected, centralized system are not necessarily bandwidth and transaction throughput, since these can be mitigated with more resources (it's a cost problem, not a technical problem), but a fully connected system has more points of failure and no local fallback option.

On the cost side, although fatter clients have lower bandwidth costs, administering the clients increases management costs. Typically the management costs, while they can be mitigated, are labor and support costs, which often outweigh the commodity technology costs.

疯到世界奔溃 2024-08-13 01:38:13

As others have said, it all depends on what you're doing.

However, the biggest thing to look at is how many times you're crossing machine boundaries. If you can minimize that, you'll be in pretty good shape. In general, I'd avoid RPC mechanics whenever possible, as that will be two machine boundary crossings :)
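A small sketch of that point, counting boundary crossings for per-package calls versus batched calls to a stubbed-out remote rating service; the payload shape and batch size are hypothetical:

```python
from typing import Callable

# Calling a remote rating service once per package means one round trip per
# package, while batching sends many packages per machine-boundary crossing.
# The transport is stubbed out so the example is runnable without a network.


def rate_per_package(
    packages: list[dict], call_remote: Callable[[list[dict]], list[dict]]
) -> list[dict]:
    # One network round trip per package: N boundary crossings.
    return [call_remote([pkg])[0] for pkg in packages]


def rate_batched(
    packages: list[dict],
    call_remote: Callable[[list[dict]], list[dict]],
    batch_size: int = 500,
) -> list[dict]:
    # One round trip per batch: roughly N / batch_size boundary crossings.
    results: list[dict] = []
    for i in range(0, len(packages), batch_size):
        results.extend(call_remote(packages[i : i + batch_size]))
    return results


if __name__ == "__main__":
    calls = 0

    def fake_remote(batch: list[dict]) -> list[dict]:
        global calls
        calls += 1  # count boundary crossings instead of doing real network I/O
        return [{**pkg, "rate_usd": 10.0} for pkg in batch]

    packages = [{"id": f"PKG-{i}"} for i in range(2000)]

    rate_per_package(packages, fake_remote)
    print("per-package round trips:", calls)  # 2000

    calls = 0
    rate_batched(packages, fake_remote)
    print("batched round trips:", calls)      # 4
```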

The issue with having a 'server' on each local machine is simple - how do you maintain consistent state?

Also, your network topology will be an important factor. If everything's on a local subnet (ideally on the same switch), latency won't be an issue unless you have horribly designed network code. If you're going over the cloud, it's a different story.
