Scaling Node.js across multiple cores / servers

Posted 2024-10-25 15:57:10

OK, so I have an idea I want to pursue, but before I do I need to understand a few things fully.

Firstly, the way I think I'm going to go ahead with this system is to have three servers, described below:

The first server will be my web front end; this is the server that will be listening for connections and responding to clients. This server will have 8 cores and 16 GB of RAM.

The second server will be the database server; pretty self-explanatory really: connect to the host and set/get data.

The third server will be my storage server; this is where downloadable files will be stored.

My first question is:

  • On my front-end server I have 8 cores; what's the best way to scale Node so that the load is distributed across the cores?

My second question is:

  • Is there a system out there that I can drop into my application framework that will allow me to talk to the other cores and pass messages around to save I/O?

And my final question:

  • Is there any system I can use to help move content from my storage server to fulfil a request on the front-end server with as little overhead as possible? Speed is a concern here, as we would have 500+ clients downloading and uploading concurrently at peak times.

I have finally convinced my employer that Node.js is extremely fast and is the latest in programming technology, and that we should invest in a platform for our intranet system, but he has requested detailed documentation on how this could be scaled across the current hardware we have available.

Comments (3)

绝影如岚 2024-11-01 15:57:10

"On my front end server, I have 8 cores, what's the best way to scale node so that the load is distributed across the cores?"

Take a look at the Node.js cluster module, which is a multi-core server manager.
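
For reference, here is a minimal sketch of that approach, forking one worker per core with the built-in cluster module (the port number 8000 is just an illustrative choice, not something from the original setup):

    var cluster = require('cluster');
    var http = require('http');
    var os = require('os');

    if (cluster.isMaster) {
      // The master only manages workers: fork one per CPU core.
      for (var i = 0; i < os.cpus().length; i++) {
        cluster.fork();
      }
      // Restart a worker if it dies so all cores stay busy.
      cluster.on('exit', function (worker) {
        console.log('worker ' + worker.process.pid + ' died, restarting');
        cluster.fork();
      });
    } else {
      // Each worker runs its own HTTP server on the same port; the
      // cluster module distributes incoming connections across them.
      http.createServer(function (req, res) {
        res.writeHead(200);
        res.end('handled by pid ' + process.pid + '\n');
      }).listen(8000);
    }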

鯉魚旗 2024-11-01 15:57:10

Firstly, I wouldn't describe the setup you propose as 'scaling'; it's more like 'spreading'. You only have one app server serving the requests. If you add more app servers in the future, then you will have a scaling problem.

I understand that Node.js is single-threaded, which implies that it can only use a single core. How (or whether) you can scale it is not my area of expertise, so I will leave that part to someone else.

I would suggest NFS-mounting a directory on the storage server to the app server. NFS has relatively low overhead, and you can then access the files as if they were local.
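
To illustrate the idea, here is a rough sketch of serving files from such a mount over HTTP by streaming them, which keeps memory use low even with many concurrent downloads; the mount point /mnt/storage and the port are assumptions for the example only:

    var http = require('http');
    var fs = require('fs');
    var path = require('path');

    var STORAGE_ROOT = '/mnt/storage'; // hypothetical NFS mount point

    http.createServer(function (req, res) {
      // Crude path handling, good enough for a sketch only.
      var file = path.join(STORAGE_ROOT, path.basename(req.url));
      var stream = fs.createReadStream(file);

      stream.on('open', function () {
        res.writeHead(200, { 'Content-Type': 'application/octet-stream' });
        // pipe() respects back-pressure: it reads from the NFS mount
        // only as fast as the client can consume the data.
        stream.pipe(res);
      });

      stream.on('error', function () {
        if (!res.headersSent) res.writeHead(404);
        res.end();
      });
    }).listen(8080);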

十二 2024-11-01 15:57:10

Concerning your first question: use cluster (we already use it in a production system, and it works like a charm).

When it comes to worker messaging, I cannot really help you out, but your best bet is cluster too. Maybe there will be some functionality that provides "inter-core" messaging across all cluster workers in the future (I don't know the roadmap of cluster, but it seems like an idea).
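
For what it's worth, cluster already provides a basic message channel between the master and each worker (workers cannot talk to each other directly, so the master has to relay); a minimal sketch of that primitive:

    var cluster = require('cluster');

    if (cluster.isMaster) {
      var worker = cluster.fork();
      // The master receives messages from the worker and can reply
      // (or relay them to other workers).
      worker.on('message', function (msg) {
        console.log('master got:', msg);
        worker.send({ reply: 'pong' });
      });
    } else {
      process.on('message', function (msg) {
        console.log('worker got:', msg);
      });
      // Workers talk to the master via process.send().
      process.send({ hello: 'ping from worker ' + process.pid });
    }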

For your third requirement, I'd use a low-overhead protocol like NFS or (if you can go really crazy when it comes to infrastructure) a high-speed SAN backend.

One more piece of advice: use MongoDB as your database backend. You can start with low-end hardware and scale your database instance out with ease using MongoDB's sharding/replica set features (if that is some kind of requirement).
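
If you go that route, pointing Node at a replica set is mostly a matter of the connection string; a minimal sketch using the official mongodb driver (older 2.x-style callback API; the hostnames, database name and replica set name rs0 are placeholders):

    var MongoClient = require('mongodb').MongoClient;

    // List several replica set members so the driver can fail over.
    var uri = 'mongodb://db1.example.local:27017,db2.example.local:27017/intranet?replicaSet=rs0';

    MongoClient.connect(uri, function (err, db) {
      if (err) throw err;
      // 'files' is just a placeholder collection for the example.
      db.collection('files').findOne({}, function (err, doc) {
        console.log(doc);
        db.close();
      });
    });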
