I am developing a Next.js application. It will be deployed on Azure, where I will have Node.js installed and will run the next start -p 4000 command.
What I would like to know is how Next.js handles heavy traffic. Namely, if there are something like 20k users going through my site, is this something that Next.js can handle out of the box, or should I dockerize and orchestrate multiple Node.js Docker images, each running a Next.js application?
Or does Next.js serve static files through my CDN, so that I do not have to care about the traffic stress on the Node.js server where Next.js is running?
Hope my question makes sense.
No magic number
There is no set capacity limit that can be pulled out of a hat. Next.js, and Node.js apps in general, are pretty efficient at handling multiple connections, but how heavy your load is depends on your site. How many simultaneous connections you can "handle" also depends on how much latency you find acceptable. For example, your server may be able to handle 40k simultaneous requests with 1 second of latency, but only 5k simultaneous requests with 100ms of latency.
Factors affecting capacity
How much traffic your server can handle will depend on things like how much work each request requires (server-rendering a page is heavier than serving a static or cached file), whether your workload is CPU-bound or IO-bound, the hardware your server runs on, and how much latency you consider acceptable.
Estimating capacity
Your dev machine should, theoretically, be slower than your production server, so you can get a lower bound on the capacity of your server by load testing it. You can use tools like Autocannon or Loadtest to ballpark your capacity. If you start with only a few simultaneous connections and ramp up, you should reach a point where you see the latency suddenly increase (latency should be more or less consistent until then). This is when you are starting to hit a limit.
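As an illustration only, a stepped ramp-up like that could be scripted with autocannon's programmatic API; the URL, connection counts and durations below are placeholders for your own setup, not recommended values:

```js
// loadtest.js — a minimal sketch of a stepped load test against a locally
// running `next start -p 4000`, using autocannon's Node API.
const autocannon = require('autocannon');

async function ramp() {
  // Increase the number of simultaneous connections step by step and watch
  // where latency starts to climb sharply; that is roughly where the limit is.
  for (const connections of [10, 50, 100, 250, 500]) {
    const result = await autocannon({
      url: 'http://localhost:4000', // placeholder: your local Next.js server
      connections,                  // simultaneous connections for this step
      duration: 15,                 // seconds per step
    });
    console.log(
      `${connections} connections -> avg latency ${result.latency.average} ms, ` +
      `p99 ${result.latency.p99} ms`
    );
  }
}

ramp();
```

Make sure you test a production build (next build followed by next start), not the dev server, since next dev is much slower and will skew the numbers.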
Expanding your capacity
Threadpool
Node.js runs your JavaScript on a single thread, but some asynchronous operations (file system access, DNS lookups, some crypto and compression work) run in the libuv thread pool. When Node.js is waiting on that kind of IO, a libuv thread is doing the work behind the scenes. When the libuv thread pool is full, Node.js has to wait for a thread to become available before another async IO task can be started, which slows everything down.
The default thread pool size in Node.js is quite small (4), so increasing it can be quite beneficial. You can find more information on tuning the libuv threadpool size here and here.
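As a sketch (the value 16 is only an illustration, not a recommendation, and the exact behaviour can vary by platform), the pool size is controlled through the UV_THREADPOOL_SIZE environment variable, which has to be set before the threadpool is first used:

```js
// Option 1: set the variable in the environment before launching the server,
// e.g.  UV_THREADPOOL_SIZE=16 next start -p 4000
//
// Option 2: if you use a custom server entry file, set it at the very top,
// before anything that touches the libuv threadpool (fs, dns, crypto, zlib)
// has run. The value 16 is only an example; measure before and after changing it.
process.env.UV_THREADPOOL_SIZE = '16';
```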
Other concerns
Because you specifically mentioned Docker in your question, remember that Docker is only a deployment strategy and does not by itself help alleviate any load. If you're bound by the threadpool limit, then load balancing across multiple Docker instances on the same machine will help until you hit one of the other caps. If you're already CPU-bound or IO-bound, then multiple instances running on the same server won't help. At that point you'll need to either vertically scale your server machine or add more machines.
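For completeness, one way (a sketch only, and just one of several options) to run multiple instances on a single machine without Docker is Node's built-in cluster module in front of Next.js's programmatic custom-server API. This assumes the app has already been built with next build:

```js
// cluster-server.js — a minimal sketch of running one Next.js worker per CPU
// core on a single machine, using Node's built-in cluster module and the
// Next.js custom-server API. Assumes `next build` has already been run.
const cluster = require('node:cluster');
const http = require('node:http');
const os = require('node:os');
const next = require('next');

const port = 4000;

if (cluster.isPrimary) {
  // The primary process only supervises: fork one worker per CPU core
  // and restart any worker that dies.
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} exited, forking a replacement`);
    cluster.fork();
  });
} else {
  // Each worker runs its own Next.js server on the shared port; the cluster
  // module distributes incoming connections across the workers.
  const app = next({ dev: false });
  const handle = app.getRequestHandler();
  app.prepare().then(() => {
    http.createServer((req, res) => handle(req, res)).listen(port, () => {
      console.log(`worker ${process.pid} listening on port ${port}`);
    });
  });
}
```

The same caveat as above applies: this only helps while the bottleneck is per-process (for example the threadpool); once the machine itself is CPU-bound or IO-bound, you need a bigger machine or more machines.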