Why use nginx with Catalyst/Plack/Starman?
I am trying to deploy my little Catalyst web app using Plack/Starman. All the documentation seems to suggest I want to use this in combination with nginx. What are the benefits of this? Why not use Starman straight up on port 80?
It doesn't have to be nginx in particular, but you want some kind of frontend server proxying to your application server, for a few reasons (a config sketch follows the list):
So that you can run the Catalyst server on a high port, as an ordinary user, while running the frontend server on port 80.
To serve static files (ordinary resources like images, JS, and CSS, as well as any sort of downloads you might want to use X-Sendfile or X-Accel-Redirect with) without tying up a Perl process for the duration of the download.
It makes things easier if you want to move on to a more complicated config involving e.g. Edge Side Includes, having the web server serve directly from memcached or mogilefs (both things nginx can do), or a load-balancing / HA config.
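As a concrete illustration, here is a minimal sketch of such a frontend config in nginx. Everything in it is assumed for the example: the hostname, the paths, and a Starman backend started by an ordinary user with something like starman --listen 127.0.0.1:5000 --workers 4 myapp.psgi. Adjust all of these to your own setup:

    server {
        listen 80;
        server_name example.com;            # hypothetical hostname

        # Serve static assets directly; no Perl process is touched.
        # Assumes Catalyst's usual layout, with files under root/static.
        location /static/ {
            root /var/www/myapp/root;       # hypothetical path
            expires 30d;
        }

        # Target for X-Accel-Redirect: the app authorizes the download,
        # then nginx streams the file itself.
        location /downloads/ {
            internal;                       # not reachable from outside
            root /var/www/myapp;            # hypothetical path
        }

        # Everything else is proxied to Starman on its high port.
        location / {
            proxy_pass http://127.0.0.1:5000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }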
I asked this question on #plack and got the following response from @nothingmuch (I added formatting):
Another reason is that a lightweight frontend server (even Apache is fine) consumes much less memory per connection than a typical Starman process (a couple of MB vs. tens of MB, or more than 100 MB). Since a connection stays open for some time, especially if you use keep-alive connections, you can support a large number of simultaneous connections with far less RAM. Just make sure the buffer size of the proxying frontend server is large enough to pull a typical HTTP response from the backend in one go; the backend is then free to process the next request.
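In nginx, the buffering behaviour described above is controlled by the proxy buffer directives. A minimal sketch, reusing the hypothetical 127.0.0.1:5000 backend from the earlier example; the sizes are illustrative rather than tuned recommendations, and should be large enough to hold one of your typical responses:

    location / {
        proxy_pass http://127.0.0.1:5000;
        proxy_buffering on;        # read the whole backend response eagerly
        proxy_buffer_size 8k;      # buffer for the response headers
        proxy_buffers 16 32k;      # per-connection buffers for the body
    }

With buffering on, nginx pulls the response from Starman as fast as the worker can produce it, then feeds it to the slow client on its own, so the Starman worker is freed almost immediately.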