Can nginx be used as a reverse proxy for a backend WebSocket server?

Posted on 2024-08-25 00:49:08

We're working on a Ruby on Rails app that needs to take advantage of html5 websockets. At the moment, we have two separate "servers" so to speak: our main app running on nginx+passenger, and a separate server using Pratik Naik's Cramp framework (which is running on Thin) to handle the websocket connections.

Ideally, when it comes time for deployment, we'd have the rails app running on nginx+passenger, and the websocket server would be proxied behind nginx, so we wouldn't need to have the websocket server running on a different port.

Problem is, in this setup it seems that nginx is closing the connections to Thin too early. The connection is successfully established to the Thin server, then immediately closed with a 200 response code. Our guess is that nginx doesn't realize that the client is trying to establish a long-running connection for websocket traffic.

Admittedly, I'm not all that savvy with nginx config, so, is it even possible to configure nginx to act as a reverse proxy for a websocket server? Or do I have to wait for nginx to offer support for the new websocket handshake stuff? Assuming that having both the app server and the websocket server listening on port 80 is a requirement, might that mean I have to have Thin running on a separate server without nginx in front for now?

Thanks in advance for any advice or suggestions. :)

-John

Comments (7)

木緿 2024-09-01 00:49:08

You can't use nginx for this currently [this is no longer true], but I would suggest looking at HAProxy. I have used it for exactly this purpose.

The trick is to set long timeouts so that the socket connections are not closed. Something like:

timeout client  86400000 # In the frontend
timeout server  86400000 # In the backend

If you want to serve, say, a Rails and a Cramp application on the same port, you can use ACL rules to detect a WebSocket connection and route it to a different backend. So your HAProxy frontend config would look something like:

frontend all 0.0.0.0:80
  timeout client    86400000
  default_backend   rails_backend
  acl websocket hdr(Upgrade)    -i WebSocket
  use_backend   cramp_backend   if websocket

For completeness, the backend would look like:

backend cramp_backend
  timeout server  86400000
  server cramp1 localhost:8090 maxconn 200 check
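
The frontend above also references a rails_backend that the answer doesn't show. A minimal sketch of it, assuming the Rails app listens on localhost:3000 (the server name and port are placeholders, not from the original answer):

backend rails_backend
  server rails1 localhost:3000 maxconn 200 check
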
哆兒滾 2024-09-01 00:49:08

How about using my nginx_tcp_proxy_module module?

This module is designed as a general-purpose TCP proxy for Nginx. I think it's also suitable for websockets. I've just added a tcp_ssl_module in the development branch.
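
For reference, this module is configured with a top-level tcp { } block in nginx.conf rather than the usual http { } block. The following is only a sketch based on the module's README of that era; the ports are placeholders and directive details may vary between versions:

tcp {
    upstream websocket_backend {
        server 127.0.0.1:8090;
        # optional health checking provided by the module
        check interval=3000 rise=2 fall=5 timeout=1000;
    }

    server {
        listen 8080;
        proxy_pass websocket_backend;
    }
}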

神爱温柔 2024-09-01 00:49:08

nginx (>= 1.3.13) now supports reverse proxying websockets.

# the upstream server doesn't need a prefix! 
# no need for wss:// or http:// because nginx will upgrade to http1.1 in the config below
upstream app_server {
    server localhost:3000;
}

server {
    # ...

    location / {
        proxy_pass http://app_server;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;

        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        proxy_redirect off;
    }
}
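
A common refinement, taken from the nginx documentation on WebSocket proxying rather than from this answer, is to derive the Connection header from the client's Upgrade header via a map block, so that ordinary HTTP requests through the same location don't send "Connection: upgrade":

# in the http { } block
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# then inside the location block, instead of the hard-coded value:
# proxy_set_header Connection $connection_upgrade;
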
尛丟丟 2024-09-01 00:49:08

Out of the box (i.e. official sources) Nginx can establish only HTTP 1.0 connections to an upstream (= backend), which means no keepalive is possible: Nginx will select an upstream server, open a connection to it, proxy, cache (if you want) and close the connection. That's it.

This is the fundamental reason frameworks requiring persistent connections to the backend would not work through Nginx (no HTTP/1.1 = no keepalive and no websockets, I guess). Despite this disadvantage there is an evident benefit: Nginx can choose among several upstreams (load balancing) and fail over to a live one in case some of them fail.

Edit: Nginx has supported HTTP 1.1 to backends and keepalive since version 1.1.4. "fastcgi" and "proxy" upstreams are supported. Here are the docs.
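
To illustrate that edit, a minimal sketch of upstream keepalive with HTTP/1.1 to the backend (the backend address and the pool size of 16 are placeholders):

upstream app_backend {
    server 127.0.0.1:8080;
    # keep up to 16 idle connections to the backend per worker
    keepalive 16;
}

server {
    listen 80;

    location / {
        proxy_http_version 1.1;
        # clear the Connection header so the upstream connection stays open
        proxy_set_header Connection "";
        proxy_pass http://app_backend;
    }
}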

情未る 2024-09-01 00:49:08

For anyone wondering about the same problem, nginx now officially supports HTTP 1.1 upstreams. See the nginx documentation for "keepalive" and "proxy_http_version 1.1".

偷得浮生 2024-09-01 00:49:08

How about Nginx with the new HTTP Push module: http://pushmodule.slact.net/. It takes care of the connection juggling (so to speak) that one might otherwise have to worry about with a reverse proxy. It is certainly a viable alternative to WebSockets, which are not fully in the mix yet. I know the developer of the HTTP Push module is still working on a fully stable version, but it is in active development. There are versions of it being used in production codebases. To quote the author, "A useful tool with a boring name."
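
For context, the module exposed publisher and subscriber locations tied together by a channel id. The directive names below (push_publisher, push_subscriber, $push_channel_id) are recalled from the module's documentation of that time and should be checked against its README; treat this as an illustrative sketch only:

# publisher: the application POSTs messages here
location /publish {
    set $push_channel_id $arg_id;
    push_publisher;
}

# subscriber: browsers long-poll here and receive messages for the channel
location /subscribe {
    set $push_channel_id $arg_id;
    push_subscriber;
}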

终弃我 2024-09-01 00:49:08

I use nginx to reverse proxy to a comet-style server with long-polling connections and it works great. Make sure you configure proxy_send_timeout and proxy_read_timeout to appropriate values. Also make sure the back-end server that nginx is proxying to supports HTTP 1.0, because I don't think nginx's proxy module does HTTP 1.1 yet.

Just to clear up some confusion in a few of the answers: keepalive allows a client to reuse a connection to send another HTTP request. It has nothing to do with long polling or holding a connection open until an event occurs, which is what the original question was asking about. So it doesn't matter that nginx's proxy module only supports HTTP 1.0, which does not have keepalive.
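
As a concrete example of the timeout advice in this answer (a sketch only; the backend address is a placeholder and one hour is an arbitrary value):

location /comet/ {
    proxy_pass http://127.0.0.1:8090;

    # hold long-poll requests open well beyond the 60s defaults
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;

    # pass the delayed response through as soon as the backend writes it
    proxy_buffering off;
}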
