HAProxy + Socket.IO + TornadIO keeps disconnecting on heartbeats

Posted 2024-10-21 05:57:16

So, I am having trouble load balancing socket.io on port 8888 using HAProxy. My setup is NGINX listening on port 80, load balancing between Tornado web server instances that also run on port 80. Then, on the same load balancer, I have an HAProxy instance listening on port 8888, forwarding requests to OTHER computers in the network that host TornadIO server instances, also running on 8888. The connection works for the first 30 seconds or so, then begins to disconnect and reconnect repeatedly. What's important to notice is that it seems to break on the first heartbeat attempt ... is the heartbeat a different kind of exchange that HAProxy would have trouble with, as opposed to the initial connection attempt and the first few messages?

Interestingly, this DOES NOT happen when the TornadIO instance runs on the same computer as the load balancer, even with HAProxy in the path (HAProxy listening on port 8888 and, say, the TornadIO instance on port 9000).

It's important to note that TornadIO does not throw any exceptions or produce any alarming output during this entire process, which suggests the problem is not in my server code but somewhere in the proxy layer?

Let it also be known that I am using RabbitMQ to synchronize all the TornadIO clusters, not that I think it matters (and HAProxy does not touch Rabbit).

Here is my HAProxy setup:

global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

listen http-in
    balance roundrobin
    option forwardfor # This sets X-Forwarded-For
    timeout queue 5000
    timeout server 86400000  # 24h; overrides the 50s "timeout server" in defaults
    timeout connect 86400000 # 24h; overrides the 5s "timeout connect" in defaults
    bind *:8888
    server server1 18.181.3.164:8888 # ether1
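
For what it's worth, a WebSocket-friendly variant I have seen suggested (but have not verified) is to run the socket.io listener in TCP mode, so HAProxy stops interpreting the HTTP stream after the initial handshake instead of applying HTTP-mode processing to the long-lived connection. A minimal sketch, reusing the backend address above; the 24h timeouts are placeholders:

listen socketio-in
    mode tcp                 # relay raw bytes; no HTTP parsing after the handshake
    bind *:8888
    balance roundrobin
    timeout client 86400000  # keep long-lived socket.io connections open (24h)
    timeout server 86400000
    server server1 18.181.3.164:8888 # ether1

Note that option forwardfor has to be dropped in TCP mode, since HAProxy cannot inject X-Forwarded-For into a stream it does not parse.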

In my nginx configuration, I have inserted:

location ~* \.(eot|ttf|woff)$ {
    add_header Access-Control-Allow-Origin *;
}

to make sure it's not an access-control problem (the browser console does not say it is, so it shouldn't be).

I have also tried adding

option http-server-close
option http-pretend-keepalive

to my HAProxy config, but to no avail.

Any ideas?

** I am testing in Chrome 9.0.597 and Firefox 3.6 (so both with WebSockets and without; the same thing happens either way).


Comments (1)

紙鸢 2024-10-28 05:57:16

I don't know about the other components involved in this setup, but the last time I checked (a few months ago), nginx did not yet support the Upgrade+101 HTTP mechanism used by WebSocket. So maybe your test works until the connection is upgraded? You should definitely enable logging on HAProxy; you'd know where connections are closed and why. BTW, upgrading to 1.4.13 will fix a few logging issues that will help you troubleshoot with more certainty.
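
For reference, enabling logging in an HAProxy config of that vintage looks roughly like this; a minimal sketch assuming a local syslog daemon accepting UDP datagrams on 127.0.0.1 (the local0 facility is an arbitrary choice):

global
    daemon
    log 127.0.0.1 local0  # send log lines to the local syslog daemon

defaults
    log global            # each proxy inherits the global log target
    mode http
    option httplog        # detailed per-request lines with timers and termination flags

The two-character termination-state field in each httplog line (for example CD or sD) records which side closed the connection and in which phase, which is exactly what is needed to tell whether the client, HAProxy, or the TornadIO backend is dropping the heartbeat.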
