WebSocket connection fails to establish behind AWS ALB and an NGINX reverse proxy load balancer
Setup introduction: I have a Node.js app with three services: admin, client, and server. All three run as individual Docker containers. My setup consists of 2 EC2 instances behind an AWS Application Load Balancer, with each EC2 instance running one container each of the admin and client services, and the server service scaled to 2 containers using the docker-compose --scale option. I'm using containerised nginx as a reverse proxy and load balancer. I have a target group with both instances as registered targets.
Problem description: The admin service needs to communicate with the server service via WebSocket, and I'm using socket.io for that purpose. This scenario therefore requires sticky sessions to establish the WebSocket connection. I have enabled sticky sessions at the instance level with nginx IP hashing in the upstream block for the server service. At the ALB level I've enabled sticky sessions for the target group with the load-balancer-generated cookie type. When I access the admin service's endpoint in Chrome and inspect it with DevTools, I can see that the WebSocket connection fails to establish, with the exact errors being:
WebSocket connection to '<URL>' failed: WebSocket is closed before the connection is established.
Failed to load resource: the server responded with a status of 400 ()
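For reference, this is roughly how the admin front end (served from admin.mydomain.com) would open the cross-origin Socket.IO connection; only the hostname comes from the setup described above, the rest is an assumption rather than my actual application code:

// Hypothetical client-side connection from the admin UI to the server service.
// Only the hostname is taken from the setup above; everything else is illustrative.
import { io } from "socket.io-client";

// Socket.IO first issues HTTP long-polling requests and then upgrades to WebSocket,
// so every one of those requests must reach the same backend container.
const socket = io("https://server.mydomain.com");

socket.on("connect", () => console.log("connected as", socket.id));
socket.on("connect_error", (err) => console.error("connect_error:", err.message));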
This is my nginx conf for the server service:
upstream webinar_server {
    hash $remote_addr consistent;
    server webinar-server_webinar_server_1:8000;
    server webinar-server_webinar_server_2:8000;
}

server {
    listen 80;
    server_name server.mydomain.com;

    location / {
        proxy_pass http://webinar_server/;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_buffering off;
    }
}
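For context, each of the two server containers behind this upstream is a Node.js process with Socket.IO attached to an HTTP server on port 8000. The question doesn't include that code, so the following is only a sketch under that assumption:

// Hypothetical server-side setup for the server service (port 8000 matches the upstream above).
const http = require("http");
const { Server } = require("socket.io");

const httpServer = http.createServer();
const io = new Server(httpServer, {
  cors: {
    origin: "https://admin.mydomain.com", // the admin front end that opens the socket
    credentials: true,                    // needed if the client sends cookies (e.g. the ALB stickiness cookie)
  },
});

io.on("connection", (socket) => {
  console.log("client connected:", socket.id);
});

httpServer.listen(8000);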
This is the nginx conf for the admin service:
server {
    listen 80;
    server_name admin.mydomain.com;

    location / {
        proxy_pass http://webinar_admin:5001;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_buffering off;
    }
}
What I've tried: I implemented a simpler setup to test the stickiness of the infrastructure, and it worked as expected. I had 2 EC2 instances behind the AWS ALB, each instance running 2 basic containerised nginx web servers, each serving a different HTML page. These web servers sat behind a containerised nginx reverse proxy / load balancer as in my original setup. In this case both the instance-level stickiness using the nginx hash function and the ALB-level target group stickiness worked as expected.
For the original setup I'm trying to implement, when I removed one of the instances from the target group (leaving only one registered target), the instance-level nginx stickiness worked fine, routing to the correct server container (since there are 2 server containers). But with target-group-level stickiness I get the error mentioned above.
1 answer:
As you can see, the Socket.IO client doesn't handle cookies out of the box, and the ALB uses a cookie to route requests to the right target.
To fix this issue you need to put that code on the client side.
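The snippet the answer refers to is missing from the post. A minimal sketch of the usual client-side fix, assuming Socket.IO v3 or later (where cookies are not sent on cross-origin requests by default), is to enable withCredentials so the browser includes the ALB-generated stickiness cookie (AWSALB) on every polling request:

// Hypothetical client-side configuration; the original snippet was not included in the answer.
import { io } from "socket.io-client";

const socket = io("https://server.mydomain.com", {
  // Send cookies with the cross-origin polling requests so the ALB keeps routing
  // the whole Socket.IO session to the same target.
  withCredentials: true,
});

For this to take effect the server must also allow credentials in its CORS configuration (credentials: true together with an explicit origin, as in the server-side sketch earlier), otherwise the browser will block the credentialed responses.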