Go HTTP proxy - preventing reuse of proxy connections
I have a set of SOCKS proxies, which are sitting behind a load balancer (an AWS Network Load Balancer, to be specific). When the load balancer receives a request on the SOCKS port, it forwards the request to the Docker container (proxy) with the fewest active connections.
I have a Go application that uses the standard HTTP package. I'd like to use the built-in support that this package has for SOCKS proxies, and point it to my load balancer's address.
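For reference, this is roughly how I intend to wire it up. It's a minimal sketch, assuming the load balancer's SOCKS listener is at a placeholder address (`my-nlb.example.com:1080`); the standard `net/http` Transport understands `socks5://` proxy URLs.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"time"
)

func main() {
	// Placeholder address for the load balancer's SOCKS port.
	proxyURL, err := url.Parse("socks5://my-nlb.example.com:1080")
	if err != nil {
		panic(err)
	}

	client := &http.Client{
		Timeout: 30 * time.Second,
		Transport: &http.Transport{
			// The standard Transport dials socks5:// proxies itself.
			Proxy: http.ProxyURL(proxyURL),
		},
	}

	resp, err := client.Get("https://example.com/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```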
My question is, does the Go HTTP client:
- Open and maintain a TCP connection to the SOCKS proxy I point it to (a single TCP connection for all HTTP connections that it keeps open),
- Open and maintain a TCP connection to the SOCKS proxy for each HTTP connection (i.e. a one-to-one relationship of SOCKS proxy connections to open/idle HTTP connections),
- Open and subsequently close a TCP connection to the SOCKS proxy for each HTTP/S request, regardless of the concurrent/idle connection settings on the HTTP client, or
- Do something else entirely?
If the answer is (1) or (2), is there a way to prevent this behavior and ensure it always re-connects to the SOCKS proxy for each HTTP/S request?
The reason I'd like to do this is to ensure that the requests are balanced across the many proxies, instead of the clients (each of which may have many concurrent requests) being balanced across the proxies.
Specifically, say I have 3 proxies behind the load balancer, labeled A, B, and C. And say I have two applications, 1 and 2. Application 1 makes 5 HTTP requests per second, and Application 2 makes 500 HTTP requests per second.
What I want to avoid is having Application 1 make a SOCKS connection to the load balancer, and having it forwarded to proxy A, and that connection to the proxy being maintained, while Application 2 makes a SOCKS connection to the load balancer and has it forwarded to proxy B, which is also maintained. If that occurs, proxy A will be handling 5 requests/second while proxy B will be handling 500 requests/second, which is obviously massively imbalanced. I'd rather all three proxies each get ~168 requests/second.
1 Answer
SOCKS is protocol agnostic. It has no idea what an HTTP request is, only what a TCP connection is, and that is what gets tunneled by the proxy. Thus every TCP connection opened by the HTTP stack means a new connection through the proxy - there is no reuse at the SOCKS level. If you want to make sure that every HTTP/S request gets its own connection through the proxy, you must disable HTTP keep-alive. See this question on how to do that.
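As a rough sketch of what disabling keep-alive looks like (the proxy address is a placeholder, same as above), setting `DisableKeepAlives` on the Transport makes the client open a fresh TCP connection per request, so each request is dialed through the load balancer again:

```go
package main

import (
	"net/http"
	"net/url"
)

// newClient returns a client that never reuses connections:
// every request dials the SOCKS proxy (i.e. the load balancer) anew.
func newClient() *http.Client {
	proxyURL, _ := url.Parse("socks5://my-nlb.example.com:1080") // placeholder address

	return &http.Client{
		Transport: &http.Transport{
			Proxy: http.ProxyURL(proxyURL),
			// No idle connections are kept, so each request gets its own
			// TCP connection and it is closed once the response is read.
			DisableKeepAlives: true,
		},
	}
}
```

The trade-off is the per-request overhead of a new TCP handshake, SOCKS handshake, and (for HTTPS) TLS handshake, which is the price of letting the load balancer rebalance on every request.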