HTTP proxy connection sharing

Published 2024-08-01 14:22:14

I am attempting to implement an HTTP tunnel in Java using the Netty framework, using techniques similar to those employed by web browsers to simulate a full-duplex connection. I wish to implement this in such a way that it will work in the presence of real-world HTTP proxies. I am attempting to do this without using a servlet container, to avoid unnecessary overhead in terms of library dependencies, and because the servlet API does not fit the usage patterns of a full-duplex HTTP tunnel.

I am aware of some restrictions that HTTP proxies impose that "break" some potential uses of the HTTP protocol:

  1. HTTP Pipelining may not be honoured beyond the connection between the client and the proxy. i.e. The proxy may send a single request and wait for the response before sending the next request, even if the client has dispatched multiple pipelined requests to the proxy.
  2. Chunked encoding may not be honoured beyond the connection between the server and the proxy in a similar fashion: the server may send a response back in chunks, but the proxy may wait for the final chunk before dispatching the full, de-chunked response to the client.
  3. HTTP CONNECT is often only allowed for SSL/TLS ports, typically only port 443, so this cannot be used as a sneaky way to get an unfettered TCP connection to the outside world.
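To make point 3 concrete, this is a minimal sketch (not part of the question's implementation) of the request a client writes to a proxy to open an HTTP CONNECT tunnel; the host name and class name are placeholders. Many proxies will refuse this unless the target port is 443:

```java
// Builds the raw bytes a client sends to an HTTP proxy to request a
// CONNECT tunnel to host:port. A permissive proxy answers
// "HTTP/1.1 200 Connection established" and then relays raw bytes in
// both directions; a restrictive one rejects any port other than 443.
public class ConnectRequest {
    static String build(String host, int port) {
        return "CONNECT " + host + ":" + port + " HTTP/1.1\r\n"
             + "Host: " + host + ":" + port + "\r\n"
             + "\r\n";
    }

    public static void main(String[] args) {
        System.out.print(build("example.com", 443));
    }
}
```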

However, there is one additional possibility that I am not sure about: do real-world HTTP proxies also share a persistent connection to a server between multiple clients? For instance:

  • Client A sends requests A1, A2, and A3 to server X
  • Client B sends requests B1 and B2 to server X
  • Client C sends requests C1, C2 and C3 to server X

Would the proxy then potentially open a single connection to server X and send messages in the order:

A1, A2, B1, C1, B2, A3, C2, C3

or a similar order that preserves the ordering from each individual client, but potentially interleaved? Or, even worse, could the proxy open multiple connections to the server and scatter messages from each client across the connections, i.e.

Connection 1: A1, C1, C2, C3
Connection 2: B1, B2, A2, A3

If so, my approach requires more thought as I potentially need to demultiplex these messages into different queues for each tunnel, and cannot simply rely on identifying a connection as being used for a particular client.
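The demultiplexing the paragraph above describes could be sketched as follows, assuming each message carries an explicit tunnel id (which would have to travel in the message itself, e.g. in a custom header; that mechanism is an assumption of this sketch, not anything the HTTP spec provides):

```java
import java.util.ArrayDeque;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Queue;

// Sketch: route messages to a per-tunnel queue keyed by an explicit
// tunnel id rather than by the connection they arrived on, so that
// per-tunnel order survives even if a proxy interleaves tunnels
// across one or several upstream connections.
public class TunnelDemux {
    private final Map<String, Queue<String>> perTunnel = new LinkedHashMap<>();

    // Append a message to its tunnel's queue.
    void route(String tunnelId, String message) {
        perTunnel.computeIfAbsent(tunnelId, id -> new ArrayDeque<>()).add(message);
    }

    // Messages for one tunnel, in the order that tunnel sent them.
    Queue<String> queueFor(String tunnelId) {
        return perTunnel.getOrDefault(tunnelId, new ArrayDeque<>());
    }
}
```

Feeding in the arrivals from the two hypothetical connections above in any interleaved order still yields A1, A2, A3 on tunnel A's queue, B1, B2 on B's, and so on.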

Does anyone know of any good resources that describe the quirks of commonly used HTTP proxies and stateful inspecting firewalls?

Comments (2)

一城柳絮吹成雪 2024-08-08 14:22:14

I do know that NetScaler can be configured to use keepalive between it and the server, regardless of the keepalive setting on the client.

落叶缤纷 2024-08-08 14:22:14

The HTTP 1.1 spec (RFC 2616) contains this paragraph as section 8.1.4, Practical Considerations:

Clients that use persistent connections SHOULD limit the number of simultaneous connections that they maintain to a given server. A single-user client SHOULD NOT maintain more than 2 connections with any server or proxy. A proxy SHOULD use up to 2*N connections to another server or proxy, where N is the number of simultaneously active users. These guidelines are intended to improve HTTP response times and avoid congestion.

I don't know what real world proxy implementations do with this requirement, though.

Maybe you'll find something in the Caching Tutorial, even if it's only useful links. The ultimate action might be to send a mail to Mark Nottingham ([email protected]). If he doesn't know, nobody does.
