Is there any HTTP proxy that explicitly and configurably supports request/response buffering and delayed connections?

Posted 2024-07-04 15:26:11

When dealing with mobile clients it is very common to have multi-second delays during the transmission of HTTP requests. If you are serving pages or services out of a prefork Apache, the child processes will be tied up for seconds serving a single mobile client, even if your app server logic finishes in 5 ms. I am looking for an HTTP server, balancer, or proxy server that supports the following:

  1. A request arrives at the proxy. The proxy starts buffering the request, including headers and POST/PUT bodies, in RAM or on disk. The proxy DOES NOT open a connection to the backend server. This is probably the most important part.

  2. The proxy server stops buffering the request when:

    • A size limit has been reached (say, 4KB), or
    • The request has been received completely, headers and body
  3. Only now, with (part of) the request in memory, a connection is opened to the backend and the request is relayed.

  4. The backend sends back the response. Again, the proxy server starts buffering it immediately (up to a more generous size, say 64KB).

  5. Since the proxy has a big enough buffer, the backend response is stored completely in the proxy server in a matter of milliseconds, and the backend process/thread is free to process more requests. The backend connection is immediately closed.

  6. The proxy sends the response back to the mobile client, as fast or as slow as the client is capable of receiving it, without a backend connection tying up resources.

I am fairly sure you can do 4-6 with Squid, and nginx appears to support 1-3 (and looks fairly unique in this respect). My question is: is there any proxy server that emphasizes these buffering and not-opening-connections-until-ready capabilities? Maybe there is just a bit of Apache config-fu that makes this buffering behaviour trivial? Is there one that is not a dinosaur like Squid and that supports a lean, single-process, asynchronous, event-based execution model?

(Side rant: I would be using nginx, but it doesn't support chunked POST bodies, which makes it useless for serving mobile clients. Yes, cheap $50 handsets love chunked POSTs... sigh.)
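
To make the six steps concrete, here is a minimal sketch of the whole flow in plain Java (chosen only because one of the answers below suggests rolling your own in Java); it is illustrative, not a production proxy. The listen port, backend address, and the decision to skip header forwarding, size caps, and error handling are all simplifying assumptions:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BufferingRelay {
    // Placeholder backend address; a real deployment would make this configurable.
    private static final URI BACKEND = URI.create("http://127.0.0.1:8080");
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        HttpServer front = HttpServer.create(new InetSocketAddress(8000), 0);
        front.createContext("/", exchange -> {
            // Steps 1-2: slurp the whole request body from the (slow) client into RAM
            // before any backend connection exists. A real proxy would also enforce
            // the 4KB-style cap described above.
            byte[] body = exchange.getRequestBody().readAllBytes();

            // Step 3: only now relay the buffered request to the backend; ofByteArray
            // gives the backend an explicit Content-Length instead of a slow dribble.
            HttpRequest.BodyPublisher publisher = body.length == 0
                    ? HttpRequest.BodyPublishers.noBody()
                    : HttpRequest.BodyPublishers.ofByteArray(body);
            HttpRequest relayed = HttpRequest
                    .newBuilder(BACKEND.resolve(exchange.getRequestURI()))
                    .method(exchange.getRequestMethod(), publisher)
                    .build();

            // Steps 4-5: buffer the entire backend response; the backend worker is
            // released as soon as its bytes have been read.
            HttpResponse<byte[]> response;
            try {
                response = CLIENT.send(relayed, HttpResponse.BodyHandlers.ofByteArray());
            } catch (InterruptedException e) {
                throw new RuntimeException(e);
            }

            // Step 6: dribble the buffered response out to the mobile client at
            // whatever pace it can manage, with no backend resources held.
            byte[] answer = response.body();
            exchange.sendResponseHeaders(response.statusCode(),
                    answer.length == 0 ? -1 : answer.length);
            if (answer.length > 0) {
                exchange.getResponseBody().write(answer);
            }
            exchange.close();
        });
        front.start();
    }
}
```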

Comments (5)

原来是傀儡 2024-07-11 15:26:11

Fiddler, a free tool from Telerik, does at least some of the things you're looking for.

Specifically, go to Rules | Custom Rules... and you can add arbitrary JavaScript code at all points during the connection. You could simulate some of the things you need with sleep() calls.

I'm not sure this method gives you the fine buffering control you want, however. Still, something might be better than nothing?
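
Fiddler's custom rules are written in its JScript.NET dialect, so as a rough stand-in outside Fiddler, the same sleep()-based simulation can be done with a small standalone Java test client that trickles a chunked POST through whichever proxy is under test. The host, port, path, payload, and delay below are all placeholder assumptions:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SlowChunkedClient {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint: point this at the proxy under test.
        URL url = new URL("http://127.0.0.1:8000/upload");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // Force Transfer-Encoding: chunked, like the cheap handsets in the question.
        conn.setChunkedStreamingMode(16);

        byte[] payload = "field=value&comment=sent-very-slowly".getBytes(StandardCharsets.UTF_8);
        try (OutputStream out = conn.getOutputStream()) {
            for (byte b : payload) {
                out.write(b);
                out.flush();
                Thread.sleep(200);   // ~200 ms per byte: a multi-second upload
            }
        }

        // If the proxy buffers properly, the backend only sees the request once the
        // whole body has dribbled in, so its worker is never tied up for seconds.
        System.out.println("Backend answered with HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}
```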

私藏温柔 2024-07-11 15:26:11

Unfortunately, I'm not aware of a ready-made solution for this. In the worst case, consider developing it yourself, say with Java NIO; it shouldn't take more than a week.
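
To give a feel for what that DIY route involves, here is a stripped-down sketch of the crucial step (buffer the request before dialing the backend) using java.nio channels. It uses blocking channels for brevity, stops at a 4 KB cap instead of parsing HTTP framing, and leaves response handling to the caller; the class and method names are made up for illustration:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class BufferThenConnect {
    private static final int REQUEST_CAP = 4 * 1024;   // cap from the question

    /**
     * Steps 1-3 of the question: soak up the client's (slow) request bytes into
     * local memory first, and only then connect to the backend and relay them.
     * Header/Content-Length parsing is omitted; we stop at the cap or when the
     * client stops sending, which a real implementation would handle properly
     * (and would do without blocking, via a Selector).
     */
    static SocketChannel bufferThenConnect(SocketChannel client,
                                           InetSocketAddress backendAddr) throws IOException {
        ByteBuffer request = ByteBuffer.allocate(REQUEST_CAP);
        while (request.hasRemaining() && client.read(request) != -1) {
            // no backend socket exists yet; only this cheap buffer is tied up
        }
        request.flip();

        // Only now does the backend see a connection, and it receives the whole
        // request in one quick burst instead of a multi-second dribble.
        SocketChannel backend = SocketChannel.open(backendAddr);
        while (request.hasRemaining()) {
            backend.write(request);
        }
        return backend;   // caller buffers/relays the response (steps 4-6)
    }
}
```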

裸钻 2024-07-11 15:26:11

Squid 2.7 can support 1-3 with a patch:

I've tested this and found it to work well, with the proviso that it only buffers to memory, not disk (unless it swaps, of course, and you don't want this), so you need to run it on a box that's appropriately provisioned for your workload.

Chunked POSTs are a problem for most servers and intermediaries. Are you sure you need to support them? Usually, clients should retry the request when they get a 411 Length Required.
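
For what it's worth, the usual client-side reaction to a 411 is exactly that retry: give up on streaming, buffer the body, and resend it with an explicit Content-Length. A rough Java illustration, with the endpoint and payload as placeholders:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ChunkedWithFallback {
    public static void main(String[] args) throws Exception {
        byte[] body = "field=value".getBytes(StandardCharsets.UTF_8);  // placeholder payload
        URL url = new URL("http://127.0.0.1:8000/upload");             // placeholder endpoint

        // First attempt: stream the body chunked (no Content-Length up front).
        int status = post(url, body, true);
        if (status == HttpURLConnection.HTTP_LENGTH_REQUIRED) {       // 411
            // Server or intermediary refuses chunked uploads: retry with the body
            // fully buffered and a fixed Content-Length.
            status = post(url, body, false);
        }
        System.out.println("Final status: " + status);
    }

    private static int post(URL url, byte[] body, boolean chunked) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        if (chunked) {
            conn.setChunkedStreamingMode(0);               // default chunk size
        } else {
            conn.setFixedLengthStreamingMode(body.length); // sends Content-Length
        }
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body);
        }
        int status = conn.getResponseCode();
        conn.disconnect();
        return status;
    }
}
```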

第几種人 2024-07-11 15:26:11

What about using both nginx and Squid (client -> Squid -> nginx -> backend)? When returning data from a backend, Squid does convert it from C-T-E: chunked (chunked transfer encoding) to a regular stream with Content-Length set, so maybe it can normalize POSTs as well.
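
If you do try that chain, one cheap way to confirm whether the POST body really has been normalized by the time it reaches the application is a throwaway probe backend that just reports which framing headers it saw; the port is a placeholder:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class FramingProbe {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            int size = exchange.getRequestBody().readAllBytes().length;
            // A Content-Length header means the proxy chain normalized the body;
            // Transfer-Encoding: chunked means it passed through as-is.
            String report = "Content-Length: "
                    + exchange.getRequestHeaders().getFirst("Content-Length") + "\n"
                    + "Transfer-Encoding: "
                    + exchange.getRequestHeaders().getFirst("Transfer-Encoding") + "\n"
                    + "body bytes actually read: " + size + "\n";
            byte[] out = report.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, out.length);
            exchange.getResponseBody().write(out);
            exchange.close();
        });
        server.start();
        System.out.println("Probe backend listening on :8080");
    }
}
```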
