Disabling request buffering in nginx

Published 2024-11-06 06:25:06


It seems that nginx buffers requests before passing them to the upstream server. While that is OK for most of my cases, here it is very bad :)

My case is like this:

I have nginx as a frontend server proxying 3 different servers:

  1. apache with a typical php app
  2. shaveet (an open-source Comet server) built by me with python and gevent
  3. a file upload server, again built with gevent, that proxies uploads to rackspace cloudfiles
    while accepting the upload from the client.

#3 is the problem. Right now nginx buffers the whole request and only then sends it to the file upload server, which in turn sends it to cloudfiles, instead of forwarding each chunk as it arrives (which makes the upload faster, as I can push 6-7MB/s to cloudfiles).

The reason I use nginx is to have 3 different domains on one IP; if I can't do that, I will have to move the file upload server to another machine.
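The three-domains-on-one-IP setup described above can be sketched with name-based virtual hosts; all hostnames and backend ports below are hypothetical placeholders, not values from the question:

```nginx
# Three name-based virtual hosts sharing one IP, each proxying to a
# different local backend. server_name values and ports are made up.
server {
    listen 80;
    server_name app.example.com;      # 1: apache + php app
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}

server {
    listen 80;
    server_name comet.example.com;    # 2: shaveet comet server
    location / {
        proxy_pass http://127.0.0.1:8081;
    }
}

server {
    listen 80;
    server_name upload.example.com;   # 3: gevent upload proxy
    location / {
        proxy_pass http://127.0.0.1:8082;
    }
}
```

nginx picks the `server` block by matching the request's Host header against `server_name`, which is what allows several domains to share a single listening IP and port.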


Comments (4)

风为裳 2024-11-13 06:25:06


Once this [1] feature is implemented, Nginx will be able to act as a reverse proxy without buffering uploads (large client requests).
It should land in 1.7, which is the current mainline.

[1] http://trac.nginx.org/nginx/ticket/251

Update

This feature has been available since 1.7.11 via the directive

proxy_request_buffering on | off;

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering
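On nginx >= 1.7.11, the unbuffered upload path this answer describes might look like the following sketch; the server name and backend address are hypothetical:

```nginx
server {
    listen 80;
    server_name upload.example.com;   # hypothetical upload vhost

    location / {
        # Stream the request body to the upstream as it arrives,
        # instead of spooling the whole body in nginx first
        # (requires nginx >= 1.7.11).
        proxy_request_buffering off;

        # Per the nginx docs, unbuffered request streaming uses
        # chunked transfer encoding, which requires HTTP/1.1 for
        # the upstream connection.
        proxy_http_version 1.1;

        proxy_pass http://127.0.0.1:8082;
    }
}
```

With this in place, each chunk the client sends can be forwarded to the gevent upload server immediately, which is exactly the behavior the question asks for.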

尘曦 2024-11-13 06:25:06


According to the Gunicorn docs, they suggest you use nginx precisely to buffer clients and prevent slowloris attacks, so this buffering is likely a good thing. However, further down the page I linked there is an option that talks about removing the proxy buffer; it's not entirely clear whether this is within nginx, but it looks as though it is. Of course, this assumes you have Gunicorn running, which you do not. Perhaps it's still useful to you.

EDIT: I did some research, and that buffer-disabling option in nginx is for outbound, long-polling data. Nginx states on its wiki that inbound requests have to be buffered before being sent upstream:

"Note that when using the HTTP Proxy Module (or even when using FastCGI), the entire client request will be buffered in nginx before being passed on to the backend proxied servers. As a result, upload progress meters will not function correctly if they work by measuring the data received by the backend servers."
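The outbound, long-polling case this answer's research turned up is controlled by a separate directive, proxy_buffering, which applies to the upstream's response rather than the client's request. A minimal sketch of the distinction (backend address hypothetical):

```nginx
location /events {
    proxy_pass http://127.0.0.1:8081;  # hypothetical comet backend

    # Disables buffering of the *response* from the upstream, which
    # helps long-polling and streaming endpoints. It does not change
    # how nginx handles the inbound request body, which is why it
    # cannot solve the upload problem in the question.
    proxy_buffering off;
}
```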

初见你 2024-11-13 06:25:06


Now available in nginx since version 1.7.11.

See the documentation:
http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering

To disable buffering of the upload, specify:

proxy_request_buffering off;

梦过后 2024-11-13 06:25:06

I'd look into haproxy to fulfill this need.
