How can I make nginx return a static response and send the request headers to an application?

Posted 2024-12-19 13:28:11

I am making a high-load web statistics system by embedding an <img> tag into sites. The thing I want to do is:

  1. nginx gets a request for the image from some host,
  2. it answers that host with a small 1px static image from the filesystem,
  3. at the same time it somehow passes the request's headers to an application and closes the connection to the host.

I am working with Ruby and I'm going to make a pure-Rack app to get the headers and put them into a queue for further calculations (a rough sketch of what I have in mind is below).

The problem I can't solve is: how can I configure nginx to hand the headers to the Rack app and return a static image as the reply, without waiting for a response from the Rack application?

Also, Rack is not required if there is a more common Ruby solution.
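
Something like this minimal config.ru is what I have in mind (just a sketch; the in-memory Queue is a stand-in for whatever real queue the further calculations will read from):

# config.ru -- minimal sketch of the header-collecting Rack app
require 'json'

HEADER_QUEUE = Queue.new

run lambda { |env|
  # Rack exposes the request headers as HTTP_* keys in env
  headers = env.select { |key, _| key.start_with?('HTTP_') }
  HEADER_QUEUE << JSON.generate(headers)

  # reply immediately; nginx should serve the pixel itself
  [200, { 'content-type' => 'text/plain' }, ['ok']]
}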

Comments (4)

淑女气质 2024-12-26 13:28:11

A simple option is to terminate the client connection ASAP while proceeding with the backend process.

server {
    location /test {
        # map 402 error to backend named location
        error_page 402 = @backend;

        # returning 402 triggers the error_page redirect to @backend
        return 402;
    }

    location @backend {
        # close client connection after 1 second
        # Not bothering with sending gif
        send_timeout 1;

        # Pass the request to the backend.
        proxy_pass http://127.0.0.1:8080;
    }
}

The option above, while simple, may result in the client receiving an error message when the connection is dropped. The ngx.say call ensures that a "200 OK" header is sent and, since it is an async call, it will not hold things up. This requires the ngx_lua module.

server {
    location /test {
        content_by_lua '
            -- send a dot to the user; ngx.say is an async call,
            -- so processing continues without waiting on the client
            ngx.say(".")

            -- then hand the request off to the backend subrequest
            local res = ngx.location.capture("/backend")
        ';
    }

    location /backend {
        # named locations not allowed for ngx.location.capture
        # needs "internal" if not to be public
        internal;

        # Pass the request to the backend.
        proxy_pass http://127.0.0.1:8080;
    }
}

A more succinct Lua-based option:

server {
    location /test {
        rewrite_by_lua '
            -- send a dot to the user
            ngx.say(".")

            -- exit rewrite_by_lua and continue the normal event loop
            ngx.exit(ngx.OK)
        ';
        proxy_pass http://127.0.0.1:8080;
    }
}

Definitely an interesting challenge.

秋凉 2024-12-26 13:28:11

After reading here about post_action and reading "Serving Static Content Via POST From Nginx" (http://invalidlogic.com/2011/04/12/serving-static-content-via-post-from-nginx/), I accomplished this using:

server {
  # this is to serve a 200.txt static file 
  listen 8888;
  root /usr/share/nginx/html/;
}
server {
  listen 8999;
  location / {
    rewrite ^ /200.txt break;
  }

  error_page 405 =200 @405;
  location @405 {
    # post_action, after this, do @post
    post_action @post;
    # this nginx serving a static file 200.txt
    proxy_method GET;
    proxy_pass http://127.0.0.1:8888;
  }

  location @post {
    # this will go to an apache-backend server.
    # it will take a long time to process this request
    proxy_method POST;
    proxy_pass http://127.0.0.1/$request_uri;
  }
}

吝吻 2024-12-26 13:28:11

You may be able to accomplish this with post_action (I'm not entirely sure this will work, but it's the only thing I can think of):

server {
  location / {
    post_action @post;
    rewrite ^ /1px.gif break;
  }

  location @post {
    # Pass the request to the backend.
    proxy_pass http://backend$request_uri;

    # Using $request_uri with the proxy_pass will preserve the original request,
    # if you use (fastcgi|scgi|uwsgi)_pass, this would need to be changed.
    # I believe the original headers will automatically be preserved.
  }
}

小耗子 2024-12-26 13:28:11

Why not make use of X-Accel-Redirect (http://wiki.nginx.org/XSendfile)? You can forward the request to your Ruby app, have it set just a response header, and nginx returns the file.
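
The Rack side of that could look roughly like this sketch (the /protected/ prefix, the 1px.gif path and the alias directory are only placeholder assumptions; nginx needs a matching internal location such as "location /protected/ { internal; alias /var/www/static/; }"):

# config.ru -- sketch of the X-Accel-Redirect variant
run lambda { |env|
  # ...enqueue env's HTTP_* headers here for the statistics...

  # tell nginx which internal file to serve; the empty body is ignored
  [200,
   { 'content-type'     => 'image/gif',
     'x-accel-redirect' => '/protected/1px.gif' },
   []]
}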

Update: for a 1x1px transparent GIF it's probably easier to store the data in a variable and return it to the client directly (honestly, it's that small), so I think X-Accel-Redirect is probably overkill in this case.
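
For example, a Rack handler along these lines (just a sketch; the base64 string is the commonly used 43-byte transparent GIF):

# config.ru -- serve the 1x1 transparent GIF straight from memory
require 'base64'

PIXEL = Base64.decode64('R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7')

run lambda { |env|
  # ...enqueue env's HTTP_* headers here for the statistics...
  [200,
   { 'content-type'   => 'image/gif',
     'content-length' => PIXEL.bytesize.to_s },
   [PIXEL]]
}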
