Does an nginx worker process handle two requests at the same time, or one after another?
The really cool part about the filter chain is that each filter doesn't wait for the previous filter to finish; it can process the previous filter's output as it's being produced, sort of like the Unix pipeline. (from here)
I guess the above is talking about code like this at the end of each filter:
if (!chain_contains_last_buffer)
    return ngx_http_next_body_filter(r, in);
That is, nginx chains the filters one by one. But since this call sits at the end of each filter, the next filter has to wait until the current filter is done. I don't see how nginx manages to make each filter not wait for the previous filter to finish.
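For reference, here is a minimal sketch of how a body filter usually splices itself into the chain (the "example" names are made up; ngx_http_top_body_filter and the saved ngx_http_next_body_filter pointer are the real hooks):

#include <ngx_config.h>
#include <ngx_core.h>
#include <ngx_http.h>

/* pointer to whatever body filter was installed before this one */
static ngx_http_output_body_filter_pt  ngx_http_next_body_filter;

static ngx_int_t
ngx_http_example_body_filter(ngx_http_request_t *r, ngx_chain_t *in)
{
    /* ... transform the buffers in 'in' here ... */

    /* hand the (possibly partial) output to the next filter right away */
    return ngx_http_next_body_filter(r, in);
}

/* registered through the module's postconfiguration hook */
static ngx_int_t
ngx_http_example_filter_init(ngx_conf_t *cf)
{
    ngx_http_next_body_filter = ngx_http_top_body_filter;
    ngx_http_top_body_filter = ngx_http_example_body_filter;

    return NGX_OK;
}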
So the above is about the concurrency of nginx filters; next is about the concurrency of nginx request handling:
As we know, nginx uses epoll to deal with requests:
events = epoll_wait(ep, event_list, (int) nevents, timer);

for (i = 0; i < events; i++) {
    ...
    rev->handler(rev);
}
With code like the above, I don't think nginx can handle two requests concurrently; it can only do it one by one (each handler finishes its job fast enough, so the next request gets handled pretty soon), right?
Or is there any gotcha I'm missing?
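To make the question concrete, here is a tiny standalone sketch of the same dispatch pattern (not nginx code; the port, buffer size and echo behaviour are arbitrary). Handlers are invoked strictly one after another inside each loop iteration, so any overlap between requests can only come from handlers returning quickly instead of blocking:

#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>

/* per-connection "handler": do one short piece of work and return */
static void handle_client(int fd)
{
    char    buf[512];
    ssize_t n = read(fd, buf, sizeof(buf));

    if (n <= 0) {
        close(fd);              /* peer closed or error */
        return;
    }

    write(fd, buf, (size_t) n); /* echo back and return immediately */
}

int main(void)
{
    /* error handling omitted for brevity */
    int                 listen_fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in  addr = { 0 };

    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);

    bind(listen_fd, (struct sockaddr *) &addr, sizeof(addr));
    listen(listen_fd, 128);

    int                 ep = epoll_create1(0);
    struct epoll_event  ev = { 0 }, events[64];

    ev.events = EPOLLIN;
    ev.data.fd = listen_fd;
    epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

    for ( ;; ) {
        int nready = epoll_wait(ep, events, 64, -1);

        /* one handler at a time: while one runs, no other event is touched */
        for (int i = 0; i < nready; i++) {
            int fd = events[i].data.fd;

            if (fd == listen_fd) {
                int client = accept(listen_fd, NULL, NULL);

                ev.events = EPOLLIN;
                ev.data.fd = client;
                epoll_ctl(ep, EPOLL_CTL_ADD, client, &ev);
            } else {
                handle_client(fd);
            }
        }
    }
}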
Comments (1)
There is a way to test this. Write a filter that sleeps, and use it in the filter chain. Then test to see if you can get nginx to serve a request while a previous request is sleeping.
Then run the test again, but this time don't let the filter sleep; make it wait using select timeouts, like so:
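The code sample that followed appears to be cut off here. One common way to wait on a select timeout, which may be what is meant, is a select() call with no descriptors at all, only a timeout; the 5-second value below is just an arbitrary example:

#include <sys/select.h>

/* drop this into the test filter in place of sleep(5): with no fds to
 * watch, select() simply blocks the calling thread until the timeout
 * expires, much like sleep(5) does */
struct timeval  tv;

tv.tv_sec = 5;
tv.tv_usec = 0;

(void) select(0, NULL, NULL, NULL, &tv);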