ASP.NET, IIS/CLR threads, and requests in relation to synchronous vs. asynchronous programming
I'm just trying to clear up some concepts here. If anyone is willing to share their expertise on this matter, it would be greatly appreciated.
The following is my understanding of how IIS works in relation to threads; please correct me if I'm wrong.
HTTP.sys
As I understand it, for IIS 6.0 (I'll leave IIS 7.0 aside for now), the web browser makes a request, which gets picked up by the HTTP.sys kernel driver; HTTP.sys hands it over to IIS 6.0's thread pool (an I/O thread?) and thus frees itself up.
IIS 6.0 Thread/ThreadPool
An IIS 6.0 thread in turn hands it over to ASP.NET, which returns a temporary HSE_STATUS_PENDING to IIS 6.0; that frees up the IIS 6.0 thread, and the request is then forwarded to a CLR thread.
CLR Thread/ThreadPool
When ASP.NET picks up a free thread from the CLR thread pool, it executes the request. If no CLR threads are available, the request gets queued up in the application-level queue (which has bad performance).
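A minimal sketch of this synchronous model, written as a Python analogy rather than ASP.NET code (the pool size and `handle_request` are invented for illustration): each request occupies one worker thread for its entire lifetime, and requests that arrive while every thread is busy wait in a queue.

```python
# Synchronous model, Python analogy: a fixed-size pool of "CLR threads",
# where each request holds one thread for its whole lifetime, and excess
# requests wait in a queue (the analogue of the application-level queue).
import time
from concurrent.futures import ThreadPoolExecutor

POOL_SIZE = 2          # invented stand-in for the CLR thread pool size
pool = ThreadPoolExecutor(max_workers=POOL_SIZE)

def handle_request(request_id: int) -> str:
    # In synchronous mode the thread is held for the whole request,
    # even while it is merely waiting on I/O.
    time.sleep(0.1)    # simulated blocking work (db call, file read, ...)
    return f"response {request_id}"

# Submit more requests than there are threads: the extras sit in the
# executor's internal queue until a worker frees up.
futures = [pool.submit(handle_request, i) for i in range(5)]
results = [f.result() for f in futures]
print(results)
```

With two workers and five requests, the work drains in batches of at most two at a time; the queue absorbs the overflow, which is exactly the role of the application-level queue described above.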
So based on the previous understanding, my questions are the following.
In synchronous mode, does that mean 1 request per 1 CLR thread?
*) If so, how many CONCURRENT requests can be served on 1 CPU? Or should I ask the reverse: how many CLR threads are allowed per CPU? Say 50 CLR threads are allowed; does that mean it's limited to serving 50 requests at any given time? Confused.
If I set the "requestQueueLimit" in the "processModel" configuration to 5000, what does it really mean? Can you queue up 5000 requests in the application queue? Isn't that really bad? Why would you ever set it so high, since the application queue has bad performance?
If you are programming an asynchronous page, where exactly in the above process does the benefit kick in?
I researched and see that by default, IIS 6.0's thread pool size is 256. Say 5000 concurrent requests come in: they're handled by 256 IIS 6.0 threads, and each of those 256 threads hands off to a CLR thread, whose default count I'm guessing is even lower. Isn't that itself asynchronous? A bit confused. In addition, where and when does the bottleneck start to show up in synchronous mode? And in asynchronous mode? (Not sure if I'm making any sense; I'm just confused.)
What happens when the IIS thread pool (all 256 threads) is busy?
What happens when all CLR threads are busy? (I assume all requests are then queued up in the application-level queue.)
What happens when the application queue exceeds the requestQueueLimit?
Thanks a lot for reading, greatly appreciate your expertise on this matter.
1 Answer
You're pretty spot-on with the handoff process to the CLR, but here's where things get interesting:
If every step of the request is CPU-bound/otherwise synchronous, yes: that request will suck up that thread for its lifetime.
However, if any part of the request processing tasks out to anything asynchronous or even anything I/O related outside of purely managed code (db connection, file read/write, etc), it is possible, if not probable, that this will happen:
Request comes into CLR-land, picked up by thread A
Request calls out to filesystem
Under the hood, the transition to unmanaged code happens at some level, which results in an I/O completion port thread (different from a normal thread pool thread) being allocated in a callback-like manner.
Once that handoff occurs, Thread A returns to the thread pool, where it is able to service other requests.
Once the I/O task completes, execution is re-queued, and let's say Thread A is busy - Thread B picks up the request.
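The steps above can be sketched as a Python analogy (not the actual CLR machinery; `fake_io` and the one-worker "I/O" pool are invented stand-ins): the starting thread registers a continuation and moves on, and the continuation runs on whichever thread completed the I/O.

```python
# Async handoff, Python analogy: the submitting thread kicks off the
# "I/O" and returns immediately; when it finishes, the continuation may
# run on a different thread than the one that started the request.
import threading
import time
from concurrent.futures import ThreadPoolExecutor

io_pool = ThreadPoolExecutor(max_workers=1)   # stand-in for I/O completion threads
resumed_on = []

def fake_io() -> str:
    time.sleep(0.05)                          # simulated disk/network wait
    return "file contents"

def on_io_complete(future):
    # The continuation: executed by whichever thread completed the I/O,
    # analogous to "Thread B picks up the request".
    resumed_on.append(threading.current_thread().name)

starting_thread = threading.current_thread().name
future = io_pool.submit(fake_io)
future.add_done_callback(on_io_complete)
# The starting thread is free right here; it is NOT blocked on the I/O.
io_pool.shutdown(wait=True)                   # wait so the demo can observe the result
print(starting_thread, "->", resumed_on[0])
```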
This sort of "fun" behavior is also called "Thread Agility", and is one reason to avoid using ANYTHING that is Thread Static in an ASP.NET application if you can.
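The hazard with thread-static state follows directly from that agility. A sketch using Python's `threading.local` as a stand-in for `[ThreadStatic]` (`request_state` and the helper functions are invented for illustration): state stashed on one thread is simply absent when execution resumes on another.

```python
# Why "thread static" state is dangerous under thread agility, as a
# Python analogy: per-request data stored on thread A does not exist
# when the rest of the request runs on thread B.
import threading
from concurrent.futures import ThreadPoolExecutor

request_state = threading.local()

def start_request():
    request_state.user = "alice"          # stored on THIS thread only

def finish_request() -> str:
    # If this runs on a different thread, the attribute does not exist.
    return getattr(request_state, "user", "<missing>")

start_request()                           # runs on the main thread
with ThreadPoolExecutor(max_workers=1) as pool:
    seen = pool.submit(finish_request).result()   # runs on a pool thread
print(seen)   # -> "<missing>": the pool thread never saw 'user'
```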
Now, to some of your questions:
The request queue limit is the number of requests that can be "in line" before requests start getting flat-out dropped. If you had, say, an exceptionally "bursty" application, where you may get a LOT of very short-lived requests, setting this high would prevent dropped requests, since they would bunch up in the queue but drain just as quickly.
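The drop-when-full behavior can be sketched with a bounded queue (a Python analogy; `REQUEST_QUEUE_LIMIT = 3` is an invented stand-in for the real setting). Requests queue while there is room; once the limit is hit, new arrivals are rejected outright (ASP.NET answers these with HTTP 503 "Server Too Busy"):

```python
# requestQueueLimit analogy: a bounded queue absorbs bursts, and
# anything arriving while it is full is rejected immediately.
from queue import Queue, Full

REQUEST_QUEUE_LIMIT = 3                    # invented stand-in for requestQueueLimit
app_queue = Queue(maxsize=REQUEST_QUEUE_LIMIT)

accepted, dropped = [], []
for request_id in range(5):                # a burst of 5 requests
    try:
        app_queue.put_nowait(request_id)   # queue if there is room...
        accepted.append(request_id)
    except Full:
        dropped.append(request_id)         # ...otherwise drop (a 503 in ASP.NET)
print("accepted:", accepted, "dropped:", dropped)
```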
Asynchronous handlers allow you to create the same "call me when you're done" type of behavior that the above scenario has; for example, if you, say, needed to make a web service call, calling it synchronously via some HttpWebRequest call would by default block until completion, locking up that thread until it was done. Calling the same service asynchronously (or via an asynchronous handler, any Begin/EndXXX pattern...) allows you some control over who actually gets tied up - your calling thread can continue performing actions until that web service returns, which might actually be after the request has completed.
One thing to note is there is but one ThreadPool - all non-IO threads are pulled from there, so if you move everything to asynchronous processing, you may just bite yourself by exhausting your threadpool doing background work, and not servicing requests.
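That exhaustion pitfall can be sketched the same way (a Python analogy; `background_work` and the two-worker pool are invented): if offloaded "asynchronous" work lands on the same pool that serves requests, background tasks can occupy every worker, and incoming requests queue behind them.

```python
# Pool exhaustion analogy: background work pushed onto the ONE shared
# pool ties up every worker, so a newly arriving request has to wait.
import time
from concurrent.futures import ThreadPoolExecutor

shared_pool = ThreadPoolExecutor(max_workers=2)   # the one shared thread pool

def background_work():
    time.sleep(0.2)                               # long-running offloaded work

def handle_request() -> float:
    return time.monotonic()                       # when the request actually ran

# Fill the pool with background work, then submit a request.
for _ in range(2):
    shared_pool.submit(background_work)
submitted_at = time.monotonic()
waited = shared_pool.submit(handle_request).result() - submitted_at
shared_pool.shutdown(wait=True)
print(f"request waited {waited:.2f}s behind background work")
```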