What does "concurrent requests" really mean?

Published on 2024-10-17 19:36:15


When we talk about capacity of a web application, we often mention the concurrent requests it could handle.

As discussed in another question of mine, Ethernet uses TDM (Time Division Multiplexing), so no two signals can travel along the wire simultaneously. Therefore, if the web server is connected to the outside world through an Ethernet connection, there will be literally no concurrent requests at all: all requests will come in one after another.

But if the web server is connected to the outside world through something like a wireless network card, I believe multiple signals could arrive at the same time as electromagnetic waves. Only in that situation would there be real concurrent requests to speak of.

Am I right on this?

Thanks.



Comments (3)

横笛休吹塞上声 2024-10-24 19:36:15


I imagine "concurrent requests" for a web application doesn't get down to the link level. It's more a question of the processing of a request by the application and how many requests arrive during that processing.

For example, if a request takes on average 2 seconds to fulfill (from receiving it at the web server to processing it through the application to sending back the response) then it could need to handle a lot of concurrent requests if it gets many requests per second.

The requests need to overlap and be handled concurrently, otherwise the queue of requests would just fill up indefinitely. This may seem like common sense, but for a lot of web applications it's a real concern because the flood of requests can bog down a resource for the application, such as a database. Thus, if the application has poor database interactions (overly complex procedures, poor indexing/optimization, a slow link to a database shared by many other applications, etc.) then that creates a bottleneck which limits the number of concurrent requests the application can handle, even though the application itself should be able to handle them.
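The relationship this answer describes between request rate, processing time, and in-flight requests is Little's law: average concurrency equals arrival rate times average service time. A minimal sketch, using the answer's 2-second figure and an illustrative arrival rate of 50 requests/second (that number is an assumption, not from the answer):

```python
# Little's law: average concurrent requests = arrival rate * average service time.

def concurrent_requests(arrival_rate_per_sec: float, avg_service_time_sec: float) -> float:
    """Average number of requests in flight at any moment (Little's law)."""
    return arrival_rate_per_sec * avg_service_time_sec

# The answer's example: each request takes ~2 s to fulfil.
# At an assumed 50 requests/second, ~100 requests are in flight at once.
print(concurrent_requests(50, 2.0))  # 100.0
```

This is why a 2-second database bottleneck matters: halving the service time halves the concurrency the application must sustain at the same traffic level.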

软甜啾 2024-10-24 19:36:15


Imagine an HTTP server listening on port 80; here is what happens:

  • a client connects to the server to request some page; it is connecting from some origin IP address, using some origin local port.

  • the OS (actually the network stack) looks at the incoming request's destination IP (since the server may have more than one NIC) and destination port (80), and verifies that some application is registered to handle data on that port (the http server). The combination of 4 numbers (origin IP, origin port, destination IP, port 80) uniquely identifies a connection. If such a connection does not exist yet, a new one is added to the network stack's internal table and a connection request is passed on to the http server's listening socket. From now on, the network stack just passes data for that connection on to the application.

  • Multiple clients can send requests, and the above happens for each one. So from the network's perspective, everything happens serially, since data arrives one packet at a time.

  • From the software perspective, the http server is listening to incoming requests. The number of requests it can have queued before the clients start getting errors is determined by the programmer based on the hardware capacity (this is the first bit of concurrency: there can be multiple requests waiting to be processed). For each one it will create a new socket (as fast as possible in order to continue emptying the request queue) and let the actual processing of the request be done by another part of the application (different threads). These processing routines will (ideally) spend most of their time waiting for data to arrive and react (ideally) quickly to it.

  • Since the processing of data is usually many times faster than the network I/O, the server can handle many requests while processing network traffic, even if the hardware consists of only one processor. Multiple processors increase this capability. So from the software perspective, everything happens concurrently.

  • How the actual processing of the data is implemented is where the key to performance lies (you want it to be as efficient as possible). Several possibilities exist (asynchronous socket operations as provided by the Socket class, a thread pool, dedicated threads, the new parallel features in .NET 4).

温柔戏命师 2024-10-24 19:36:15


It's true that no two packets can arrive at the exact same time (unless multiple network cards are in use, per Gabe's comment). However, a web request usually requires a number of packets, and the arrival of those packets is interleaved when multiple requests come in at nearly the same time (whether over wired or wireless access). Also, the processing of these requests can overlap.

Add multi-threading (or multiple processors / cores) to the picture, and you can see how lengthy operations such as reading from a database (which requires a lot of waiting around for a response) can easily overlap even though the individual packets are arriving in a serial fashion.

Edit: Added note above to incorporate Gabe's feedback.
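The overlap described in this answer can be shown with a small sketch: each simulated request spends most of its time waiting (a stand-in for the database round trip), so ten requests finish in roughly the wall-clock time of one, even though they are started one after another, just as packets arrive one after another on the wire. The 0.2 s delay and the request count are illustrative assumptions:

```python
# Requests are started serially, but their waits overlap, so total time is
# close to one request's latency rather than the sum of all of them.

import asyncio
import time

async def handle_request(i: int) -> str:
    await asyncio.sleep(0.2)  # "database" wait; other requests run meanwhile
    return f"response {i}"

async def main() -> float:
    start = time.monotonic()
    # Started one after another, like packets arriving serially on the wire...
    tasks = [asyncio.create_task(handle_request(i)) for i in range(10)]
    await asyncio.gather(*tasks)  # ...but their waits overlap.
    return time.monotonic() - start

elapsed = asyncio.run(main())
print(f"10 requests in {elapsed:.2f}s")  # close to 0.2 s, not 10 * 0.2 = 2 s
```

The same effect appears with threads or multiple cores; the key is that waiting, not computing, dominates each request's lifetime.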
