What happens when a TCP/UDP server publishes faster than the client can consume?

Posted 2024-08-17 02:03:48

I am trying to get a handle on what happens when a server publishes (over TCP, UDP, etc.) faster than a client can consume the data.

Within a program I understand that if a queue sits between the producer and the consumer, it will start to get larger. If there is no queue, then the producer simply won't be able to produce anything new, until the consumer can consume (I know there may be many more variations).

I am not clear on what happens when data leaves the server (which may be a different process, machine or data center) and is sent to the client. If the client simply can't respond to the incoming data fast enough, assuming the server and the consumer are very loosely coupled, what happens to the in-flight data?

Where can I read to get details on this topic? Do I just have to read the low level details of TCP/UDP?

Thanks

Comments (6)

并安 2024-08-24 02:03:48

With TCP there's a TCP window which is used for flow control. TCP only allows a certain amount of data to remain unacknowledged at a time. If a server is producing data faster than a client is consuming it, the amount of unacknowledged data will increase until the TCP window is 'full'. At that point the sending TCP stack will wait and will not send any more data until the client acknowledges some of the pending data.
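This back-pressure is easy to observe directly. A minimal sketch (Python on a POSIX-like system; the exact byte count depends on the kernel's default buffer sizes): a sender writes to a peer that never reads, and once the peer's advertised window plus the local send buffer fill up, a non-blocking `send()` refuses further data.

```python
import socket

# Connected TCP pair on localhost; the receiver deliberately never reads.
listener = socket.create_server(("127.0.0.1", 0))
sender = socket.create_connection(listener.getsockname())
receiver, _ = listener.accept()

sender.setblocking(False)
total = 0
try:
    while True:
        # Keeps succeeding until the receiver's window plus the local
        # send buffer are full, then raises BlockingIOError.
        total += sender.send(b"x" * 4096)
except BlockingIOError:
    pass  # a blocking socket would simply have stalled here instead

print(f"buffered {total} bytes before the send side filled up")
for s in (sender, receiver, listener):
    s.close()
```

With a blocking socket the same condition shows up as `send()` not returning until the receiver drains some data, which is the "lock step" behaviour described below.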

With UDP there's no such flow control system; it's unreliable after all. The UDP stacks on both client and server are allowed to drop datagrams if they feel like it, as are all routers between them. If you send more datagrams than the link can deliver to the client or if the link delivers more datagrams than your client code can receive then some of them will get thrown away. The server and client code will likely never know unless you have built some form of reliable protocol over basic UDP. Though actually you may find that datagrams are NOT thrown away by the network stack and that the NIC drivers simply chew up all available non-paged pool and eventually crash the system (see this blog posting for more details).
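The drop behaviour can be demonstrated in a few lines. This is a sketch assuming localhost and a kernel that honours a small `SO_RCVBUF` request; the exact counts vary by OS:

```python
import socket

# Receiver with a deliberately tiny buffer that reads nothing while
# datagrams arrive.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4096)
rx.bind(("127.0.0.1", 0))

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = 1000
for _ in range(sent):
    tx.sendto(b"x" * 512, rx.getsockname())

# Drain whatever survived; anything that didn't fit was silently dropped,
# and neither side was told about it.
rx.setblocking(False)
received = 0
try:
    while True:
        rx.recvfrom(2048)
        received += 1
except BlockingIOError:
    pass

print(f"sent {sent}, received {received}; the rest were dropped")
tx.close()
rx.close()
```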

Back with TCP, how your server code deals with the TCP Window becoming full depends on whether you are using blocking I/O, non-blocking I/O or async I/O.

  • If you are using blocking I/O then your send calls will block and your server will slow down; effectively your server is now in lock step with your client. It can't send more data until the client has received the pending data.

  • If the server is using non-blocking I/O then you'll likely get an error return telling you that the call would have blocked; you can do other things, but your server will need to resend the data later...

  • If you're using async I/O then things may be more complex. With async I/O using I/O Completion Ports on Windows, for example, you won't notice anything different at all. Your overlapped sends will still be accepted just fine, but you might notice that they are taking longer to complete. The overlapped sends are being queued on your server machine and are using memory for your overlapped buffers and probably using up 'non-paged pool' as well. If you keep issuing overlapped sends then you run the risk of exhausting non-paged pool memory or using a potentially unbounded amount of memory as I/O buffers. Therefore, with async I/O and servers that COULD generate data faster than their clients can consume it, you should write your own flow control code that you drive using the completions from your writes. I have written about this problem on my blog here and here and my server framework provides code which deals with it automatically for you.
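The "drive your own flow control from write completions" idea translates to most async frameworks. As a rough sketch using Python's asyncio (not the Windows IOCP framework the answer refers to): `drain()` suspends the producer whenever more than a high-water mark of data is queued but not yet sent, which keeps buffer usage bounded no matter how slow the client is.

```python
import asyncio

async def main():
    done = asyncio.Event()

    async def slow_client(reader, writer):
        # Consume slowly so the sender's buffers back up.
        while await reader.read(1024):
            await asyncio.sleep(0.001)
        done.set()

    server = await asyncio.start_server(slow_client, "127.0.0.1", 0)
    host, port = server.sockets[0].getsockname()

    _, writer = await asyncio.open_connection(host, port)
    # Pause the producer once 16 KiB is queued but not yet sent.
    writer.transport.set_write_buffer_limits(high=16 * 1024)
    for _ in range(100):
        writer.write(b"x" * 4096)
        await writer.drain()  # suspends while above the high-water mark
    writer.close()
    await writer.wait_closed()
    await done.wait()
    server.close()
    await server.wait_closed()
    return True

ok = asyncio.run(main())
print("all data sent with bounded buffering:", ok)
```

Without the `await writer.drain()` line, the loop would queue all 400 KiB immediately and the unsent data would simply accumulate in memory, which is the unbounded-buffering risk the answer warns about.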

As far as the data 'in flight' is concerned, the TCP stacks in both peers will ensure that the data arrives as expected (i.e. in order and with nothing missing); they'll do this by resending data as and when required.

长安忆 2024-08-24 02:03:48

TCP has a feature called flow control.

As part of the TCP protocol, the client tells the server how much more data can be sent without filling up the buffer. If the buffer fills up, the client tells the server that it can't send more data yet. Once the buffer is emptied out a bit, the client tells the server it can start sending data again. (This also applies to when the client is sending data to the server).

UDP, on the other hand, is completely different. UDP itself does nothing like this and will start dropping data if it comes in faster than the process can handle it. It is up to the application to add logic to the application protocol if it can't afford to lose data (i.e. if it requires a 'reliable' data stream).
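As a sketch of what "adding logic to the application protocol" can look like, here is a toy stop-and-wait scheme over UDP. The framing is hypothetical (a 4-byte sequence-number prefix, with the receiver echoing it back as an ACK); real protocols pipeline multiple packets in flight and do much more.

```python
import socket
import threading

def reliable_send(sock, addr, seq, payload, timeout=0.2, retries=5):
    """Send one datagram and wait for its ACK, retransmitting on timeout."""
    sock.settimeout(timeout)
    packet = seq.to_bytes(4, "big") + payload
    for _ in range(retries):
        sock.sendto(packet, addr)
        try:
            ack, _ = sock.recvfrom(4)
            if int.from_bytes(ack, "big") == seq:
                return True
        except socket.timeout:
            continue  # datagram or ACK lost: retransmit
    return False

def ack_receiver(sock, count, out):
    """Receive `count` datagrams, ACKing each by echoing its sequence number."""
    for _ in range(count):
        data, addr = sock.recvfrom(2048)
        out.append(data[4:])
        sock.sendto(data[:4], addr)

received = []
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
t = threading.Thread(target=ack_receiver, args=(rx, 3, received))
t.start()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ok = all(reliable_send(tx, rx.getsockname(), i, b"msg%d" % i) for i in range(3))
t.join()
print("delivered:", ok, received)
tx.close()
rx.close()
```

Note that the ACK is also what gives the sender flow control: it can't get more than one datagram ahead of the receiver, which is the stop-and-wait analogue of the TCP window described above.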

书间行客 2024-08-24 02:03:48

If you really want to understand TCP, you pretty much need to read an implementation in conjunction with the RFC; real TCP implementations are not exactly as specified. For example, Linux has a 'memory pressure' concept which protects against running out of the kernel's (rather small) pool of DMA memory, and also prevents one socket from running the others out of buffer space.

冷了相思 2024-08-24 02:03:48

The server can't be faster than the client for a long time. After it has been faster than the client for a while, the system where it is hosted will block it when it writes on the socket (writes can block on a full buffer just as reads can block on an empty buffer).

北座城市 2024-08-24 02:03:48

With TCP, this cannot happen.

In case of UDP, packets will be lost.

洛阳烟雨空心柳 2024-08-24 02:03:48

The TCP Wikipedia article shows the TCP header format, which is where the window size and acknowledgment sequence number are kept. The rest of the fields and the description there should give a good overview of how transmission throttling works. RFC 793 specifies the basic operation; pages 41 and 42 detail the flow control.
