Socket throttling because the client isn't reading data fast enough?

Posted 2024-08-04 17:44:07


I have a client/server connection over a TCP socket, with the server writing to the client as fast as it can.

Looking over my network activity, the production client receives data at around 2.5 Mb/s.

A new lightweight client that I wrote just to read and benchmark the rate receives at about 5.0 Mb/s (which is probably around the maximum speed the server can transmit).

I was wondering what governs the rates here, since the client sends no data to the server to tell it about any rate limits.
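For reference, here is a minimal sketch of the kind of read-only benchmark client described above (Python; the host, port, and chunk size are placeholders, not values from the post):

```python
import socket
import time

HOST, PORT = "127.0.0.1", 9000   # hypothetical address of the server
CHUNK = 64 * 1024                # read in 64 KiB chunks

def benchmark() -> None:
    total = 0
    start = time.monotonic()
    with socket.create_connection((HOST, PORT)) as sock:
        while True:
            data = sock.recv(CHUNK)
            if not data:             # server closed the connection
                break
            total += len(data)
    elapsed = time.monotonic() - start
    # Report the average receive rate in megabits per second
    print(f"received {total} bytes in {elapsed:.1f}s "
          f"({total * 8 / elapsed / 1e6:.2f} Mb/s)")

if __name__ == "__main__":
    benchmark()
```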


Comments (3)

菊凝晚露 2024-08-11 17:44:07


In TCP it is the client. If the server's TCP window is full, it needs to wait until more ACKs arrive from the client. This is hidden from you inside the TCP stack, but TCP provides guaranteed delivery, which also means the server can't send data faster than the client is processing it.
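A small self-contained sketch of that effect (the port, chunk size, and sleep are made up for the demo, not taken from the answer): because sendall() blocks once the client's receive window and the kernel buffers fill, the server's measured send rate tracks the client's read rate.

```python
import socket
import threading
import time

PORT = 9100                       # hypothetical port for this demo
CHUNK = 64 * 1024

def server() -> None:
    srv = socket.create_server(("127.0.0.1", PORT))
    conn, _ = srv.accept()
    payload = b"x" * CHUNK
    sent = 0
    start = time.monotonic()
    while time.monotonic() - start < 3:
        conn.sendall(payload)     # blocks once the client's window/buffers fill
        sent += len(payload)
    elapsed = time.monotonic() - start
    print(f"server averaged {sent * 8 / elapsed / 1e6:.1f} Mb/s")
    conn.close()
    srv.close()

def slow_client() -> None:
    with socket.create_connection(("127.0.0.1", PORT)) as sock:
        while True:
            if not sock.recv(CHUNK):   # empty read means the server closed
                break
            time.sleep(0.01)           # simulate a slow consumer

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                        # give the server time to start listening
slow_client()
```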

笑梦风尘 2024-08-11 17:44:07


TCP has flow control and it happens automatically. Read about it at http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Flow_control

When the pipe fills due to flow control, the server's socket write operations won't complete until the flow control is relieved.
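One way to observe that back-pressure directly (a sketch under assumed conditions, not part of the answer) is to put the server-side socket into non-blocking mode: once the client's window and the local send buffer are full, send() raises BlockingIOError at exactly the point where a blocking write would stall.

```python
import socket

def push_until_blocked(conn: socket.socket) -> int:
    """Write to 'conn' (an already-connected socket whose peer has stopped
    reading -- a hypothetical setup, for illustration) until flow control
    and the local send buffer stop accepting more data."""
    conn.setblocking(False)
    payload = b"x" * 65536
    queued = 0
    while True:
        try:
            queued += conn.send(payload)   # may accept only part of the payload
        except BlockingIOError:
            # The send buffer is full and the peer's advertised window is
            # exhausted: a blocking write would stop completing right here.
            break
    conn.setblocking(True)
    return queued
```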

木落 2024-08-11 17:44:07


The server is writing data at 5.0 Mb/s, but if your client is the bottleneck here, then the server has to wait until the data in its send buffer is delivered to the client, or until enough space is freed up to put in more data.

Since you said the lightweight client was able to receive at 5.0 Mb/s, it is the post-receive operations in your production client that you have to check. If you are receiving data and then processing it before you read more data, that processing is probably the bottleneck.

It is better to receive data asynchronously: as soon as one receive completes, ask the client socket to start receiving again, while you process the received data on a separate thread-pool thread. This way your client is always ready to receive incoming data, and the server can send at full speed.
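A minimal sketch of that pattern in Python (the addresses and the process() handler are hypothetical, not from the answer): one thread does nothing but drain the socket, and a thread pool handles the per-chunk work, so receiving never waits on processing.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

HOST, PORT = "127.0.0.1", 9000   # hypothetical server address
CHUNK = 64 * 1024

def process(data: bytes) -> None:
    # Placeholder for whatever per-chunk work the real client does.
    pass

def run_client() -> None:
    with ThreadPoolExecutor(max_workers=4) as pool, \
         socket.create_connection((HOST, PORT)) as sock:
        while True:
            data = sock.recv(CHUNK)      # the reader thread only reads
            if not data:                 # server closed the connection
                break
            # Hand the chunk to a worker so processing never blocks the
            # next recv(); note the chunks buffer in memory if workers fall behind.
            pool.submit(process, data)

if __name__ == "__main__":
    run_client()
```

If the workers cannot keep up, the submitted chunks accumulate in memory, so a bounded queue between the reader and the workers may be worth adding.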
