Socket receive call freezes the thread for approximately 5 seconds
I have a client-server architecture implemented in C++ with blocking sockets under Windows 7. Everything runs well up to a certain level of load. If several clients (e.g. more than 4) are receiving or sending megabytes of data, the communication with one client sometimes freezes for approximately 5 seconds. All other clients keep working as expected in that case.
The buffer size is 8192 bytes and logging on the server side reads as follows:
TimeStamp (s.ms) - received bytes
…
1299514524.618 - 8192
1299514524.618 - 8192
1299514524.618 - 0004
1299514529.641 - 8192
1299514529.641 - 3744
1299514529.641 - 1460
1299514529.641 - 1460
1299514529.641 - 8192
…
It seems that only 4 bytes could be read within those 5 seconds. Furthermore, I found that the freezing time is always around 5 seconds - never 4 or less and never 6 or more...
Any ideas?
Best regards
Michael
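[Editor's note] For illustration only, here is a minimal sketch of the kind of blocking Winsock receive loop and per-call logging the question describes. The function and variable names are assumptions, not the poster's actual code, and the timestamp is milliseconds since boot rather than a Unix epoch time; it is only meant to make a 5-second gap between consecutive recv() calls visible.

```cpp
// Minimal sketch (illustrative, not the poster's code): a blocking recv()
// loop that logs the byte count of every call in roughly the format above.
#include <winsock2.h>
#include <windows.h>
#include <cstdio>

#pragma comment(lib, "ws2_32.lib")

void receiveLoop(SOCKET client)
{
    char buffer[8192];                                   // same buffer size as in the question
    for (;;)
    {
        int received = recv(client, buffer, sizeof(buffer), 0);  // blocks until data arrives
        if (received <= 0)                                // 0 = peer closed, SOCKET_ERROR = failure
            break;

        // Log "seconds.milliseconds - bytes". GetTickCount64() counts
        // milliseconds since boot, which is enough to spot a ~5 s stall
        // between two consecutive recv() calls.
        ULONGLONG ms = GetTickCount64();
        printf("%llu.%03llu - %04d\n", ms / 1000ULL, ms % 1000ULL, received);

        // ... hand 'received' bytes of 'buffer' to the application here ...
    }
}
```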
Comments (2)
This is a Windows bug.
KB 2020447 - Socket communication using the loopback address will intermittently encounter a five second delay
A hotfix is available in
KB 2861819 - Data transfer stops for five seconds in a Windows Socket-based application in Windows 7 and Windows Server 2008 R2
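[Editor's note] Since KB 2020447 specifically concerns connections over the loopback address, a small diagnostic check can help confirm whether it applies. The following sketch is an assumption (IPv4 only, illustrative names), not part of either KB article; it merely queries the peer address of an accepted socket and tests whether it is in the 127.0.0.0/8 loopback range.

```cpp
// Diagnostic sketch (illustrative): does this connection use the IPv4
// loopback address, i.e. the scenario described by KB 2020447?
#include <winsock2.h>
#include <cstdio>

#pragma comment(lib, "ws2_32.lib")

bool peerIsLoopback(SOCKET s)
{
    sockaddr_in peer = {};
    int len = sizeof(peer);
    if (getpeername(s, reinterpret_cast<sockaddr*>(&peer), &len) != 0)
        return false;                       // could not query the peer address
    if (peer.sin_family != AF_INET)
        return false;                       // this sketch only handles IPv4

    // 127.0.0.0/8 is the IPv4 loopback range (127.0.0.1 is the common case).
    return (ntohl(peer.sin_addr.s_addr) >> 24) == 127;
}

// Usage idea: if peerIsLoopback(clientSocket) is true and intermittent
// ~5 second stalls are observed, the hotfix from KB 2861819 is the
// suggested remedy.
```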
I have had this problem in situations of high load: the last TCP packet sometimes arrived before the second-to-last one, and because there was no default handling in place for re-ordering the data, this out-of-order arrival produced receive behavior similar to what you describe.
The solution we adopted was to distribute the load across more servers.