Faster way to communicate using TcpClient?
I'm writing a client/server application in C#, and it's going great. So far, everything works and it's all pretty robust. My problem is that I run into some delays when sending packets across the connection.
On the client side I'm doing this:
NetworkStream ns = tcpClient.GetStream();
// Send packet: 4-byte length prefix and payload concatenated into one buffer,
// sent with a single Write()
byte[] sizePacket = BitConverter.GetBytes(request.Length);
byte[] requestWithHeader = new byte[sizePacket.Length + request.Length];
sizePacket.CopyTo(requestWithHeader, 0);
request.CopyTo(requestWithHeader, sizePacket.Length);
ns.Write(requestWithHeader, 0, requestWithHeader.Length);
// Receive response (note: this single Read() assumes all four header bytes
// arrive together, which TCP doesn't strictly guarantee)
ns.Read(sizePacket, 0, sizePacket.Length);
int responseLength = BitConverter.ToInt32(sizePacket, 0);
byte[] response = new byte[responseLength];
int bytesReceived = 0;
while (bytesReceived < responseLength)
{
    int bytesRead = ns.Read(response, bytesReceived, responseLength - bytesReceived);
    if (bytesRead == 0)
        throw new EndOfStreamException("Connection closed before the full response arrived.");
    bytesReceived += bytesRead;
}
(Left out some exception catching etc.) The server does the opposite, i.e. it blocks on NetworkStream.Read() until it has a whole request, then processes it and sends a response using Write().
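Roughly, the server loop looks like this (a sketch; client is the TcpClient returned by TcpListener.AcceptTcpClient(), and ProcessRequest stands in for the actual request handling):
// Sketch of the server loop described above. Assumes client came from
// TcpListener.AcceptTcpClient(); ProcessRequest is a placeholder.
NetworkStream ns = client.GetStream();
byte[] sizePacket = new byte[sizeof(int)];
int headerBytes = 0;
while (headerBytes < sizePacket.Length)       // read the 4-byte length prefix
{
    int n = ns.Read(sizePacket, headerBytes, sizePacket.Length - headerBytes);
    if (n == 0) return;                       // client closed the connection
    headerBytes += n;
}
int requestLength = BitConverter.ToInt32(sizePacket, 0);
byte[] request = new byte[requestLength];
int requestBytes = 0;
while (requestBytes < requestLength)          // block until the whole request is in
{
    int n = ns.Read(request, requestBytes, requestLength - requestBytes);
    if (n == 0) return;
    requestBytes += n;
}
byte[] response = ProcessRequest(request);    // placeholder for the real work
byte[] responseSize = BitConverter.GetBytes(response.Length);
ns.Write(responseSize, 0, responseSize.Length);  // size header...
ns.Write(response, 0, response.Length);          // ...then payload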
The raw speed of Write()/Read() isn't a problem (i.e. sending large packets is fast), but sending several small packets, one after another, without closing the connection, can be terribly slow (delays of 50-100 ms). What's strange is that these delays show up on LAN connections with typical ping times <1 ms, but they do not occur if the server is running on localhost, even though the ping time would effectively be the same (at least the difference should not be on the order of 100 ms). That would make sense to me if I were reopening the connection on every packet, causing lots of handshaking, but I'm not. It's just as if the server going into a wait state throws it out of sync with the client, and then it stumbles a bit as it reestablishes what is essentially a lost connection.
So, am I doing it wrong? Is there a way to keep the connection between TcpServer and TcpClient synchronised so the server is always ready to receive data? (And vice versa: sometimes processing the request from the client takes a few ms, and then the client doesn't seem to be ready to receive the response from the server until it's had a few moments to wake up after blocking on Read().)
It turns out my server and client were not completely symmetrical after all. I had noticed, but I didn't think it mattered at all. Apparently it's a huge deal. Specifically, the server wrote the size header and the payload with two separate calls to Write(), roughly like this:
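// The size header and the payload went out as two separate Write() calls,
// i.e. two small segments on the wire (reconstructed sketch)
ns.Write(sizePacket, 0, sizePacket.Length);
ns.Write(response, 0, response.Length);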
Which I changed into this: the same data, but concatenated into a single buffer and sent with one Write(), mirroring the client:
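// Same data, but coalesced into one buffer and sent with a single Write(),
// mirroring the client side (reconstructed sketch)
byte[] responseWithHeader = new byte[sizePacket.Length + response.Length];
sizePacket.CopyTo(responseWithHeader, 0);
response.CopyTo(responseWithHeader, sizePacket.Length);
ns.Write(responseWithHeader, 0, responseWithHeader.Length);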
And now the delay is completely gone, or at least it's no longer measurable in milliseconds. So that's something like a 100x speedup right there. \o/
It's still odd because it's writing exactly the same data to the socket as before, so I guess the socket receives some secret metadata during the write operation, which is then somehow communicated to the remote socket, which may interpret it as an opportunity to take a nap. Either that, or the first write puts the socket into receive mode, causing it to trip up when it's then asked to send again before it has received anything.
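For what it's worth, the usual explanation for this exact symptom is Nagle's algorithm on the sending side interacting with TCP delayed acknowledgements on the receiving side: the second small write is held back until the first one is acknowledged, which can cost on the order of 100-200 ms per exchange. Coalescing the writes sidesteps that, as above; if coalescing is ever impractical, the commonly suggested alternative is to disable Nagle on the connection, trading a few more small packets on the wire for latency:
// Disable Nagle's algorithm so small writes go out immediately instead of
// waiting for the previous segment to be acknowledged. Assumes tcpClient is
// the connected TcpClient from the snippets above.
tcpClient.NoDelay = true;
// Equivalent, via the underlying socket:
tcpClient.Client.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);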
I suppose the implication would be that all of this example code you find lying around, which shows how to write to and read from sockets in fixed-size chunks (often preceded by a single int describing the size of the packet to follow, same as my first version), is failing to mention that there's a very heavy performance penalty for doing so.