Winsock WSAAsyncSelect: sending without an infinite buffer

Posted 2024-09-05 18:05:11


This is more of a design question than a specific code question; I'm sure I'm missing the obvious and just need another set of eyes.

I am writing a multi-client server based on WSAAsyncSelect, each connection is made into an object of a connection class I have written which contains associated settings and buffers etc.

My question concerns FD_WRITE, and I understand how it operates: one FD_WRITE is sent immediately after a connection is established. Thereafter, you should send until WSAEWOULDBLOCK is received, at which point you store what is left to send in a buffer and wait to be told that it is OK to send again.
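The pattern described above can be sketched in portable C. The actual send is abstracted behind a function pointer so the flow-control logic stands alone and can be tested; in a real Winsock program the mock would be a call to `send()`, with a `-1` return standing in for `WSAGetLastError() == WSAEWOULDBLOCK`. The buffer size and names here are illustrative, not from the original post.

```c
#include <stddef.h>
#include <string.h>

/* One attempted send: returns bytes accepted, or -1 for "would block"
 * (in real Winsock code: send() failing with WSAEWOULDBLOCK). */
typedef int (*send_fn)(const char *data, size_t len, void *ctx);

/* Per-connection outgoing queue: data that could not be sent
 * immediately waits here until the next FD_WRITE arrives. */
typedef struct {
    char   buf[4096];   /* fixed-size holding buffer (size illustrative) */
    size_t len;         /* bytes currently queued */
    int    blocked;     /* 1 after a would-block; waiting for FD_WRITE */
} out_queue;

/* Called on FD_WRITE (and after queuing new data): push queued bytes
 * out until the send would block or the queue is empty. */
void drain(out_queue *q, send_fn send_some, void *ctx)
{
    q->blocked = 0;
    while (q->len > 0) {
        int n = send_some(q->buf, q->len, ctx);
        if (n < 0) {            /* would block: stop, wait for FD_WRITE */
            q->blocked = 1;
            return;
        }
        memmove(q->buf, q->buf + n, q->len - n);
        q->len -= (size_t)n;
    }
}

/* Queue data for sending; returns 0 if the fixed buffer is full,
 * in which case the caller must throttle its producer. */
int queue_send(out_queue *q, const char *data, size_t len,
               send_fn send_some, void *ctx)
{
    if (q->len + len > sizeof q->buf)
        return 0;
    memcpy(q->buf + q->len, data, len);
    q->len += len;
    if (!q->blocked)
        drain(q, send_some, ctx);
    return 1;
}
```

The key design point is that `queue_send` can refuse data: a bounded buffer forces the producer to confront backpressure instead of hiding it behind unbounded growth.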

This is where I have a problem: how large do I make this holding buffer within each connection's object? The amount of time until a new FD_WRITE is received is unknown, and I could be attempting to send a lot of data during this period, all the while adding to my outgoing buffer. If I make the buffer dynamic, memory usage could spiral out of control if, for whatever reason, I am unable to send() and reduce the buffer.

So my question is: how do you generally handle this situation? Note I am not talking about the network buffer that Winsock itself uses, but one of my own creation used to "queue" up sends.

Hope I explained that well enough, thanks all!


Comments (1)

┈┾☆殇 2024-09-12 18:05:11


Naturally, the correct design depends on the nature of your application.

Some programs can predict the amount of data that can be generated before something must be done with it, so they can use a fixed-size buffer. One protocol I designed, for instance, had a command-response structure and a 2-byte length prefix, so I could use 64K buffers and know I'd never overflow them. If a buffer is full, the program must be waiting for a reply before it is allowed to send data from that buffer, so no more data will be added to that buffer.
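The arithmetic behind that bound can be made concrete: a 2-byte length prefix can describe at most 65535 bytes of payload, so a fixed buffer sized to one maximum frame can never overflow. A minimal sketch of such framing (names and big-endian byte order are assumptions, not from the answer):

```c
#include <stdint.h>
#include <string.h>

/* With a 2-byte length prefix, a frame can never exceed 2 + 65535
 * bytes, so a fixed buffer of this size always holds one message. */
#define MAX_FRAME (2 + UINT16_MAX)

/* Write a length-prefixed frame into out; returns the frame size,
 * or 0 if the payload cannot be described in 16 bits. */
size_t frame_message(uint8_t out[MAX_FRAME],
                     const uint8_t *payload, size_t len)
{
    if (len > UINT16_MAX)
        return 0;
    out[0] = (uint8_t)(len >> 8);     /* big-endian length prefix */
    out[1] = (uint8_t)(len & 0xFF);
    memcpy(out + 2, payload, len);
    return 2 + len;
}
```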

Another good use for fixed-size buffers is when the data comes from another I/O source. Consider a web server: at its most basic, it slurps files from disk and spits them out on the wire. You therefore know how much you are reading from disk at a time, so you know how big your buffers must be.

I'm having trouble coming up with a good reason to use dynamic buffers.

The main reason you don't need them is TCP's sliding window. If one of the connection peers stops receiving data, the remote peer's stack will stop sending data when the TCP window fills up. The unread data will stay in the receiving stack's buffers until the program it was sent to requests it. This gives the receiver a way to throttle the incoming data to a level it can handle. As far as I can tell, that makes fixed-size buffers practical under all conditions.
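The same throttling idea applies at the application layer: when the fixed outgoing buffer fills, stop producing, just as the remote stack stops sending when the TCP window fills. A toy model of that backpressure (all names here are illustrative):

```c
#include <stddef.h>

/* Fixed-capacity byte count standing in for the per-connection
 * outgoing queue. */
typedef struct { size_t queued, cap; } out_state;

/* Accept at most as many bytes as there is free space, mirroring
 * how a full TCP window makes the sender stop. Returns bytes taken. */
size_t produce(out_state *s, size_t want)
{
    size_t room = s->cap - s->queued;
    size_t n = want < room ? want : room;
    s->queued += n;
    return n;
}

/* A successful send frees space, like the peer reading data
 * reopening the TCP window. */
void consume(out_state *s, size_t n)
{
    if (n > s->queued)
        n = s->queued;
    s->queued -= n;
}
```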
