WSARecv, completion-port model: how to manage buffers and avoid overruns?

Published 2025-01-06 23:46:01 · 375 characters · 1 view · 0 comments


My problem: my completion-port server will receive data of unknown size from different clients. The thing is, I don't know how to avoid buffer overruns, i.e. how to avoid my (receive) buffer being "overfilled" with data.

Now to the questions:
1) If I make a receive call via WSARecv, does the worker thread work like a callback function? I mean, does it dig up the receive call only when it has completed, or does it also dig it up while the receiving is happening? Does the lpNumberOfBytes variable (from GetQueuedCompletionStatus) contain the number of bytes received so far or the total number of bytes received?

2) How to avoid overruns? I thought of dynamically allocated buffer structures, but then again, how do I find out how big the packet is going to be?

Edit: I hate to ask this, but is there any "simple" method for managing the buffer and avoiding overruns? Synchronisation sounds off-limits to me, at least right now.


Comments (2)

太阳哥哥 2025-01-13 23:46:01


'If I make a receive call via WSARecv, does the worker thread work like a callback function?'

See @valdo's post. Completion data is queued to your pool of threads and one thread will be made ready to process it.

'I mean, does it dig up the receive call only when it has completed?' Yes, hence the name. Note that the meaning of 'completed' may vary depending on the protocol. With TCP, it means that some streamed data bytes have been received from the peer.

'Does the lpNumberOfBytes (from GetQueuedCompletionStatus) variable contain the number of bytes received so far or the total number of bytes received?' It contains only the number of bytes received and loaded into the buffer array provided with that one IOCP completion.

'How to avoid overruns? I thought of dynamically allocated buffer structures, but then again, how do I find out how big the packet is going to be?' You cannot get overruns if you provide the buffer arrays: the kernel thread(s) that load the buffer(s) will not exceed the passed buffer lengths. At the application level, given the streaming nature of TCP, it's up to you to decide how to assemble the buffer arrays into usable application-level protocol units. You must decide, using your knowledge of the services provided, on a suitable buffer-management scheme.
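To make the "assemble buffers into protocol units" point concrete, here is a minimal sketch of one common scheme, not taken from the post itself: each message on the TCP stream is framed with a 4-byte length prefix, and a per-connection assembler accumulates whatever each completion delivers until whole messages can be extracted. All names (`StreamAssembler`, `feed`) are invented for illustration.

```cpp
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Hypothetical reassembly helper: each receive completion hands us some
// raw stream bytes (the lpNumberOfBytes worth); we append them to a
// per-connection buffer and extract every message that is now complete.
// Framing assumption: 4-byte little-endian length prefix, then payload.
struct StreamAssembler {
    std::vector<uint8_t> pending;  // bytes received but not yet consumed

    std::vector<std::string> feed(const uint8_t* data, size_t len) {
        pending.insert(pending.end(), data, data + len);
        std::vector<std::string> out;
        while (pending.size() >= 4) {
            uint32_t msgLen;
            std::memcpy(&msgLen, pending.data(), 4);  // assumes little-endian host
            if (pending.size() < 4 + msgLen) break;   // wait for more data
            out.emplace_back(pending.begin() + 4,
                             pending.begin() + 4 + msgLen);
            pending.erase(pending.begin(), pending.begin() + 4 + msgLen);
        }
        return out;
    }
};
```

Because TCP can split a message across any number of completions, `feed` may return zero, one, or several messages per call; the connection object would own one assembler for its lifetime.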

My last IOCP server was somewhat general-purpose. I used an array of buffer pools and a pool of 'buffer-carrier' objects, allocated at startup (along with a pool of socket objects). Each buffer pool held buffers of a different size. Upon a new connection, I issued a WSARecv using one buffer from the smallest pool. If this buffer got completely filled, I used a buffer from the next-largest pool for the next WSARecv, and so on.
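The escalation policy described above can be sketched as follows. This is my own reconstruction of the idea, not the answerer's actual code; the class and method names are invented, and a real server would recycle buffers through the pools rather than allocate fresh ones.

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Tiered buffer pools: pools hold buffers of increasing size; a
// connection starts at the smallest tier, and whenever a receive
// completely fills its buffer, the next receive is issued with a
// buffer from the next-larger tier.
class TieredBufferPools {
public:
    explicit TieredBufferPools(std::vector<size_t> sizes)
        : sizes_(std::move(sizes)) {}

    // Pick the tier for the next receive, given the tier the previous
    // buffer came from and whether it was filled to capacity.
    size_t nextTier(size_t currentTier, bool filledCompletely) const {
        if (filledCompletely && currentTier + 1 < sizes_.size())
            return currentTier + 1;  // escalate to a bigger buffer
        return currentTier;          // stay put (largest tier caps out)
    }

    // Hand out a buffer of the tier's size; a real pool would reuse
    // preallocated buffers instead of constructing a new vector.
    std::vector<char> acquire(size_t tier) const {
        return std::vector<char>(sizes_.at(tier));
    }

private:
    std::vector<size_t> sizes_;  // e.g. {4096, 16384, 65536}
};
```

A completely filled buffer is the only signal available that the peer may have had more data queued, which is why it drives the escalation; a partially filled buffer means the current size was already sufficient.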

Then there's the issue of the sequence numbers needed to prevent out-of-order buffering with multiple handler threads :(

暗地喜欢 2025-01-13 23:46:01


_1. A completion port is a sort of queue (with sophisticated logic concerning the priority of the threads waiting to dequeue an I/O completion from it). Whenever an I/O completes (successfully or not), it's queued onto the completion port. Then it's dequeued by one of the threads that called GetQueuedCompletionStatus.

So you never dequeue an I/O that is "in progress". Moreover, it's processed by your worker thread asynchronously; that is, processing is deferred until your thread calls GetQueuedCompletionStatus.
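The queue semantics described in point 1 can be modeled in a few lines. This is a toy stand-in, not the real Win32 API: `post` plays the role of the kernel completing an I/O, and `dequeue` blocks the way GetQueuedCompletionStatus does, so only finished operations ever reach a worker.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// Mock of one completed I/O as a worker would see it.
struct Completion {
    unsigned long bytesTransferred;  // plays the role of lpNumberOfBytes
    void*         perIoData;         // plays the role of the OVERLAPPED*
};

class CompletionQueue {
public:
    // Called when an I/O finishes (the kernel's side in the real API).
    void post(Completion c) {
        std::lock_guard<std::mutex> lk(m_);
        q_.push(c);
        cv_.notify_one();
    }

    // Blocks until a completion is available, like GetQueuedCompletionStatus;
    // an in-progress I/O simply never appears here.
    Completion dequeue() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        Completion c = q_.front();
        q_.pop();
        return c;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Completion> q_;
};
```

The real completion port additionally caps the number of runnable workers and wakes threads in LIFO order to keep caches warm; none of that is modeled here.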

_2. This is actually a complex matter. Synchronization is not a trivial task in general, especially when it comes to symmetric multi-threading (where you have several threads, each of which may be doing everything).

One of the parameters you receive with a completed I/O is a pointer to an OVERLAPPED structure (the one you supplied to the function that issued the I/O, such as WSARecv). It's common practice to allocate your own structure based on OVERLAPPED (either inheriting from it or having it as the first member). Upon receiving a completion you may cast the dequeued OVERLAPPED pointer to your actual data structure. There you may keep everything needed for synchronization: sync objects, state descriptions, etc.
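A minimal sketch of that extended-OVERLAPPED pattern. The `OVERLAPPED` here is a portable stand-in with the same role as the Win32 structure (on Windows you would include `<winsock2.h>` instead of defining it), and `IoContext`/`contextFromOverlapped` are names invented for illustration; the point is the first-member layout trick, not the field contents.

```cpp
#include <cstddef>
#include <cstdint>

// Stand-in for the Win32 OVERLAPPED structure (do not define this on
// Windows; it comes from the SDK headers there).
struct OVERLAPPED {
    uint64_t Internal;
    uint64_t InternalHigh;
    void*    hEvent;
};

enum class OpType { Recv, Send };

// Per-operation context: OVERLAPPED must be the FIRST member, so a
// pointer to the context and a pointer to its OVERLAPPED share the same
// address, letting the completion handler cast back safely.
struct IoContext {
    OVERLAPPED overlapped{};  // must stay first
    OpType     type{OpType::Recv};
    char       buffer[4096]{};
    size_t     bytesExpected{0};
};

// What a worker does with the OVERLAPPED* that GetQueuedCompletionStatus
// hands back: recover the full per-operation context.
IoContext* contextFromOverlapped(OVERLAPPED* ov) {
    // Valid because OVERLAPPED is the first member of a standard-layout type.
    return reinterpret_cast<IoContext*>(ov);
}
```

You would pass `&ctx->overlapped` to WSARecv when issuing the operation; the cast in the handler then gives back the buffer, operation type, and any synchronization state that belongs to that one in-flight I/O.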

Note however that it's not a trivial task to synchronize things correctly (to get good performance and avoid deadlocks) even when you have the custom context. This demands a careful design.
