Named pipes: many clients. How to create threads sparingly? Thread pool?

Posted on 2024-09-14 04:26:52

Situation:

I'm using named pipes on Windows for IPC, in C++.

The server creates a named pipe instance via CreateNamedPipe, and waits for clients to connect via ConnectNamedPipe.

Every time a client calls CreateFile to access the named pipe, the server creates a thread using CreateThread to service that client. After that, the server repeats the loop, creating a new pipe instance via CreateNamedPipe and listening for the next client via ConnectNamedPipe, and so on.
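
A minimal sketch of that loop, for concreteness (error handling is elided; the pipe name and the ServeClient echo handler are placeholders, not part of the question):

    #include <windows.h>

    // Placeholder per-client handler: echo bytes back until the client goes away.
    DWORD WINAPI ServeClient(LPVOID pipeHandle) {
        HANDLE hPipe = (HANDLE)pipeHandle;
        char buf[4096];
        DWORD n = 0;
        while (ReadFile(hPipe, buf, sizeof(buf), &n, nullptr) && n > 0) {
            DWORD written = 0;
            WriteFile(hPipe, buf, n, &written, nullptr);
        }
        DisconnectNamedPipe(hPipe);
        CloseHandle(hPipe);
        return 0;
    }

    int main() {
        for (;;) {
            // One pipe instance per expected client.
            HANDLE hPipe = CreateNamedPipe(
                L"\\\\.\\pipe\\example",    // placeholder name
                PIPE_ACCESS_DUPLEX,
                PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
                PIPE_UNLIMITED_INSTANCES, 4096, 4096, 0, nullptr);
            if (hPipe == INVALID_HANDLE_VALUE) return 1;

            // Blocks until a client opens the pipe with CreateFile.
            if (ConnectNamedPipe(hPipe, nullptr) ||
                GetLastError() == ERROR_PIPE_CONNECTED) {
                // A fresh thread per client -- the pattern in question.
                HANDLE hThread = CreateThread(nullptr, 0, ServeClient, hPipe, 0, nullptr);
                if (hThread) CloseHandle(hThread); else CloseHandle(hPipe);
            } else {
                CloseHandle(hPipe);
            }
        }
    }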

Problem:

Every client request triggers a CreateThread on the server. If clients come fast and furious, there would be many calls to CreateThread.

Questions:

Q1: Is it possible to reuse already created threads to service future client requests?
If this is possible, how should I do this?

Q2: Would a thread pool help in this situation?

Comments (2)

心碎的声音 2024-09-21 04:26:53

I wrote a named pipe server today using I/O completion ports, just to see how it's done.

The basic logic flow was:

  • I created the first named pipe via CreateNamedPipe
  • I created the main I/O completion port object using that handle: CreateIoCompletionPort
  • I created a pool of worker threads - as a rough rule of thumb, CPUs x 2. Each worker thread calls GetQueuedCompletionStatus in a loop.
  • Then I called ConnectNamedPipe, passing in an OVERLAPPED structure. When this pipe connects, one of the GetQueuedCompletionStatus calls will return.
  • My main thread then joins the pool of workers by also calling GetQueuedCompletionStatus.

That's about it, really.
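
A condensed sketch of that setup, under assumed naming (PipeState, StartPipeInstance, and the pipe name are placeholders; error handling is elided, and WorkerLoop is sketched after the next paragraph):

    #include <windows.h>

    struct PipeState {              // hypothetical per-instance context
        OVERLAPPED ov{};            // must stay alive while an operation is pending
        HANDLE     hPipe{};
        bool       connected{};     // false while waiting in ConnectNamedPipe
        char       buf[4096]{};
    };

    DWORD WINAPI WorkerLoop(LPVOID iocp);   // defined in the next sketch

    // Create one overlapped pipe instance, tie it to the completion port,
    // and start an asynchronous connect.
    PipeState* StartPipeInstance(HANDLE hIocp) {
        PipeState* p = new PipeState();
        p->hPipe = CreateNamedPipe(
            L"\\\\.\\pipe\\example",        // placeholder name
            PIPE_ACCESS_DUPLEX | FILE_FLAG_OVERLAPPED,
            PIPE_TYPE_BYTE | PIPE_READMODE_BYTE | PIPE_WAIT,
            PIPE_UNLIMITED_INSTANCES, 4096, 4096, 0, nullptr);
        CreateIoCompletionPort(p->hPipe, hIocp, (ULONG_PTR)p, 0);
        ConnectNamedPipe(p->hPipe, &p->ov); // return value: see the last sketch
        return p;
    }

    int main() {
        HANDLE hIocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, nullptr, 0, 0);
        StartPipeInstance(hIocp);

        SYSTEM_INFO si;
        GetSystemInfo(&si);
        for (DWORD i = 0; i < si.dwNumberOfProcessors * 2; ++i)  // CPUs x 2
            CloseHandle(CreateThread(nullptr, 0, WorkerLoop, hIocp, 0, nullptr));

        WorkerLoop(hIocp);              // main thread joins the pool
    }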

Each time a thread returns from GetQueuedCompletionStatus, it's because the associated pipe has been connected, has read data, or has been closed.
Each time a pipe is connected, I immediately create an unconnected pipe instance to accept the next client (there should probably be more than one waiting at a time) and call ReadFile on the current pipe, passing an OVERLAPPED structure - ensuring that GetQueuedCompletionStatus will tell me when data arrives.
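
Continuing the sketch above, the worker loop might look like this (the connected flag is an assumed way of telling a connect completion from a read completion; a real server would track richer per-pipe state):

    DWORD WINAPI WorkerLoop(LPVOID iocp) {
        HANDLE hIocp = (HANDLE)iocp;
        for (;;) {
            DWORD       bytes = 0;
            ULONG_PTR   key   = 0;
            OVERLAPPED* ov    = nullptr;
            BOOL ok = GetQueuedCompletionStatus(hIocp, &bytes, &key, &ov, INFINITE);
            if (!ov) continue;          // the wait itself failed
            PipeState* p = (PipeState*)key;
            if (!ok) {                  // failed I/O, e.g. the client closed the pipe
                CloseHandle(p->hPipe);
                delete p;
                continue;
            }
            if (!p->connected) {
                p->connected = true;
                StartPipeInstance(hIocp);   // immediately accept the next client
            } else {
                // A read completed: p->buf[0 .. bytes) holds the data.
                // ... process it here ...
            }
            // Either way, post the next overlapped read on this pipe.
            p->ov = OVERLAPPED{};           // reset before reuse
            ReadFile(p->hPipe, p->buf, (DWORD)sizeof(p->buf), nullptr, &p->ov);
        }
        return 0;                           // not reached
    }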

There are a couple of irritating edge cases where a function returns a failure code but GetLastError() reports success. Because the function "failed", you have to handle the success immediately, as no queued completion status was posted. Conversely (and I believe Vista adds an API to "fix" this), if data is available immediately, the overlapped functions can return success, but a queued completion status is ALSO posted, so be careful not to double-handle the data in that case.
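
For the connect side of the sketches above, the usual handling looks something like this (the Vista-era API alluded to is presumably SetFileCompletionNotificationModes with FILE_SKIP_COMPLETION_PORT_ON_SUCCESS):

    // Overlapped ConnectNamedPipe reports its outcome via GetLastError().
    BOOL ok = ConnectNamedPipe(p->hPipe, &p->ov);
    if (!ok) {
        switch (GetLastError()) {
        case ERROR_IO_PENDING:
            break;  // normal case: a completion packet arrives later
        case ERROR_PIPE_CONNECTED:
            // A client connected between CreateNamedPipe and ConnectNamedPipe.
            // The call "failed" but no packet will be queued, so post one
            // manually to keep all handling in the worker loop:
            PostQueuedCompletionStatus(hIocp, 0, (ULONG_PTR)p, &p->ov);
            break;
        default:
            // Genuine failure: close and recycle this instance.
            break;
        }
    }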

蝶舞 2024-09-21 04:26:53

On Windows, the most efficient way to build a concurrent server is to use an asynchronous model with completion ports. But yes, you can use a thread pool with blocking I/O too, as that is a simpler programming abstraction.

Vista/Windows 2008 provide a thread pool abstraction.
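
For completeness, a tiny sketch of that abstraction (TrySubmitThreadpoolCallback is the simplest entry point; the ServeClientWork callback is a placeholder):

    #include <windows.h>

    // Placeholder work item: service one connected client with blocking I/O.
    VOID CALLBACK ServeClientWork(PTP_CALLBACK_INSTANCE, PVOID context) {
        HANDLE hPipe = (HANDLE)context;
        // ... blocking ReadFile/WriteFile on hPipe ...
        DisconnectNamedPipe(hPipe);
        CloseHandle(hPipe);
    }

    // In the accept loop, instead of CreateThread per client:
    //     TrySubmitThreadpoolCallback(ServeClientWork, hPipe, nullptr);
    // The system-managed pool reuses threads across submissions and grows
    // or shrinks on demand, which addresses Q1 and Q2 from the question.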
