Building a multithreaded work queue (consumer/producer) in C++



I have the following scenario: I have a single thread that is supposed to fill a container with pairs of integers (in essence, task descriptions), and I have a large number of worker threads (8-16) that should take elements from this container and perform some work.

I thought the problem could be easily solved by a blocking queue -- e.g. on item-removal, threads synchronize access to the queue, and sleep if there is no data available.

I (perhaps wrongly) assumed that something like this should exist in the STL or in boost, but I was unable to find anything.

Do I actually have to implement that thing myself? It seems like such a common scenario...
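For concreteness, here is a minimal sketch of the blocking behaviour described above, using a pthread condition variable to guard a std::deque of integer pairs. The names push_task and pop_task are illustrative, not from any library, and error handling is omitted:

#include <deque>
#include <utility>
#include <pthread.h>

// Shared state: the task container plus the primitives that protect it.
std::deque<std::pair<int, int> > tasks;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

// Producer side: append a task and wake one sleeping worker.
void push_task(const std::pair<int, int> &task) {
    pthread_mutex_lock(&lock);
    tasks.push_back(task);
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

// Worker side: sleep while the queue is empty, then take one task.
std::pair<int, int> pop_task() {
    pthread_mutex_lock(&lock);
    while (tasks.empty())             // loop guards against spurious wakeups
        pthread_cond_wait(&not_empty, &lock);
    std::pair<int, int> task = tasks.front();
    tasks.pop_front();
    pthread_mutex_unlock(&lock);
    return task;
}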


Comments (6)

白色秋天 2024-08-20 06:02:35


If you do implement it yourself, the implementation should be a fairly straightforward combination of a semaphore, a mutex, and a queue object.

Here's some pseudo-code:

Produce {
    pthread_mutex_lock(&mutex);
    queue.push_back(someObjectReference);
    pthread_mutex_unlock(&mutex);
    sem_post(&availabilitySem);    // signal: one more item available
}

Consume {
    sem_wait(&availabilitySem);    // block until at least one item exists
    pthread_mutex_lock(&mutex);
    someObjectReference = queue.front();
    queue.pop_front();
    pthread_mutex_unlock(&mutex);
}
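Fleshed out into compilable C++ (assuming Linux/pthreads; WorkQueue and its member names are placeholders for illustration), the same combination might look like this:

#include <deque>
#include <pthread.h>
#include <semaphore.h>

// A blocking queue built from the semaphore + mutex + container
// combination above. T must be copyable.
template <typename T>
class WorkQueue {
public:
    WorkQueue() {
        pthread_mutex_init(&mutex_, NULL);
        sem_init(&available_, 0, 0);   // counts queued items, starts at 0
    }
    ~WorkQueue() {
        sem_destroy(&available_);
        pthread_mutex_destroy(&mutex_);
    }

    void produce(const T &item) {
        pthread_mutex_lock(&mutex_);
        queue_.push_back(item);
        pthread_mutex_unlock(&mutex_);
        sem_post(&available_);         // wake one waiting consumer
    }

    T consume() {
        sem_wait(&available_);         // sleep until an item is available
        pthread_mutex_lock(&mutex_);
        T item = queue_.front();
        queue_.pop_front();
        pthread_mutex_unlock(&mutex_);
        return item;
    }

private:
    std::deque<T> queue_;
    pthread_mutex_t mutex_;
    sem_t available_;
};

The producer thread calls produce(std::make_pair(a, b)) and each of the 8-16 workers loops on consume(). Note that unnamed semaphores (sem_init) are deprecated on OS X, where a named semaphore or a condition variable would be needed instead.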
阳光下的泡沫是彩色的 2024-08-20 06:02:35


If you are on Windows, take a look at the Agents Library in VS2010; this is a core scenario for it.

http://msdn.microsoft.com/en-us/library/dd492627(VS.100).aspx

i.e.

//an unbounded_buffer is like a queue
unbounded_buffer<int> buf;

//you can send messages into it with send or asend
send(buf,1);

//receive will block and wait for data
int result = receive(buf);

You can use threads, 'agents', or 'tasks' to get the data out... or you can link buffers together and convert your blocking producer/consumer problem into a data-flow network.
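Put together into a minimal compilable sketch (assuming VS2010's Concurrency Runtime; the loop bounds are arbitrary):

#include <agents.h>
#include <iostream>

int main() {
    Concurrency::unbounded_buffer<int> buf;

    // Producer side: send queues messages into the buffer.
    for (int i = 0; i < 4; ++i)
        Concurrency::send(buf, i);

    // Consumer side: receive blocks until a message is available.
    for (int i = 0; i < 4; ++i)
        std::cout << Concurrency::receive(buf) << std::endl;
    return 0;
}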

青朷 2024-08-20 06:02:35


If you are on Windows and want a queue that is efficient in terms of how it manages the threads that are allowed to run to process items from it, take a look at IO Completion Ports (see here). My free server framework includes a task queue implementation that's based on IOCPs, which may also be of interest if you intend to go down this route; though it's possibly too specialised for what you want.
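As a rough sketch of the idea (not code from the framework mentioned above), an IOCP can serve as a thread-safe queue even with no actual I/O involved, by carrying the task in the completion key:

#include <windows.h>

int main() {
    // A concurrency hint of 0 lets the OS size the active-thread limit
    // to the number of CPUs.
    HANDLE port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);

    // Producer: post a "task" whose payload rides in the completion key.
    PostQueuedCompletionStatus(port, 0, 42 /* task id */, NULL);

    // Consumer (normally the body of each worker thread): blocks until
    // a packet is queued, then unpacks the key.
    DWORD bytes = 0;
    ULONG_PTR key = 0;
    LPOVERLAPPED ov = NULL;
    if (GetQueuedCompletionStatus(port, &bytes, &key, &ov, INFINITE)) {
        // key is 42 here; dispatch the task it identifies.
    }

    CloseHandle(port);
    return 0;
}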

如梦 2024-08-20 06:02:35


I think message_queue from boost::interprocess is what you want. The second link has a usage example.
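A minimal sketch of its documented usage, assuming a queue named "task_queue" and an int payload. Note that message_queue is designed for inter-process messaging, so data crosses it as raw bytes:

#include <boost/interprocess/ipc/message_queue.hpp>

int main() {
    using namespace boost::interprocess;

    message_queue::remove("task_queue");       // drop any stale queue
    message_queue mq(create_only, "task_queue",
                     100,                      // max queued messages
                     sizeof(int));             // max size of one message

    int task = 7;
    mq.send(&task, sizeof(task), 0);           // third argument is priority

    int received = 0;
    message_queue::size_type recvd_size = 0;
    unsigned int priority = 0;
    mq.receive(&received, sizeof(received), recvd_size, priority);

    message_queue::remove("task_queue");
    return 0;
}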

怂人 2024-08-20 06:02:35


You should take a look at ACE (Adaptive Communication Environment) and the ACE_Message_Queue. There's always boost's message_queue, but ACE is where it's at in terms of high performance concurrency.
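A rough sketch of ACE_Message_Queue usage (assuming ACE is installed; ACE_MT_SYNCH selects the mutex/condition-based blocking variant):

#include "ace/Message_Queue.h"
#include "ace/Message_Block.h"

int main() {
    ACE_Message_Queue<ACE_MT_SYNCH> queue;

    // Producer: copy the payload into a reference-counted message block.
    int task = 42;
    ACE_Message_Block *mb = new ACE_Message_Block(sizeof(task));
    mb->copy(reinterpret_cast<const char *>(&task), sizeof(task));
    queue.enqueue_tail(mb);          // blocks only if the queue is full

    // Consumer: dequeue_head blocks until a block is available.
    ACE_Message_Block *out = 0;
    queue.dequeue_head(out);
    int received = *reinterpret_cast<int *>(out->rd_ptr());
    out->release();                  // frees the block when refcount hits 0
    (void)received;
    return 0;
}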

百思不得你姐 2024-08-20 06:02:35


If you're on OSX Snow Leopard, you might want to look at Grand Central Dispatch.
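A minimal sketch using the plain-C dispatch API (dispatch_async_f avoids the blocks language extension, so it compiles as ordinary C++; the Task struct and do_work are placeholders):

#include <dispatch/dispatch.h>
#include <cstdio>

struct Task { int a; int b; };

// Worker body: GCD calls this on one of its own pool threads.
static void do_work(void *ctx) {
    Task *t = static_cast<Task *>(ctx);
    std::printf("processing (%d, %d)\n", t->a, t->b);
    delete t;
}

int main() {
    // GCD owns the worker threads; you only submit tasks.
    dispatch_queue_t q =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t g = dispatch_group_create();

    Task *t = new Task;                              // producer side
    t->a = 3;
    t->b = 4;
    dispatch_group_async_f(g, q, t, do_work);

    dispatch_group_wait(g, DISPATCH_TIME_FOREVER);   // join the work
    dispatch_release(g);
    return 0;
}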
