Possible deadlock with C++/boost/thread
Suppose the following code is run on a single-core processor:
#include <cstdio>
#include <boost/thread.hpp>
#include <boost/thread/condition.hpp>
#include "boost/date_time/posix_time/posix_time.hpp"
#include <deque>
#include <cstdlib>
#include <time.h>

std::deque<int> buffer;
boost::mutex bufferMutex;
boost::condition bufferHasSome;
boost::condition bufferEmpty;

void Reader()
{
    boost::mutex mutex;
    boost::mutex::scoped_lock lock(mutex);
    //read as fast as possible:
    while(true)
    {
        while(buffer.size() <= 0) //1.1
        {
            bufferHasSome.wait(lock); //1.2
        }
        bufferMutex.lock();
        for(int i = 0; i < buffer.size(); i++)
        {
            printf("%d\n", buffer.front());
            buffer.pop_front();
        }
        bufferMutex.unlock();
        //everything was read:
        bufferEmpty.notify_one();
    }
}

void Writer()
{
    boost::mutex mutex;
    boost::mutex::scoped_lock lock(mutex);
    int index = 0;
    while(true)
    {
        //write portion:
        for(int i = rand() % 5; i >= 0; i--)
        {
            bufferMutex.lock();
            buffer.push_back(index);
            bufferMutex.unlock(); //2.1
            bufferHasSome.notify_one(); //2.2
            index++;
            boost::this_thread::sleep(boost::posix_time::milliseconds(rand() % 10));
        }
        //definitely wait until the written portion has been read:
        while(buffer.size() > 0)
        {
            bufferEmpty.wait(lock);
        }
    }
}

int main()
{
    srand(time(NULL));
    boost::thread readerThread(Reader);
    boost::thread writerThread(Writer);
    getchar();
    return 0;
}
Suppose the processor stops after 1.1 (where size = 0) in the Reader thread and switches to Writer, where index is pushed into the buffer (2.1) and bufferHasSome is notified (at 2.2); since nobody is waiting on it yet, that notification is lost. The processor then switches back to the Reader thread, which starts waiting (at 1.2) for somebody to write something to the buffer, but the only thread that can write is itself waiting for somebody to read the buffer.
This program freezes after about 150 iterations on average, and I think this lost notification is the reason.
What did I miss? How can I fix it?
Comments (4)
I see a couple of problems here. Most importantly, you're checking a shared value (namely, buffer.size()) outside of a lock. Secondly, each function has a local mutex that does absolutely nothing, since it isn't shared between the threads. If you lock bufferMutex before checking buffer.size(), and then wait on the condition with that same lock (the wait unlocks it, which is correct, and re-locks it when the thread is notified), I think the deadlock threat should be gone.
Presumably your writer is waiting for the reader to have finished reading before it writes again.
In any case you need just the one "global" mutex, and you should use it when waiting on your condition variables too.
The local mutexes have no effect.
I would also suggest that your main function join your two threads (I would use a thread_group). You will also need some terminating condition for your loops: maybe the writer sets a flag when it has finished and broadcasts, and the reader thread checks that flag as well as the state of the queue.
Not quite an answer to your question; the others seem to have already covered that. I would suggest that you take a look at concurrent_bounded_queue in TBB. If you use that, your code will be simpler and less error-prone.
Your problem might have something to do with the reading loop in Reader, i.e. the for loop that pops elements from buffer. The problem is that it only reads half of the elements: as you pop elements, buffer.size() decreases, so the iteration ends when the size is half of what it was at the start. You should replace it with a while loop that runs until the buffer is empty.
Basically, what happens is that for a while it gets lucky and more or less works (partly due to spurious wake-ups of the condition variables), but eventually the reader thread never really clears the buffer and the writer thread never wakes up. At least, I think that is the problem; multi-threading issues are never trivial to see at first glance.