Sleeping while holding a boost::interprocess::scoped_lock causes it to never be released
I'm doing IPC on Linux using boost::interprocess::shared_memory_object as per the reference (anonymous mutex example). There's a server process, which creates the shared_memory_object and writes to it while holding an interprocess_mutex wrapped in a scoped_lock; and a client process which prints whatever the other one has written - in this case, it's an int.
I ran into a problem: if the server sleeps while holding the mutex, the client process is never able to acquire it and waits forever.
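For context, here's roughly how the shared struct and segment are set up (a sketch modeled on the Boost anonymous mutex example; the segment name "MySharedMemory" is an assumption, but the members match the snippets below):

#include <new>
#include <boost/interprocess/shared_memory_object.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <boost/interprocess/sync/interprocess_mutex.hpp>

using namespace boost::interprocess;

// What "data" points to: an anonymous mutex plus the shared int
struct shared_data {
    interprocess_mutex mutex;
    int a; // the value the server publishes
};

int main() {
    // Server side: create the segment and construct the struct inside it
    shared_memory_object::remove("MySharedMemory");
    shared_memory_object shm(create_only, "MySharedMemory", read_write);
    shm.truncate(sizeof(shared_data));
    mapped_region region(shm, read_write);
    shared_data* data = new (region.get_address()) shared_data;
    // ... server loop below goes here; the client opens the same segment
    // with open_only and skips truncate() ...
}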
Buggy server loop:
using namespace boost::interprocess;

int n = 0;
while (1) {
    std::cerr << "acquiring mutex... ";
    {
        // "data" is a struct on the shared mem. and contains a mutex and an int
        scoped_lock<interprocess_mutex> lock(data->mutex);
        data->a = n++;
        std::cerr << n << std::endl;
        sleep(1);
    } // if this bracket is placed before "sleep", everything works
}
Server output:
acquiring mutex... 1
acquiring mutex... 2
acquiring mutex... 3
acquiring mutex... 4
Client loop:
while (1) {
    std::cerr << "acquiring mutex... ";
    {
        scoped_lock<interprocess_mutex> lock(data->mutex);
        std::cerr << data->a << std::endl;
    }
    sleep(1);
}
Client output (waits forever):
acquiring mutex...
The thing is, if I move the bracket to the line before the sleep call, everything works. Why? I didn't think sleeping with a locked mutex would cause the mutex to be eternally locked.
The only theory I have is that when the kernel wakes up the server process, the scope ends and the mutex is released, but the waiting process isn't given a chance to run. The server then re-acquires the lock... But that doesn't seem to make a lot of sense.
Thanks!
Comments (2)
Sleeping while holding a mutex is wrong. A mutex protects some data (i.e., data->a), and the locked scope should be minimized around reads/writes of that data.
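In terms of the server loop from the question, that means holding the lock only for the write itself (a sketch of the same loop with the scope minimized):

int n = 0;
while (1) {
    std::cerr << "acquiring mutex... ";
    {
        // hold the mutex only long enough to update the shared int
        scoped_lock<interprocess_mutex> lock(data->mutex);
        data->a = n++;
    } // lock released here, before any sleeping
    std::cerr << n << std::endl;
    sleep(1); // sleep with the mutex free, so the client can take it
}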
Your theory is correct.
If you look at the bottom of the anonymous mutex example in the reference you linked, you'll see that releasing the mutex doesn't notify anyone else that might be waiting on it, and since your process just woke up, it almost certainly has plenty of its scheduling quantum left to do more work. It will loop around and re-acquire the mutex before it sleeps again, which is the first opportunity the client has to acquire the mutex itself.
Moving the server sleep() outside of the scope means it goes to sleep while the mutex is free, giving the client a chance to run and acquire the mutex for itself. Try calling sched_yield() (Linux only) if you want to give up the processor but still sleep within your scope. sleep(0) may also work.
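A sketch of that last suggestion, keeping the sleep inside the locked scope but yielding as soon as the lock is released (sched_yield() is declared in <sched.h>):

#include <sched.h> // for sched_yield()

while (1) {
    {
        scoped_lock<interprocess_mutex> lock(data->mutex);
        data->a = n++;
        std::cerr << n << std::endl;
        sleep(1); // still sleeping while holding the mutex, as in the question
    } // mutex released here...
    sched_yield(); // ...yield now so the waiting client can grab the mutex
                   // before this loop re-acquires it; sleep(0) may also work
}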