boost::thread causing a small Event handle leak?
I'm debugging this database project. It wraps access to SQLite for a higher-level application. It's designed to run asynchronously; that is, it has methods like ExecuteRequestAsync() and IsRequestReady(). When ExecuteRequestAsync is called, it spawns a boost::thread to do the job and returns immediately. When the higher-level application decides that it no longer wants the result of a running request, it may call DumpRequest() to cancel it. Since it's difficult to gracefully cancel a database request, the implementation of DumpRequest just maintains a "cleanup monitor thread" that waits for finished requests and removes them. All boost::threads are managed through boost::shared_ptr, like:
boost::shared_ptr<boost::thread> my_thread(new boost::thread(boost::bind(&DBCon::RunRequest, &this_dbcon)));
And when it's no longer needed (to be canceled):
std::vector<boost::shared_ptr<boost::thread> > threads_tobe_removed;
// some iteration
threads_tobe_removed[i]->join();
threads_tobe_removed.erase(threads_tobe_removed.begin() + i);
I created a unit test project to test the mechanism of executing and dumping requests. It runs requests, randomly cancels running requests, and repeats for several thousand passes. The mechanism turned out to be fine; everything worked as expected.
However, observing the unit test with Sysinternals' Process Explorer revealed a handle leak: every 500 passes or so, the handle count increases by 1 and never drops back. It's the "Event" handle type that is growing. File and thread handle counts are not increasing (of course the handle count rises while threads are being spawned, but there is a Sleep(10000) call every hundred passes to let them be cleaned up, so the steady-state handle count can be observed).
I haven't been managing Event handles myself; they are created by boost::thread when each thread is created. I only make sure the threads are shut down gracefully, and I have no idea what the Events are used for.
I'm wondering whether anyone has experienced similar problems. What might be the cause of this leak? Is the number in Process Explorer reliable enough to call it a handle leak? Is there any way to trace and fix it?
I'm using statically linked boost 1.40 on Windows Vista, with Visual C++.
1 Answer
Is the access to threads_tobe_removed thread-safe? If not, there may be a race condition when one thread adds a thread to the vector via a call to DumpRequest while the cleanup monitor thread deletes a thread from the vector. Thus, boost::thread objects may be destroyed without joining the thread first, which would leave the thread running without an associated object; that might explain the leak.