Multithreaded reading and writing of a std::vector: the vector's resources cannot be released

Posted on 2024-07-07 18:37:27


I am writing code in VS2005 using its STL.
I have one UI thread that reads a vector, and a worker thread that writes to it.
I use ::boost::shared_ptr as the vector's element type.

vector<shared_ptr<Class> > vec;

But I find that if I manipulate vec from both threads at the same time (I can guarantee they do not touch the same area; the UI thread only reads the area that already holds data),

vec.clear() seems unable to release the resources. The problem happens in shared_ptr: it cannot release its resource.

What is the problem?
Is it because, when the vector exceeds its current capacity, it reallocates its storage and the original buffer is invalidated?

As far as I know, iterators become invalid when a reallocation happens, so why do problems also occur when I use vec[i]?
//-----------------------------------------------

What kind of lock is needed?
I mean: if the vector's element is a shared_ptr, when thread A gets the pointer smart_p, will thread B wait until A finishes its operation on smart_p?
Or should I simply add a lock while a thread reads the pointer, so that thread B can continue once the read operation is finished?


Answers (3)

自控 2024-07-14 18:37:27


When you're accessing the same resource from more than one thread, locking is necessary. If you don't, you have all sorts of strange behaviour, like you're seeing.

Since you're using Boost, an easy way to use locking is to use the Boost.Thread library. The best kind of locks you can use for this scenario are reader/writer locks; they're called shared_mutex in Boost.Thread.

But yes, what you're seeing is essentially undefined behaviour, due to the lack of synchronisation between the threads. Hope this helps!

Edit to answer OP's second question: You should use a reader lock when reading the smart pointer out of the vector, and a writer lock when writing or adding an item to the vector (so, the mutex is for the vector only). If multiple threads will be accessing the pointed-to object (i.e., what the smart pointer points to), then separate locks should be set up for them. In that case, you're better off putting a mutex object in the object class as well.
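The reader/writer locking scheme this answer describes can be sketched as follows. This uses C++17's std::shared_mutex, the standard-library descendant of Boost.Thread's shared_mutex (the Boost version is used the same way with boost::shared_lock / boost::unique_lock); the Item type and SharedVector wrapper are illustrative names, not from the original post.

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <mutex>
#include <shared_mutex>
#include <vector>

struct Item { int value; };

// Wraps the vector together with its reader/writer mutex, so the
// lock is for the vector only, as the answer recommends.
class SharedVector {
public:
    // Writer lock: exclusive access while modifying the vector.
    void push(std::shared_ptr<Item> p) {
        std::unique_lock<std::shared_mutex> lock(mutex_);
        vec_.push_back(std::move(p));
    }

    // Reader lock: many threads may copy shared_ptrs out concurrently.
    // Returning a copy bumps the refcount, so the Item stays alive
    // even if another thread clears the vector afterwards.
    std::shared_ptr<Item> get(std::size_t i) const {
        std::shared_lock<std::shared_mutex> lock(mutex_);
        return vec_.at(i);
    }

    // Writer lock again: clear() destroys the vector's shared_ptrs.
    void clear() {
        std::unique_lock<std::shared_mutex> lock(mutex_);
        vec_.clear();
    }

private:
    mutable std::shared_mutex mutex_;
    std::vector<std::shared_ptr<Item>> vec_;
};
```

Note that the mutex here only protects the vector itself; as the answer says, if threads also mutate the pointed-to Item objects, those need their own separate lock.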

往事随风而去 2024-07-14 18:37:27

Another alternative is to eliminate the locking altogether by ensuring that the vector is accessed from only one thread: for example, have the worker thread send a message to the main thread containing the element(s) to add to the vector.
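A minimal sketch of that message-passing approach: the worker posts elements into a small locked queue, and only the UI thread ever touches its own vector, so the vector itself needs no lock. All names here (ItemQueue, drain_into) are illustrative; on Win32/VS2005 the "message" could equally be a PostMessage to the UI thread.

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <queue>
#include <utility>
#include <vector>

struct Item { int value; };

// Thread-safe handoff queue between the worker and the UI thread.
class ItemQueue {
public:
    // Called by the worker thread to send an element.
    void post(std::shared_ptr<Item> p) {
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push(std::move(p));
    }

    // Called by the UI thread: move everything posted so far into
    // its private vector. Only this thread ever touches that vector.
    void drain_into(std::vector<std::shared_ptr<Item>>& vec) {
        std::lock_guard<std::mutex> lock(mutex_);
        while (!queue_.empty()) {
            vec.push_back(std::move(queue_.front()));
            queue_.pop();
        }
    }

private:
    std::mutex mutex_;
    std::queue<std::shared_ptr<Item>> queue_;
};
```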

闻呓 2024-07-14 18:37:27

It is possible to share a list or array between threads like this. However, std::vector is not a good choice because of its resize behaviour. Doing it correctly requires a fixed-size array, or special locking or copy-on-update behaviour on resize. It also needs independent front and back pointers, again with locking or atomic updates.

Another answer mentioned message queues. A shared array as described here is a common and efficient way to implement them.
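A sketch of the fixed-size array with independently, atomically updated front and back indices that this answer describes: a single-producer/single-consumer ring buffer, where the writer thread only ever advances back_ and the reader thread only ever advances front_. The class and its capacity are illustrative assumptions, not code from the original post.

```cpp
#include <array>
#include <atomic>
#include <cassert>
#include <cstddef>
#include <memory>
#include <utility>

struct Item { int value; };

// Fixed-size SPSC ring buffer: one slot is always left empty, so a
// buffer of size N holds at most N-1 items.
template <std::size_t N>
class SpscRing {
public:
    // Producer side: only this thread writes back_.
    bool push(std::shared_ptr<Item> p) {
        std::size_t back = back_.load(std::memory_order_relaxed);
        std::size_t next = (back + 1) % N;
        if (next == front_.load(std::memory_order_acquire))
            return false;  // full
        buf_[back] = std::move(p);
        back_.store(next, std::memory_order_release);
        return true;
    }

    // Consumer side: only this thread writes front_.
    bool pop(std::shared_ptr<Item>& out) {
        std::size_t front = front_.load(std::memory_order_relaxed);
        if (front == back_.load(std::memory_order_acquire))
            return false;  // empty
        out = std::move(buf_[front]);
        front_.store((front + 1) % N, std::memory_order_release);
        return true;
    }

private:
    std::array<std::shared_ptr<Item>, N> buf_;
    std::atomic<std::size_t> front_{0};
    std::atomic<std::size_t> back_{0};
};
```

Because the array never reallocates, no iterator or element invalidation can occur, which sidesteps the std::vector resize problem from the question entirely.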
