Why pass a mutex as an argument to the function called by a thread?

Posted on 2024-12-06 16:17:23

At some places I have seen people creating a thread pool, creating threads, and executing a function with those threads. When that function is called, a boost::mutex is passed by reference. Why is it done this way? I believe you could declare the mutex in the called function itself, or declare it as a class member or a global. Can anyone please explain?

e.g.

    void myClass::processData()
    {
        boost::threadpool::pool pool(2);
        boost::mutex mutex;

        for (int i = 0; i < maxData; ++i)
            pool.schedule(boost::bind(&myClass::getData, this, boost::ref(mutex)));
    }

Then,

    void myClass::getData(boost::mutex& mutex)
    {
        boost::mutex::scoped_lock lock(mutex);  // Why can't we have a class member mutex
                                                // or a local mutex here?
        // Do something here
    }

Comments (3)

凡尘雨 2024-12-13 16:17:23

Mutexes are non-copyable objects, and while they can be members of a class, that would greatly complicate the parent class's copyability. Thus one preferred method, should a number of class instances need to share the same data, would be to create the mutex as a static data member. Otherwise, if the mutex only needs to be locked within an instance of the class itself, you could hold a pointer to a mutex as a non-static data member, and then each copy of the class would have its own dynamically allocated mutex (and remain copyable if that is a requirement).
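
A minimal sketch of the pointer-to-mutex idea (the class and member names here are invented for illustration, not taken from the question):

    #include <boost/shared_ptr.hpp>
    #include <boost/thread/mutex.hpp>

    // A copyable class that holds a pointer to its mutex: boost::mutex itself
    // is non-copyable, but the class can still be copied because each copy
    // allocates its own lock.
    class Counter {
    public:
        Counter() : mutex_(new boost::mutex), value_(0) {}

        // Give the copy a fresh mutex; only the plain data is copied.
        // (A fuller version would lock other.mutex_ while reading other.value_.)
        Counter(const Counter& other)
            : mutex_(new boost::mutex), value_(other.value_) {}

        void increment() {
            boost::mutex::scoped_lock lock(*mutex_);
            ++value_;
        }

    private:
        boost::shared_ptr<boost::mutex> mutex_;
        int value_;
    };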

In the code example above, what's basically taking place is that a single shared mutex is being passed into the thread pool by reference. That enables all the threads sharing the same memory locations to create an exclusive lock on that memory using the exact same mutex, but without the overhead of having to manage the non-copyable aspect of the mutex itself. The mutex in this code example could also have been a static data member of class myClass rather than one passed in by reference, the assumption being that each thread needs to lock some memory that is globally accessible from every thread.

The problem with a local mutex is that it's only a locally accessible version of the mutex... therefore when a thread locks the mutex in order to share some globally accessible data, the data itself is not protected, since every other thread will have its own local mutex that it can lock and unlock. It defeats the whole point of mutual exclusion.
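
To make that failure mode concrete, here is a small sketch (the counter and function names are made up): each call constructs its own mutex, so two threads can each hold "the" lock at the same time and the increment still races.

    #include <boost/thread/mutex.hpp>

    int sharedCounter = 0;  // globally accessible data

    void brokenIncrement() {
        boost::mutex localMutex;                     // a fresh mutex per call
        boost::mutex::scoped_lock lock(localMutex);  // locks only this call's copy
        ++sharedCounter;                             // still a data race
    }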

已下线请稍等 2024-12-13 16:17:23

I believe you could declare the mutex in the called function itself, or declare it as a class member or a global. Can anyone please explain?

creating a new mutex on entry to the function protects nothing.

if you were considering declaring a static (or global) mutex to protect non-static members, then you may as well write the program as a single threaded program (ok, there are some corner cases). a static lock would block all threads but one (assuming contention); it is equivalent to "a maximum of one thread may operate in this method's body at one time". declaring a static mutex to protect static data is fine. as David Rodriguez - dribeas worded it succinctly in another answer's comments: "The mutex should be at the level of the data that is being protected".
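
A small sketch of that guideline, with invented names in the same style as this answer's code: the mutex lives at the same level as the data it guards, so a static mutex pairs with static data.

    #include <boost/thread/mutex.hpp>

    class t_registry {
    public:
        static void add(int value) {
            boost::mutex::scoped_lock lock(s_lock);  // static mutex guards static data
            s_total += value;
        }

    private:
        static boost::mutex s_lock;  // declared at the same level as s_total
        static int s_total;
    };

    boost::mutex t_registry::s_lock;
    int t_registry::s_total = 0;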

you can declare a member variable per instance, which would take the generalized form:

    // t_lock and t_lock_scope stand in for a mutex type and its scoped lock
    // (e.g. boost::mutex and boost::mutex::scoped_lock)
    class t_object {
    public:
        ...
        bool getData(t_data& outData) {
            t_lock_scope lock(this->d_lock);
            ...
            outData.set(someValue);
            return true;
        }

    private:
        t_lock d_lock;
    };

that approach is fine, and in some cases ideal. it makes sense in most cases when you are building out a system where instances are meant to abstract the locking mechanics and errors away from their clients. one downside is that it can require more acquisitions, and it typically requires more complex locking mechanisms (e.g. reentrant locks). on the extra acquisitions: the client may know that an instance is used in only one thread, so why lock at all in that case? as well, a bunch of small threadsafe methods will introduce a lot of overhead. with locking, you want to get in and out of the protected zones asap (without introducing many acquisitions), so the critical sections are often larger operations than is typical.
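
As a hedged illustration of that acquisition overhead (the class and members are invented, following this answer's naming style): two tiny thread-safe setters pay two lock round-trips, while one coarser method does the same work under a single acquisition.

    #include <string>
    #include <boost/thread/mutex.hpp>

    class t_record {
    public:
        // fine-grained: every small thread-safe setter pays one acquisition
        void setName(const std::string& name) {
            boost::mutex::scoped_lock lock(d_lock);
            d_name = name;
        }
        void setValue(int value) {
            boost::mutex::scoped_lock lock(d_lock);
            d_value = value;
        }

        // coarser: one acquisition covers the whole update (the larger
        // critical section this answer recommends)
        void update(const std::string& name, int value) {
            boost::mutex::scoped_lock lock(d_lock);
            d_name = name;
            d_value = value;
        }

    private:
        boost::mutex d_lock;
        std::string  d_name;
        int          d_value;
    };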

if the public interface requires this lock as an argument (as seen in your example), it's a signal that your design may be simplified by privatizing locking (making the object function in a thread safe manner, rather than passing the lock as an externally held resource).

using an external (or bound or associated) lock, you can potentially reduce acquisitions (or total time locked). this approach also allows you to add locking to an instance after the fact. it also allows the client to configure how the lock operates. the client can use fewer locks by sharing them (among a set of instances). even a simple example of composition can illustrate this (supporting both models):

    class t_composition {
    public:
        ...
    private:
        t_lock d_lock; // << name and data can share this lock
        t_string d_name;
        t_data d_data;
    };

considering the complexity of some multithreaded systems, pushing the responsibility of proper locking onto the client can be a very bad idea.

both models (bound and as member variable) can be used effectively. which is better in a given scenario varies by problem.

小耗子 2024-12-13 16:17:23

Using a local mutex is wrong: the thread pool may invoke several instances of the function, and they should all work with the same mutex. A class member is OK. Passing the mutex to the function makes it more generic and readable. The caller may decide which mutex to pass: a class member or anything else.
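
A brief sketch of that last point (the worker function and globals are invented): the function just takes a boost::mutex by reference, and the caller decides whether to hand it a class member, a local, or a global mutex.

    #include <boost/bind.hpp>
    #include <boost/thread/mutex.hpp>

    int g_counter = 0;
    boost::mutex g_mutex;  // could just as well be a class member chosen by the caller

    // The worker does not care where the mutex lives.
    void worker(int& shared, boost::mutex& mutex) {
        boost::mutex::scoped_lock lock(mutex);
        ++shared;
    }

    // e.g. pool.schedule(boost::bind(&worker, boost::ref(g_counter), boost::ref(g_mutex)));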
