Threaded Implementation of the Observer Pattern - C++

I'm developing a C++ program with a "scan" method that triggers a relatively long-running scanning procedure. When the procedure completes, the scan method notifies observers of the results using the observer pattern.

I would like to create a separate thread for each scan, so that I can run multiple scans simultaneously. When each scanning process completes, I would like the scan method to notify the listeners.

According to the Boost thread library documentation, it looks like I can do something like this:

#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/bind.hpp>
#include <iostream>
#include <vector>

boost::mutex io_mutex;

void scan(int scan_target, std::vector<Listener> listeners)
{
  // ...run scan
  {
    boost::mutex::scoped_lock lock(io_mutex);
    std::cout << "finished scan" << std::endl;
    // notify listeners by iterating through the vector
    // and calling notify()
  }
}

int main(int argc, char* argv[])
{
  std::vector<Listener> listeners;
  // create one thread per scan
  boost::thread thrd1(
    boost::bind(&scan, 1, listeners));
  boost::thread thrd2(
    boost::bind(&scan, 2, listeners));
  //thrd1.join();
  //thrd2.join();
  return 0;
}

Does this look roughly correct? Do I need a mutex around the calls to the listeners? Is it OK to get rid of the joins?
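
For reference, Listener is not defined in the question; the code above only assumes it has a notify() member. A hypothetical minimal definition, just to make the sketch concrete, might look like this:

#include <iostream>

// Hypothetical Listener; the real interface is not shown in the question.
class Listener
{
public:
  void notify()
  {
    // react to a completed scan, e.g. log it or update some state
    std::cout << "listener notified" << std::endl;
  }
};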

Comments (3)

人间不值得 2024-07-25 06:56:58

Whether you need a lock or not depends on what you do with the notification. I think it would be more apt to put the lock inside the notify() function of only those listeners that need single-threaded access.
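
A minimal sketch of that idea (internal locking), assuming a hypothetical Listener that guards its own state with a per-listener mutex, so callers need no external lock:

#include <boost/thread/mutex.hpp>
#include <iostream>

// Hypothetical listener that serializes access to its own state internally,
// so no external lock is needed around notify().
class Listener
{
public:
  Listener() : notifications_(0) {}

  void notify()
  {
    boost::mutex::scoped_lock lock(mutex_);  // locks only this listener's state
    ++notifications_;
    std::cout << "scan finished, notification #" << notifications_ << std::endl;
  }

private:
  boost::mutex mutex_;
  int notifications_;
};

Note that holding a mutex as a member makes the listener non-copyable, so in practice it would be stored and passed by pointer or reference rather than copied into each thread.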

心在旅行 2024-07-25 06:56:58

I don't know the Boost stuff, but conceptually it looks right. You have Observers that want to be notified of state changes. When a "qualifying event" happens, you have to run through a list (or vector, something like that) of Observers to notify them. You also want to make sure somehow that concurrent notifications don't cause you trouble.

(See the Wikipedia article on the Observer pattern.)
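
A minimal sketch of that structure (the names below are illustrative, not from the question): a subject owns the list of observers and walks it when a qualifying event occurs, taking a lock so concurrent registrations and notifications don't interfere:

#include <boost/thread/mutex.hpp>
#include <cstddef>
#include <vector>

// Illustrative observer interface; the question's Listener plays this role.
class Observer
{
public:
  virtual ~Observer() {}
  virtual void notify() = 0;
};

// Illustrative subject: owns the observer list and guards it with a mutex.
class ScanSubject
{
public:
  void addObserver(Observer* observer)
  {
    boost::mutex::scoped_lock lock(mutex_);
    observers_.push_back(observer);
  }

  void notifyAll()  // call when a "qualifying event" (a finished scan) occurs
  {
    boost::mutex::scoped_lock lock(mutex_);
    for (std::size_t i = 0; i < observers_.size(); ++i)
      observers_[i]->notify();
  }

private:
  boost::mutex mutex_;
  std::vector<Observer*> observers_;
};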

灼疼热情 2024-07-25 06:56:58

Your Listener::Notify() is being called from multiple threads, so unless Notify() has no side effects, you'll have to do one of these three things:

  1. External lock (your example): acquiring a mutex before calling Listener::Notify()
  2. Internal lock: Listener::Notify() will acquire a lock internally
  3. Lock free: search for "lock-free algorithm" on Google

There are pros/cons for each option...

For what (I think) you need, option 1 will be good.
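
A minimal sketch of option 1 under those assumptions (a shared mutex taken before the notification loop; the Listener below is hypothetical and does no locking of its own):

#include <boost/thread/mutex.hpp>
#include <cstddef>
#include <iostream>
#include <vector>

// Hypothetical listener with no locking of its own.
struct Listener
{
  void notify() { std::cout << "scan finished" << std::endl; }
};

boost::mutex notify_mutex;  // shared by every scanning thread

void notify_listeners(std::vector<Listener>& listeners)
{
  // External locking: the caller serializes all notify() calls,
  // so Listener::notify() itself can stay unaware of threading.
  boost::mutex::scoped_lock lock(notify_mutex);
  for (std::size_t i = 0; i < listeners.size(); ++i)
    listeners[i].notify();
}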

And you'll have to keep the join statements; otherwise your main() may exit before your threads can finish their job. Also consider using boost::thread_group and join_all().
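
A minimal sketch of that, reusing the scan() function and Listener type from the question above (so it is not standalone), with boost::thread_group managing the threads and join_all() blocking until every scan has finished:

#include <boost/thread.hpp>
#include <boost/bind.hpp>
#include <vector>

int main()
{
  std::vector<Listener> listeners;

  boost::thread_group scans;
  scans.create_thread(boost::bind(&scan, 1, listeners));
  scans.create_thread(boost::bind(&scan, 2, listeners));

  // Block until every scan thread has finished; without this,
  // main() may return while the scans are still running.
  scans.join_all();
  return 0;
}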
