(Ab)using shared_ptr as a reference counter



Recently I thought of a cunning plan (tm :P)).
I have to update a settings structure in my program (let's say every 15 seconds). The settings structure is used by multiple functions, and each of those functions is called by multiple threads.
So I need a reference counter to know when it is safe to free the old settings struct.
Is this the correct way to do it?
Please don't respond that it is OK if you haven't read the code carefully; when it comes to shared pointers it's easy to make mistakes when doing abuses like this (trust me).
EDIT: I forgot to mention an important part. I think this implementation prevents the ref counter from dropping to 0, because I initialize it in updateSettings() and it doesn't drop until updateSettings() is called again (and by then myFunction uses the other of the 2 settings in memory).

#include <memory>
#include <cstdio>
#include <iostream>
#include <vector>
using namespace std;
struct STNGS
{
    int i;
    vector<double> v;
};
static int CUR_STNG=0;
shared_ptr<STNGS> stngsArray[2];
int myFunction() //called by multiple threads
{
    shared_ptr<STNGS> pStngs=stngsArray[CUR_STNG]; //local copy keeps the current settings alive
    STNGS& stngs=*pStngs;
    //do some stuff using stngs
    return 0;
}

void updateSettings()
{
    auto newIndex=(CUR_STNG+1)%2;
    stngsArray[newIndex].reset(new STNGS); //publish new settings into the other slot
    CUR_STNG=newIndex;
}
void initialize()
{
    auto newIndex=CUR_STNG;
    stngsArray[newIndex].reset(new STNGS);
    CUR_STNG=newIndex;
}
int main()
{
    initialize();
    //launch bunch of threads that are calling myFunction
    while(true)
    {
        //call updateSettings every 15 seconds
    }
}

EDIT: Using feedback from the comments, I updated the code:

#include <memory>
#include <cstdio>
#include <iostream>
#include <vector>
using namespace std;
static const int N_STNG_SP=4;
static int CUR_STNG=0;
struct STNGS
{
    int i;
    vector<double> v;
    STNGS()
    {
        for (int i=0;i<10;++i)
            v.push_back(42);
    }
};
shared_ptr<STNGS> stngs[N_STNG_SP];
int myFunction() //called by multiple threads
{
    shared_ptr<STNGS> pStngs=stngs[CUR_STNG]; //local copy keeps the current settings alive
    STNGS& curStngs=*pStngs;
    //do some stuff using curStngs
    return 0;
}

void updateSettings()
{
    auto pStng=new STNGS;
    //fill *pStng
    int newVer=(CUR_STNG+1)%N_STNG_SP;
    stngs[newVer].reset(pStng); //publish into the next slot
    CUR_STNG=newVer;
}
void initialize()
{
    auto pStng=new STNGS;
    //fill *pStng
    int newVer=(CUR_STNG+1)%N_STNG_SP;
    stngs[newVer].reset(pStng);
    CUR_STNG=newVer;
}
int main()
{
    initialize();
    //launch bunch of threads that are calling myFunction
    while(true)
    {
        //call updateSettings every 15 seconds
        updateSettings();
    }
}
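
The thread launch and the 15-second timer are only sketched as comments above. Purely for illustration (the reader count and the loop below are hypothetical, not part of the original post), the elided wiring around the code above might look roughly like this; the answer below explains why the pattern itself is still racy:

#include <chrono>
#include <thread>
#include <vector>

int main()
{
    initialize();
    //launch a bunch of threads that keep calling myFunction
    std::vector<std::thread> readers;
    for (int t = 0; t < 4; ++t)   //4 reader threads, chosen arbitrarily for this sketch
        readers.emplace_back([]{ while (true) myFunction(); });
    while (true)
    {
        //call updateSettings every 15 seconds
        std::this_thread::sleep_for(std::chrono::seconds(15));
        updateSettings();
    }
}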


Comments (1)

一身软味 2024-11-08 14:26:19


I would not trust this code. I believe it is lacking proper memory barriers on all memory shared by the different threads, except for the two reference counts.

This looks like a good application for shared_mutex to me.

Edit:

20.7.2.2 [util.smartptr.shared]/p4 says:

For purposes of determining the presence of a data race, member functions shall access and modify only the shared_ptr and weak_ptr objects themselves and not objects they refer to.
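
Distilled down, that is exactly the access pattern in the question: one thread copies a global shared_ptr while another thread replaces the same shared_ptr object, and CUR_STNG is additionally a plain int read and written by several threads with no synchronization. A minimal, deliberately racy reduction (hypothetical names, not a fix):

#include <memory>
#include <thread>

std::shared_ptr<int> g;   //stands in for stngsArray[CUR_STNG]

int main()
{
    g = std::make_shared<int>(0);
    std::thread reader([]{ for (int k = 0; k < 100000; ++k) { std::shared_ptr<int> p = g; } }); //copy: reads the shared_ptr object g
    std::thread writer([]{ for (int k = 0; k < 100000; ++k) g = std::make_shared<int>(k); });   //assignment: modifies the same object g
    reader.join();
    writer.join();
}
//undefined behaviour: both threads access the same shared_ptr object without synchronization,
//even though the reference count inside the control block is itself atomic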

However, instead of using a shared_mutex, another option might be to use the API in 20.7.2.5 shared_ptr atomic access [util.smartptr.shared.atomic]:

Concurrent access to a shared_ptr object from multiple threads does not introduce a data race if the access is done exclusively via the functions in this section and the instance is passed as their first argument.

template<class T>
    bool atomic_is_lock_free(const shared_ptr<T>* p);
template<class T>
    shared_ptr<T> atomic_load(const shared_ptr<T>* p);
template<class T>
    shared_ptr<T> atomic_load_explicit(const shared_ptr<T>* p, memory_order mo);
template<class T>
    void atomic_store(shared_ptr<T>* p, shared_ptr<T> r);
template<class T>
    void atomic_store_explicit(shared_ptr<T>* p, shared_ptr<T> r, memory_order mo);
template<class T>
    shared_ptr<T> atomic_exchange(shared_ptr<T>* p, shared_ptr<T> r);
template<class T>
    shared_ptr<T>
    atomic_exchange_explicit(shared_ptr<T>* p, shared_ptr<T> r, memory_order mo);
template<class T>
    bool
    atomic_compare_exchange_weak(shared_ptr<T>* p, shared_ptr<T>* v, shared_ptr<T> w);
template<class T>
    bool
    atomic_compare_exchange_strong( shared_ptr<T>* p, shared_ptr<T>* v, shared_ptr<T> w);
template<class T>
    bool
    atomic_compare_exchange_weak_explicit(shared_ptr<T>* p, shared_ptr<T>* v,
                                          shared_ptr<T> w, memory_order success,
                                          memory_order failure);
template<class T>
    bool
    atomic_compare_exchange_strong_explicit(shared_ptr<T>* p, shared_ptr<T>* v,
                                            shared_ptr<T> w, memory_order success,
                                            memory_order failure);

shared_mutex will be easier to get right. But the atomic shared_ptr API may yield a higher performance solution.

Update:

Here is untested code for the shared_mutex solution (note that shared_mutex is not std here, but comes from a 3rd-party library):

struct STNGS
{
    int i;
    vector<double> v;
    ting::shared_mutex m;
};

STNGS stngs;

int myFunction() //called by multiple threads
{
    shared_lock<shared_mutex> _(stngs.m);
    //do some stuff using stngs
    return 0;
}

void updateSettings()
{
    unique_lock<shared_mutex> _(stngs.m);
    //fill stngs
}

void initialize()
{
    //fill stngs
}
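
For reference, C++17 later added std::shared_mutex (and C++14 std::shared_timed_mutex) to the standard library, so the same pattern no longer needs a 3rd-party mutex. A minimal sketch under that assumption:

#include <mutex>
#include <shared_mutex>
#include <vector>

struct STNGS
{
    int i;
    std::vector<double> v;
    std::shared_mutex m;   //C++17
};

STNGS stngs;

int myFunction() //called by multiple threads
{
    std::shared_lock<std::shared_mutex> lock(stngs.m);  //many readers may hold this concurrently
    //do some stuff using stngs
    return 0;
}

void updateSettings()
{
    std::unique_lock<std::shared_mutex> lock(stngs.m);  //writer excludes all readers
    //fill stngs
}

Readers take a shared lock and run concurrently with each other; the updater takes the exclusive lock and waits until all readers have left.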

Here is untested code which uses the atomic load/store functions for shared_ptr:

#include <memory>
#include <vector>
using namespace std;

struct STNGS
{
    int i;
    vector<double> v;
};

shared_ptr<STNGS> pStng;

int myFunction() //called by multiple threads
{
    shared_ptr<STNGS> stngs = atomic_load(&pStng);
    //do some stuff using *stngs
    return 0;
}

void updateSettings()
{
    shared_ptr<STNGS> newStng(new STNGS);
    //fill *newStng
    atomic_store(&pStng, newStng);
}

void initialize()
{
    pStng.reset(new STNGS);
    //fill *pStng
}
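
For reference, C++20 deprecates these free atomic_* overloads for shared_ptr in favour of the std::atomic<std::shared_ptr<T>> specialization; the same idea might look like this (a sketch assuming a C++20 compiler):

#include <atomic>
#include <memory>
#include <vector>

struct STNGS
{
    int i;
    std::vector<double> v;
};

std::atomic<std::shared_ptr<STNGS>> pStng;   //C++20 partial specialization, defined in <memory>

int myFunction() //called by multiple threads
{
    std::shared_ptr<STNGS> stngs = pStng.load();  //atomic snapshot of the current settings
    //do some stuff using *stngs
    return 0;
}

void updateSettings()
{
    auto newStng = std::make_shared<STNGS>();
    //fill *newStng
    pStng.store(newStng);  //atomically publish the new settings
}

void initialize()
{
    pStng.store(std::make_shared<STNGS>());
    //fill initial settings before readers start
}

As before, the old settings object is destroyed only once the last reader releases its local shared_ptr copy.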