Inlining an overloaded operator in C++

Posted 2024-12-06 04:57:08

Can/should an overloaded operator be inlined to gain better efficiency (with respect to time or anything else) if that operator will be used frequently?

I want to overload the '+' operator to add big vectors very frequently in my code. Hence the question.
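
For context, a minimal sketch of the kind of operator the question is about, assuming a hypothetical Vec wrapper around std::vector<double> (the names are illustrative, not from any particular library):

#include <cassert>
#include <vector>

struct Vec {
    std::vector<double> data;
};

// Defined in a header-style context, so 'inline' mainly satisfies the
// one-definition rule; whether the call is actually inlined is up to the compiler.
inline Vec operator+(const Vec& a, const Vec& b) {
    assert(a.data.size() == b.data.size());
    Vec result;
    result.data.resize(a.data.size());
    for (std::size_t i = 0; i < a.data.size(); ++i)
        result.data[i] = a.data[i] + b.data[i];
    return result;
}

int main() {
    Vec a{{1, 2, 3}}, b{{4, 5, 6}};
    Vec c = a + b;  // for big vectors the per-element loop, not the call itself, dominates
}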


Comments (4)

兮颜 2024-12-13 04:57:08

Ideally, you'd profile your code and then decide what to inline. There really isn't much of a difference between deciding to inline regular functions and deciding to inline overloaded operators.
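
As a very rough first pass (a dedicated profiler such as perf or VTune gives far better data), a minimal timing sketch with std::chrono; the sizes and repetition count here are arbitrary:

#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    // Time the hot loop directly; compare builds or variants before deciding
    // whether forcing anything inline is worth the effort.
    std::vector<double> a(1'000'000, 1.0), b(1'000'000, 2.0), c(1'000'000);

    auto start = std::chrono::steady_clock::now();
    for (int rep = 0; rep < 100; ++rep)
        for (std::size_t i = 0; i < a.size(); ++i)
            c[i] = a[i] + b[i];
    auto stop = std::chrono::steady_clock::now();

    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start);
    std::printf("100 additions of 1M doubles: %lld ms\n", (long long)ms.count());
}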

执手闯天涯 2024-12-13 04:57:08

If you are adding big vectors, the overhead of the function call to operator+ will be small relative to the time it takes to actually add the two vectors. So marking operator+ as inline is unlikely to improve your overall run time.

瞄了个咪的 2024-12-13 04:57:08

Let the compiler decide about optimization.

The keyword inline is misleading: the compiler can, in fact, always do what it needs, just like with the old auto (do you remember those days?) and register.

Its modern meaning is "defined in a header: discard if not used, merge if seen multiple times".
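
A minimal sketch of that modern meaning, assuming a hypothetical header vec_ops.h that several .cpp files include:

// vec_ops.h (hypothetical header name)
#pragma once
#include <vector>

// 'inline' does not force the compiler to inline anything; it allows this
// definition to appear in every translation unit that includes the header,
// and the linker merges the copies instead of reporting a duplicate symbol.
inline std::vector<double> add(const std::vector<double>& a,
                               const std::vector<double>& b) {
    std::vector<double> r(a.size());
    for (std::size_t i = 0; i < a.size(); ++i)
        r[i] = a[i] + b[i];
    return r;
}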

┊风居住的梦幻卍 2024-12-13 04:57:08


The compiler should inline smallish functions for you automatically in release builds.
Much more important is to define a move constructor and move assignment. If your arrays are very large and you chain several operations in a single expression, you can also use expression classes to improve execution speed.

#include <cassert>

// a + b builds a lightweight AddExpr that only remembers its operands;
// the elements are computed when the expression is assigned into an Array.
template <class Left, class Right>
struct AddExpr {
    const Left& _left;
    const Right& _right;

    AddExpr(const Left& left, const Right& right)
        : _left(left), _right(right)
    { assert(left.count() == right.count()); }

    int count() const { return _left.count(); }
    int operator[](int index) const { return _left[index] + _right[index]; }
};

class Array {
    int* data;
    int size;

public:
    explicit Array(int n) : data(new int[n]()), size(n) {}
    ~Array() { delete[] data; }

    int count() const { return size; }
    int operator[](int index) const { return data[index]; }

    // Evaluate any expression (an AddExpr or another Array) element by element.
    template <class Expr>
    Array& operator=(const Expr& expr) {
        for (int i = 0; i < expr.count(); ++i)
            data[i] = expr[i];
        return *this;
    }
};

// One generic operator+ covers Array+Array, Array+AddExpr, AddExpr+Array and AddExpr+AddExpr.
template <class Left, class Right>
AddExpr<Left, Right> operator+(const Left& lhs, const Right& rhs)
{ return AddExpr<Left, Right>(lhs, rhs); }

int main() {
    Array a(1000), b(1000), c(1000), d(1000), e(1000);
    e = (a + b) + (c + d);  // no temporary Arrays on lines like this
}

This removes all the temporary objects and greatly improves cache efficiency. But I've completely forgotten what this technique is called.
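
(The technique sketched above is generally known as expression templates.) For the move constructor and move assignment the answer says are more important, a self-contained sketch follows; this is not the original poster's class, and the names are illustrative:

#include <algorithm>
#include <cstddef>

class BigArray {
    int* data = nullptr;
    std::size_t size = 0;

public:
    explicit BigArray(std::size_t n) : data(new int[n]()), size(n) {}
    ~BigArray() { delete[] data; }

    BigArray(const BigArray& other)                 // deep copy
        : data(new int[other.size]), size(other.size) {
        std::copy(other.data, other.data + size, data);
    }
    BigArray(BigArray&& other) noexcept             // move: steal the buffer instead of copying it
        : data(other.data), size(other.size) {
        other.data = nullptr;
        other.size = 0;
    }
    BigArray& operator=(BigArray other) noexcept {  // copy-and-swap covers both copy and move assignment
        std::swap(data, other.data);
        std::swap(size, other.size);
        return *this;
    }
};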
