Lapackpp vs Boost BLAS

Posted 2024-10-10 03:09:38


For a start, I am a newbie in C++.

I am writing a program for my Master's thesis, part of which is supposed to solve a regression problem recursively.

I would like to solve:

Ax = y

In my case computation speed is not negligible, which is why I would like to know whether Boost::BLAS computing

x = (A^T A)^{-1} A^T y

would require less computation time than Lapackpp (I am using Gentoo).

P.S.
I was able to find class documentation on the Lapackpp project site, but no examples. Could someone provide me with some examples, in case Lapack turns out to be faster than Boost::BLAS?

Thanks


Comments (4)

孤城病女 2024-10-17 03:09:38


From a numerical analysis standpoint, you never want to write code that

  • Explicitly inverts a matrix, or
  • Forms the matrix of normal equations (A^T A) for a regression

Both of these are more work and less accurate (and likely less stable) than the alternatives that solve the same problem directly.

Whenever you see some math showing a matrix inversion, that should be read to mean "solve a system of linear equations", or factor the matrix and use the factorization to solve the system. Both BLAS and Lapack have routines to do this.

Similarly, for the regression, call a library function that computes a regression, or read how to do it yourself. The normal equations method is the textbook wrong way to do it.

深海里的那抹蓝 2024-10-17 03:09:38


High-level interfaces and low-level optimizations are two different things.

LAPACK and uBLAS provide a high-level interface and an unoptimized low-level implementation. Hardware-optimized low-level routines (or bindings to them) have to come from somewhere else. Once bindings are provided, LAPACK and uBLAS can use the optimized low-level routines instead of their own unoptimized implementations.

For example, ATLAS provides optimized low-level routines, but only a limited high-level interface (level-3 BLAS, etc.). You can bind ATLAS to LAPACK, and LAPACK will then use ATLAS for the low-level work. Think of LAPACK as a senior manager who delegates technical work to experienced engineers (ATLAS). The same goes for uBLAS: you can bind uBLAS to MKL, and the result is an optimized C++ library. Check the documentation and use Google to figure out how to do it.
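In practice the "binding" often happens at link time. The link lines below are illustrative only (file names like `regress.cpp` are placeholders, and exact library names and install paths vary by distribution and ATLAS build):

```shell
# Link against the reference (unoptimized) implementations:
g++ regress.cpp -o regress -llapack -lblas

# Link against ATLAS instead; LAPACK's low-level BLAS calls are
# then satisfied by ATLAS's optimized routines:
g++ regress.cpp -o regress -llapack -lf77blas -lcblas -latlas
```

On Gentoo, the eselect blas/lapack mechanism can also switch the system-wide BLAS/LAPACK provider, so the same binary picks up the optimized backend.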

韶华倾负 2024-10-17 03:09:38


Do you really need to implement this in C++? Would, for example, Python/NumPy be an alternative for you? And for recursive regression (least squares), I recommend looking up MIT Professor Strang's lectures on linear algebra and/or his books.

旧时模样 2024-10-17 03:09:38


Armadillo wraps BLAS and LAPACK in a nice C++ interface, and provides the following Matlab-like functions directly related to your problem:

  • solve(), to solve a system of linear equations
  • pinv(), pseudo-inverse (which uses SVD internally)