Solving the normal equations Ax = b in C++
I would like to solve the system of linear equations:
Ax = b
where A is an n x m matrix (not square), b is an n x 1 vector, and x is an m x 1 vector. A and b are known, n is on the order of 50-100, and m is about 2 (in other words, A is at most 100 x 2).
I know the closed-form solution for x: $x = (A^T A)^{-1} A^T b$
I found a few ways to solve it: uBLAS (Boost), LAPACK, Eigen, etc., but I don't know how fast the CPU computation of x is with those packages. I also don't know whether this is a numerically fast way to solve for x.
What is important for me is that the CPU computation time be as short as possible, and that the documentation be good, since I am a newbie.
After solving the normal equations Ax = b, I would like to improve my approximation using regression, and maybe later by applying a Kalman filter.
My question is: which C++ library is the most robust and fastest for the needs I describe above?
Answers (5)
This is a least-squares problem, because you have more equations than unknowns. If m is indeed equal to 2, that tells me a simple linear least-squares fit will be sufficient for you. The formulas can be written out in closed form; you don't need a library.
If m is in the single digits, I'd still say you can easily solve this via A^T A x = A^T b. A simple LU decomposition to solve for the coefficients would be sufficient. This should be a much more straightforward problem than you're making it out to be.
uBLAS is not optimized unless you use it with optimized BLAS bindings.
The following are optimized for multi-threading and SIMD:
By the way, I don't know exactly what you are doing, but as a rule the normal equations are not the proper way to do linear regression. Unless your matrix is well conditioned, QR or SVD should be preferred, since forming A^T A squares the condition number.
If licensing is not a problem, you might try the GNU Scientific Library:
http://www.gnu.org/software/gsl/
It comes with a BLAS library that you can swap for an optimized one later if you need to (for example the Intel, ATLAS, or ACML (AMD chip) libraries).
If you have access to MATLAB, I would recommend using its C libraries.
If you really need to specialize, you can approximate the matrix inversion (to arbitrary precision) using the Skilling method. It uses only O(N^2) operations, rather than the O(N^3) of the usual matrix-inversion approaches (LU decomposition, etc.).
It is described in the thesis by Gibbs linked here (around page 27):
http://www.inference.phy.cam.ac.uk/mng10/GP/thesis.ps.gz