Methods for inverting a large matrix
Hi, I've been doing some research on matrix inversion (linear algebra), and I want to use C++ template programming for the algorithm. What I found out is that there are a number of methods, such as Gauss-Jordan elimination or LU decomposition, and I found the function LU_factorize (C++ Boost library).
I want to know if there are other methods, and which one is better (advantages/disadvantages) from a programmer's or a mathematician's perspective.
If there are no other, faster methods, is there already a (matrix) inversion function in the Boost library? I've searched a lot and didn't find any.
Comments (5)
As you mention, the standard approach is to perform an LU factorization and then solve for the identity. This can be implemented using the LAPACK library, for example with `dgetrf` (factor) and `dgetri` (compute inverse). Most other linear algebra libraries have roughly equivalent functions.
There are some slower methods that degrade more gracefully when the matrix is singular or nearly singular, and they are used for that reason. For example, the Moore-Penrose pseudoinverse is equal to the inverse if the matrix is invertible, and is often useful even if the matrix is not invertible; it can be calculated using a singular value decomposition.
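A minimal sketch of the idea, assuming a small fixed-size matrix: LU-factorize with partial pivoting, then solve A x = e_i for each unit basis vector e_i to obtain inv(A) column by column. This is illustrative only; production code should call LAPACK's `dgetrf`/`dgetri` (or an equivalent) rather than a hand-rolled factorization.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <utility>

constexpr int N = 3;  // fixed small size, just for the sketch
using Mat = std::array<std::array<double, N>, N>;

// In-place LU factorization with partial pivoting; returns false if
// the matrix is (numerically) singular.
bool lu_factor(Mat& a, std::array<int, N>& piv) {
    for (int i = 0; i < N; ++i) piv[i] = i;
    for (int k = 0; k < N; ++k) {
        int p = k;  // pick the largest pivot in column k
        for (int i = k + 1; i < N; ++i)
            if (std::fabs(a[i][k]) > std::fabs(a[p][k])) p = i;
        if (std::fabs(a[p][k]) < 1e-12) return false;
        std::swap(a[k], a[p]);
        std::swap(piv[k], piv[p]);
        for (int i = k + 1; i < N; ++i) {
            a[i][k] /= a[k][k];                // multiplier, stored in L part
            for (int j = k + 1; j < N; ++j)
                a[i][j] -= a[i][k] * a[k][j];  // update trailing submatrix
        }
    }
    return true;
}

// Solve (LU) x = P b by forward, then backward substitution.
std::array<double, N> lu_solve(const Mat& lu, const std::array<int, N>& piv,
                               const std::array<double, N>& b) {
    std::array<double, N> x{};
    for (int i = 0; i < N; ++i) x[i] = b[piv[i]];
    for (int i = 1; i < N; ++i)
        for (int j = 0; j < i; ++j) x[i] -= lu[i][j] * x[j];
    for (int i = N - 1; i >= 0; --i) {
        for (int j = i + 1; j < N; ++j) x[i] -= lu[i][j] * x[j];
        x[i] /= lu[i][i];
    }
    return x;
}

// Invert by solving one system per unit basis vector: note the matrix
// is factored once and the factorization is reused for all N solves.
bool invert(const Mat& a, Mat& inv) {
    Mat lu = a;
    std::array<int, N> piv;
    if (!lu_factor(lu, piv)) return false;
    for (int col = 0; col < N; ++col) {
        std::array<double, N> e{};
        e[col] = 1.0;
        auto x = lu_solve(lu, piv, e);
        for (int row = 0; row < N; ++row) inv[row][col] = x[row];
    }
    return true;
}
```

The point of the structure: factoring costs O(n^3) but each subsequent solve only O(n^2), which is why one factorization followed by n solves is the standard route to the inverse.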
I'd suggest you take a look at the Eigen source code.
Please Google or Wikipedia for the buzzwords below.
First, make sure you really want the inverse. Solving a system does not require inverting a matrix. Matrix inversion can be performed by solving n systems, with unit basis vectors as right hand sides. So I'll focus on solving systems, because it is usually what you want.
It depends on what "large" means. Methods based on decomposition must generally store the entire matrix. Once you have decomposed the matrix, you can solve for multiple right hand sides at once (and thus invert the matrix easily). I won't discuss factorization methods here, as you're likely to know them already.
Please note that when a matrix is large, its condition number is very likely to be huge (equivalently, its reciprocal condition number is close to zero), which means that the matrix is "numerically non-invertible". Remedy: preconditioning. Check Wikipedia for this; the article is well written.
If the matrix is large, you don't want to store it. If it has a lot of zeros, it is a sparse matrix. Either it has structure (e.g. band diagonal, block matrix, ...), and you have specialized methods for solving systems involving such matrices, or it does not.
When you're faced with a sparse matrix with no obvious structure, or with a matrix you don't want to store, you must use iterative methods. They only involve matrix-vector multiplications, which don't require a particular form of storage: you can compute the coefficients when you need them, or store non-zero coefficients the way you want, etc.
These methods include, for example: the conjugate gradient method (for symmetric positive definite systems), and GMRES or BiCGSTAB (for general systems).
And finally, you can perform some sort of factorization with sparse matrices, with specially designed algorithms to minimize the number of non-zero elements to store.
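A minimal sketch of one such iterative method, the conjugate gradient algorithm for a symmetric positive definite system A x = b. The key point from the answer above: the matrix enters only through a matrix-vector product, so A never has to be stored explicitly.

```cpp
#include <cmath>
#include <functional>
#include <vector>

using Vec = std::vector<double>;
// The matrix is represented only by its action v -> A v.
using MatVec = std::function<Vec(const Vec&)>;

double dot(const Vec& u, const Vec& v) {
    double s = 0.0;
    for (std::size_t i = 0; i < u.size(); ++i) s += u[i] * v[i];
    return s;
}

// Conjugate gradient with zero initial guess; iterates until the
// residual norm drops below tol or max_iter is reached.
Vec conjugate_gradient(const MatVec& A, const Vec& b,
                       double tol = 1e-10, int max_iter = 1000) {
    Vec x(b.size(), 0.0), r = b, p = r;   // r = b - A*0 = b
    double rr = dot(r, r);
    for (int it = 0; it < max_iter && std::sqrt(rr) > tol; ++it) {
        Vec Ap = A(p);
        double alpha = rr / dot(p, Ap);   // optimal step along p
        for (std::size_t i = 0; i < x.size(); ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        double rr_new = dot(r, r);
        double beta = rr_new / rr;        // make next direction A-conjugate
        for (std::size_t i = 0; i < p.size(); ++i)
            p[i] = r[i] + beta * p[i];
        rr = rr_new;
    }
    return x;
}
```

In practice you would pair this with a preconditioner (as mentioned above) and, for non-symmetric matrices, switch to GMRES or BiCGSTAB; this sketch only shows the matrix-free structure.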
Depending on how large the matrix actually is, you probably need to keep only a small subset of the columns in memory at any given time. This might require overriding the low-level write and read operations on the matrix elements, which I'm not sure Eigen, an otherwise pretty decent library, will allow you to do.
For those very narrow cases where the matrix is really big, there is the StlXXL library, designed for memory access to arrays that are mostly stored on disk.
EDIT: To be more precise, if you have a matrix that does not fit in the available RAM, the preferred approach is to do blockwise inversion. The matrix is split recursively until each block fits in RAM (this is, of course, a tuning parameter of the algorithm). The tricky part here is to avoid starving the CPU of matrices to invert while they are pulled in and out of disk. This might require investigating appropriate parallel filesystems, since even with StlXXL this is likely to be the main bottleneck. Still, let me repeat the mantra: premature optimization is the root of all programming evil. This evil can only be banished with the cleansing ritual of Code, Execute, Profile.
You might want to use a C++ wrapper around LAPACK. LAPACK is very mature code: well-tested, optimized, etc.
One such wrapper is the Intel Math Kernel Library.