Python code to speed up computing matrix cofactors

Published 2024-11-18 01:17:22


As part of a complex task, I need to compute matrix cofactors. I did this in a straightforward way using this nice code for computing matrix minors. Here is my code:

import numpy as np

def matrix_cofactor(matrix):
    C = np.zeros(matrix.shape)
    nrows, ncols = C.shape
    for row in range(nrows):
        for col in range(ncols):
            # Select every row except `row` and every column except `col`
            # with fancy indexing to form the minor.
            minor = matrix[np.array(list(range(row)) + list(range(row+1, nrows)))[:, np.newaxis],
                           np.array(list(range(col)) + list(range(col+1, ncols)))]
            C[row, col] = (-1)**(row+col) * np.linalg.det(minor)
    return C

It turns out that this matrix cofactor code is the bottleneck, and I would like to optimize the code snippet above. Any ideas as to how to do this?


Comments (4)

七颜 2024-11-25 01:17:22


If your matrix is invertible, the cofactor is related to the inverse:

def matrix_cofactor(matrix):
    return np.linalg.inv(matrix).T * np.linalg.det(matrix)

This gives large speedups (~ 1000x for 50x50 matrices). The main reason is fundamental: this is an O(n^3) algorithm, whereas the minor-det-based one is O(n^5).
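Not part of the original answer, but a quick way to convince yourself the two definitions agree is to compare them on a small random matrix (a sketch; `np.delete` is used here to form the minors instead of the question's index arrays):

```python
import numpy as np

def cofactor_minor_det(matrix):
    # O(n^5): determinant of each (n-1)x(n-1) minor, as in the question.
    C = np.zeros(matrix.shape)
    nrows, ncols = C.shape
    for row in range(nrows):
        for col in range(ncols):
            minor = np.delete(np.delete(matrix, row, axis=0), col, axis=1)
            C[row, col] = (-1) ** (row + col) * np.linalg.det(minor)
    return C

def cofactor_inverse(matrix):
    # O(n^3): cofactor matrix from the inverse and determinant.
    return np.linalg.inv(matrix).T * np.linalg.det(matrix)

A = np.random.default_rng(0).standard_normal((5, 5))  # almost surely invertible
assert np.allclose(cofactor_minor_det(A), cofactor_inverse(A))
```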

This probably means that there is also some clever way to calculate the cofactor of non-invertible matrices (i.e., not using the mathematical formula you use above, but some other equivalent definition).


If you stick with the det-based approach, what you can do is the following:

The majority of the time seems to be spent inside det. (Check out line_profiler to find this out yourself.) You can try to speed that part up by linking Numpy with the Intel MKL, but other than that, there is not much that can be done.

You can speed up the other part of the code like this:

minor = np.zeros([nrows-1, ncols-1])
for row in range(nrows):
    for col in range(ncols):
        minor[:row,:col] = matrix[:row,:col]
        minor[row:,:col] = matrix[row+1:,:col]
        minor[:row,col:] = matrix[:row,col+1:]
        minor[row:,col:] = matrix[row+1:,col+1:]
        ...

This gains some 10-50% of total runtime depending on the size of your matrices. The original code has Python range and list manipulations, which are slower than direct slice indexing. You could also try to be more clever and copy only the parts of the minor that actually change --- however, after the above change, close to 100% of the time is already spent inside numpy.linalg.det, so further optimization of the other parts does not make much sense.
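The elided loop body above presumably finishes the same way as the question's code, with a determinant and sign; a complete sketch under that assumption:

```python
import numpy as np

def matrix_cofactor_sliced(matrix):
    # Reuse one pre-allocated minor buffer and fill it with four direct
    # slices instead of rebuilding fancy-index arrays on every iteration.
    nrows, ncols = matrix.shape
    C = np.zeros((nrows, ncols))
    minor = np.zeros((nrows - 1, ncols - 1))
    for row in range(nrows):
        for col in range(ncols):
            minor[:row, :col] = matrix[:row, :col]
            minor[row:, :col] = matrix[row + 1:, :col]
            minor[:row, col:] = matrix[:row, col + 1:]
            minor[row:, col:] = matrix[row + 1:, col + 1:]
            C[row, col] = (-1) ** (row + col) * np.linalg.det(minor)
    return C
```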

九八野马 2024-11-25 01:17:22


The calculation of np.array(range(row)+range(row+1,nrows))[:,np.newaxis] does not depend on col, so you could move it outside the inner loop and cache the value. Depending on the number of columns you have, this might give a small optimization.
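A sketch of that caching (variable names here are mine, not from the answer), with the row selector hoisted out of the inner loop:

```python
import numpy as np

def matrix_cofactor_cached(matrix):
    C = np.zeros(matrix.shape)
    nrows, ncols = C.shape
    for row in range(nrows):
        # The row selector does not depend on col: build it once per outer iteration.
        row_idx = np.array(list(range(row)) + list(range(row + 1, nrows)))[:, np.newaxis]
        for col in range(ncols):
            col_idx = np.array(list(range(col)) + list(range(col + 1, ncols)))
            minor = matrix[row_idx, col_idx]
            C[row, col] = (-1) ** (row + col) * np.linalg.det(minor)
    return C
```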

烟织青萝梦 2024-11-25 01:17:22
from sympy import Matrix
A = Matrix([[1,2,0],[0,3,0],[0,7,1]])
A.adjugate().T

And the output (which is the cofactor matrix) is:

Matrix([
[ 3, 0,  0],
[-2, 1, -7],
[ 0, 0,  3]])
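As a numeric cross-check (mine, not part of this answer), the inverse-based formula from the first answer reproduces the same cofactor matrix for this example:

```python
import numpy as np

A = np.array([[1, 2, 0],
              [0, 3, 0],
              [0, 7, 1]], dtype=float)
# det(A) * inv(A).T is the cofactor matrix for invertible A.
C = np.linalg.inv(A).T * np.linalg.det(A)
print(np.round(C).astype(int))  # matches the SymPy result above
```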
べ繥欢鉨o。 2024-11-25 01:17:22


Instead of using the inverse and determinant, I'd suggest using the SVD

def cofactors(A):
    U, sigma, Vt = np.linalg.svd(A)
    N = len(sigma)
    # Row i of the reshaped array holds sigma with the i-th entry replaced
    # by 1, so the row products give prod_{j != i} sigma_j.
    g = np.tile(sigma, N)
    g[::(N + 1)] = 1
    products = np.prod(np.reshape(g, (N, N)), axis=1)
    # det(U) * det(Vt) is +/-1 and supplies the sign of det(A).
    G = np.diag(np.linalg.det(U) * np.linalg.det(Vt) * products)
    return U @ G @ Vt
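One way to sanity-check the SVD route (a self-contained sketch, with the sign taken as det(U)·det(Vt) so the result matches det(A)·inv(A).T on invertible input):

```python
import numpy as np

def cofactors_svd(A):
    # Cofactor matrix via SVD; no inverse is taken, so this also
    # remains well-defined when A is singular.
    U, sigma, Vt = np.linalg.svd(A)
    N = len(sigma)
    g = np.tile(sigma, N)
    g[::N + 1] = 1  # zero out the i-th singular value in row i (replace with 1)
    products = np.prod(g.reshape(N, N), axis=1)  # prod_{j != i} sigma_j
    sign = np.linalg.det(U) * np.linalg.det(Vt)  # +/-1 for the orthogonal factors
    return U @ np.diag(sign * products) @ Vt

A = np.random.default_rng(2).standard_normal((4, 4))
assert np.allclose(cofactors_svd(A), np.linalg.inv(A).T * np.linalg.det(A))
```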