How to compute the eigenvector of a column stochastic matrix in C++

Posted on 2024-10-05 18:48:17


I have a column stochastic matrix A (an n-by-n real, non-negative matrix) and want to solve the following equation in C++: Ax = x

I am assuming that I need to find an eigenvector x whose eigenvalue is 1 (right?), but I couldn't figure out how to do it in C++. So far I have checked out some math libraries such as Seldon, CPPScaLapack, and Eigen. Among them, Eigen seems like a good option, but I couldn't understand how to use any of them to solve the equation above.

Can you give me some suggestions/code snippets or ideas to solve the equation? Any help is highly appreciated.

Thanks.


空宴 2024-10-12 18:48:17


Since the largest eigenvalue of a stochastic matrix $M$ is unity, you can find the corresponding eigenvector by iteration (unless you're really bad at guessing initial values).

Start with some randomly chosen initial vector $v_1$ whose values (probabilities) sum to unity. Apply $M$ to $v_1$ to get $Mv_1$. Now renormalize this new vector $Mv_1$, that is, divide by the sum of its elements to get $v_2$. This is a new vector of probabilities and it will be closer to the desired eigenvector (unless your initial guess happened to be orthogonal to the eigenvector).

Repeat this process until $v_k$ approaches stability. That should be a vector with $Mv_k = v_k$, as desired.
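A minimal sketch of this power iteration with Eigen (not part of the original answer; the example matrix, the tolerance, and the iteration cap are arbitrary illustrative choices):

#include <Eigen/Dense>
#include <iostream>
using namespace Eigen;

int main() {
    const int n = 4;
    MatrixXd A(n, n);
    // Example column-stochastic matrix: every column sums to 1.
    A << 0.5, 0.2, 0.1, 0.3,
         0.2, 0.4, 0.3, 0.1,
         0.2, 0.1, 0.4, 0.2,
         0.1, 0.3, 0.2, 0.4;

    VectorXd v = VectorXd::Constant(n, 1.0 / n);   // initial probability vector, entries sum to 1
    for (int i = 0; i < 1000; ++i) {
        VectorXd w = A * v;
        w /= w.sum();                               // renormalize so the entries sum to 1
        if ((w - v).norm() < 1e-12) { v = w; break; }
        v = w;
    }
    std::cout << "v:\n" << v << "\nA*v:\n" << A * v << "\n";  // the two should agree
    return 0;
}

In exact arithmetic v converges to the eigenvector with eigenvalue 1; in practice you stop once successive iterates agree to within a tolerance.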

嗫嚅 2024-10-12 18:48:17


Another method is to compute the kernel of your matrix minus the identity matrix. This may or may not be faster than using power iteration as explained by Carl, depending on the size of the matrix and the other eigenvalues. Power iteration is better when the matrix gets bigger and the second eigenvalue gets further away from one.

The idea is to rewrite Ax = x as Ax - x = 0. Then use that Ix = x, where I denotes the identity matrix. Thus, Ax - x = 0 is equivalent to Ax - Ix = 0 or (A-I) x = 0. So your eigenvector x lies in the kernel (or null space) of A-I.

This tutorial page explains how to compute the kernel of a matrix using Eigen. Some untested code:

#include <Eigen/Dense>
using namespace Eigen;

MatrixXf A;
/* initialize A */
MatrixXf M = A - MatrixXf::Identity(A.rows(), A.cols());  // M = A - I, as derived above
FullPivLU<MatrixXf> lu_decomp(M);
VectorXf x = lu_decomp.kernel().col(0);
/* x now contains the vector you want */

You may find that the kernel is empty. This means that either the matrix is not really stochastic, or that you need to adapt the threshold (see the page linked above).
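If that happens, one way to loosen the rank decision is FullPivLU's setThreshold(); a small, untested sketch in the same spirit (the 1e-5 value is an arbitrary guess):

FullPivLU<MatrixXf> lu_decomp(M);            // M = A - I as before
lu_decomp.setThreshold(1e-5f);               // pivots below this count as zero when determining rank
if (lu_decomp.dimensionOfKernel() > 0) {
    VectorXf x = lu_decomp.kernel().col(0);
    x /= x.sum();                            // rescale so the entries sum to 1 (a probability vector)
}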
