What is the Big-O of linear regression?

Published 2024-08-15 16:22:34

How large a system is it reasonable to attempt to do a linear regression on?

Specifically: I have a system with ~300K sample points and ~1200 linear terms. Is this computationally feasible?


Comments (4)

久隐师 2024-08-22 16:22:34


The linear regression is computed as (X'X)^-1 X'Y.

If X is an (n x k) matrix:

  1. (X' X) takes O(n*k^2) time and produces a (k x k) matrix

  2. The matrix inversion of a (k x k) matrix takes O(k^3) time

  3. (X' Y) takes O(n*k) time and produces a (k x 1) vector

  4. The final multiplication of the (k x k) inverse with the (k x 1) vector takes O(k^2) time

So the Big-O running time is O(k^2*(n + k)).

See also: http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Matrix_algebra

If you get fancy it looks like you can get the time down to O(k^2*(n+k^0.376)) with the Coppersmith–Winograd algorithm.
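The pipeline above can be sketched with NumPy. This is only a minimal illustration on small random data; `n` and `k` here are toy stand-ins for the 300K x 1200 problem, and the variable names are mine:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 20
X = rng.standard_normal((n, k))
true_beta = rng.standard_normal(k)
Y = X @ true_beta + 0.01 * rng.standard_normal(n)

XtX = X.T @ X                      # O(n*k^2), yields a (k x k) matrix
XtY = X.T @ Y                      # O(n*k),   yields a (k x 1) vector
beta = np.linalg.inv(XtX) @ XtY    # O(k^3) inverse, O(k^2) final multiply

print(np.allclose(beta, true_beta, atol=0.05))  # True: coefficients recovered
```

In production code you would solve the system rather than form the inverse explicitly, but the asymptotic cost is the same.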

泡沫很甜 2024-08-22 16:22:34


You can express this as a matrix equation:

A x = b

where the matrix A has 300K rows and 1200 columns, the coefficient vector x is 1200x1, and the RHS vector b is 300Kx1.

If you multiply both sides by the transpose of the matrix, you get a square system of equations, A' A x = A' b, for the 1200 unknowns, where A' A is 1200x1200. You can use LU decomposition or any other algorithm you like to solve for the coefficients. (This is what least squares is doing.)

So the Big-O behavior is something like O(m·n²), where m = 300K and n = 1200, since forming A' A dominates. You'd also account for the transpose product A' b, the LU decomposition, and the forward-back substitution to get the coefficients.
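A minimal sketch of this route (dimensions scaled down from 300K x 1200; note `np.linalg.solve` uses an LU factorization internally, so it matches the approach described):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3000, 12                      # toy stand-ins for 300K rows, 1200 columns
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)           # RHS vector, m x 1

AtA = A.T @ A                        # n x n normal matrix, cost O(m*n^2)
Atb = A.T @ b                        # n x 1 right-hand side, cost O(m*n)
x = np.linalg.solve(AtA, Atb)        # LU factorization + substitution, O(n^3)

# Cross-check against NumPy's dedicated least-squares routine.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_ref))  # True
```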

凡间太子 2024-08-22 16:22:34


The linear regression is computed as (X'X)^-1 X'y.

As far as I learned, y is a vector of results (or in other words: dependent variables).

Therefore, if X is an (n × m) matrix and y is an (n × 1) matrix:

  1. The transposing of an (n × m) matrix takes O(n⋅m) time and produces an (m × n) matrix
  2. (X' X) takes O(n⋅m²) time and produces a (m × m) matrix
  3. The matrix inversion of a (m × m) matrix takes O(m³) time
  4. (X' y) takes O(n⋅m) time and produces a (m × 1) matrix
  5. The final matrix multiplication of an (m × m) and an (m × 1) matrix takes O(m²) time

So the Big-O running time is O(n⋅m + n⋅m² + m³ + n⋅m + m²).

Now, we know that:

  • m² ≤ m³
  • n⋅m ≤ n⋅m²

so asymptotically, the actual Big-O running time is O(n⋅m² + m³) = O(m²(n + m)).

And that's what we have from
http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Matrix_algebra

But, we know that there's a significant difference between the case n → ∞ and m → ∞.
https://en.wikipedia.org/wiki/Big_O_notation#Multiple_variables

So which one should we choose? Obviously it's the number of observations which is more likely to grow, rather than the number of attributes.
So my conclusion is that if we assume the number of attributes remains constant, we can ignore the m terms, which is a relief: the time complexity of multivariate linear regression then becomes merely linear, O(n). On the other hand, we can expect computing time to grow sharply when the number of attributes increases substantially.
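The scaling argument can be made concrete with a toy leading-order operation count (constant factors ignored; this only mirrors the O(m²(n + m)) formula, it measures nothing):

```python
def regression_flops(n: int, m: int) -> int:
    """Leading-order cost of (X'X)^-1 X'y: n*m^2 (forming X'X) + m^3 (inversion)."""
    return n * m**2 + m**3

base = regression_flops(100_000, 1_200)
# Holding m fixed, doubling n roughly doubles the cost: linear in n.
print(round(regression_flops(200_000, 1_200) / base, 2))  # 1.99
# Holding n fixed, doubling m roughly quadruples it while n*m^2 dominates.
print(round(regression_flops(100_000, 2_400) / base, 2))  # 4.05
```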

时光暖心i 2024-08-22 16:22:34


The closed-form linear regression is computed as follows. The derivative of RSS(W) is

-2H^t (y-HW)

So, we solve for

-2H^t (y-HW) = 0

Then, the W value is

W = (H^t H)^-1 H^t y

where:
W: is the vector of expected weights
H: is the features matrix N*D where N is the number of observations, and D is the number of features
y: is the actual value

Then, the complexity of

H^t H is N * D^2

The complexity of the matrix inversion of (H^t H) is D^3

So, the overall complexity of computing

W = (H^t H)^-1 H^t y is N * D^2 + D^3
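The closed form can be sanity-checked against the zero-gradient condition it was derived from; a small NumPy sketch with toy values for N and D:

```python
import numpy as np

rng = np.random.default_rng(2)
N, D = 400, 8
H = rng.standard_normal((N, D))      # feature matrix, N observations x D features
y = rng.standard_normal(N)           # actual values

W = np.linalg.inv(H.T @ H) @ (H.T @ y)   # W = (H^t H)^-1 H^t y

grad = -2 * H.T @ (y - H @ W)        # derivative of RSS(W); should vanish at W
print(np.allclose(grad, 0, atol=1e-8))  # True
```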
