- Introduction to Python
- Getting started with Python and the IPython notebook
- Functions are first class objects
- Data science is OSEMN
- Working with text
- Preprocessing text data
- Working with structured data
- Using SQLite3
- Using HDF5
- Using numpy
- Using Pandas
- Computational problems in statistics
- Computer numbers and mathematics
- Algorithmic complexity
- Linear Algebra and Linear Systems
- Linear Algebra and Matrix Decompositions
- Change of Basis
- Optimization and Non-linear Methods
- Practical Optimization Routines
- Finding roots
- Optimization Primer
- Using scipy.optimize
- Gradient descent
- Newton’s method and variants
- Constrained optimization
- Curve fitting
- Finding parameters for ODE models
- Optimization of graph node placement
- Optimization of standard statistical models
- Fitting ODEs with the Levenberg–Marquardt algorithm
- 1D example
- 2D example
- Algorithms for Optimization and Root Finding for Multivariate Problems
- Expectation Maximization (EM) Algorithm
- Monte Carlo Methods
- Resampling methods
- Resampling
- Simulations
- Setting the random seed
- Sampling with and without replacement
- Calculation of Cook’s distance
- Permutation resampling
- Design of simulation experiments
- Example: Simulations to estimate power
- Check with R
- Estimating the CDF
- Estimating the PDF
- Kernel density estimation
- Multivariate kernel density estimation
- Markov Chain Monte Carlo (MCMC)
- Using PyMC2
- Using PyMC3
- Using PyStan
- C Crash Course
- Code Optimization
- Using C code in Python
- Using functions from various compiled languages in Python
- Julia and Python
- Converting Python Code to C for speed
- Optimization bake-off
- Writing Parallel Code
- Massively parallel programming with GPUs
- Writing CUDA in C
- Distributed computing for Big Data
- Hadoop MapReduce on AWS EMR with mrjob
- Spark on a local machine using 4 nodes
- Modules and Packaging
- Tour of the Jupyter (IPython3) notebook
- Polyglot programming
- What you should know and learn more about
- Wrapping R libraries with Rpy
Matrix Decompositions
Matrix decompositions are an important step in solving linear systems in a computationally efficient manner.
LU Decomposition and Gaussian Elimination
LU stands for ‘Lower Upper’: an LU decomposition of a matrix \(A\) is a factorization
\[A= LU\]
where \(L\) is lower triangular and \(U\) is upper triangular.
Now, LU decomposition is essentially Gaussian elimination, but we work only with the matrix \(A\) (as opposed to the augmented matrix).
Let’s review how Gaussian elimination (GE) works. We will deal with a \(3\times 3\) system of equations for conciseness, but everything here generalizes to the \(n\times n\) case. Consider the following equation:
\[\begin{split}\left(\begin{matrix}a_{11}&a_{12} & a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{matrix}\right)\left(\begin{matrix}x_1\\x_2\\x_3\end{matrix}\right) = \left(\begin{matrix}b_1\\b_2\\b_3\end{matrix}\right)\end{split}\]
For simplicity, let us assume that the coefficient matrix \(A\) is non-singular. To solve the system using GE, we start with the ‘augmented matrix’:
\[\begin{split}\left(\begin{array}{ccc|c}a_{11}&a_{12} & a_{13}& b_1 \\a_{21}&a_{22}&a_{23}&b_2\\a_{31}&a_{32}&a_{33}&b_3\end{array}\right)\end{split}\]
We begin at the first entry, \(a_{11}\). If \(a_{11} \neq 0\), then we divide the first row by \(a_{11}\) and then subtract the appropriate multiple of the first row from each of the other rows, zeroing out the first entry of all rows. (If \(a_{11}\) is zero, we need to permute rows. We will not go into detail of that here.) The result is as follows:
\[\begin{split}\left(\begin{array}{ccc|c} 1 & \frac{a_{12}}{a_{11}} & \frac{a_{13}}{a_{11}} & \frac{b_1}{a_{11}} \\ 0 & a_{22} - a_{21}\frac{a_{12}}{a_{11}} & a_{23} - a_{21}\frac{a_{13}}{a_{11}} & b_2 - a_{21}\frac{b_1}{a_{11}}\\ 0&a_{32}-a_{31}\frac{a_{12}}{a_{11}} & a_{33} - a_{31}\frac{a_{13}}{a_{11}} &b_3- a_{31}\frac{b_1}{a_{11}}\end{array}\right)\end{split}\]
We repeat the procedure for the second row, first dividing by the leading entry, then subtracting the appropriate multiple of the resulting row from each of the third and first rows, so that the second entries of rows 1 and 3 are zero. We could continue until the matrix on the left is the identity. In that case, we can then just ‘read off’ the solution: i.e., the vector \(x\) is the resulting column vector on the right. Usually, it is more efficient to stop at row echelon form (upper triangular, with ones on the diagonal), and then use back substitution to obtain the final answer.
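Back substitution itself takes only a few lines. Here is a minimal sketch (the function name and test values are ours for illustration, not from any library):

import numpy as np

def back_substitute(U, y):
    # solve U x = y for upper-triangular U with non-zero diagonal
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # subtract the already-solved entries, then divide by the pivot
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[2., 1., 1.],
              [0., 3., 2.],
              [0., 0., 4.]])
y = np.array([5., 8., 4.])
x = back_substitute(U, y)
print(x, np.allclose(U @ x, y))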
Note that in some cases, it is necessary to permute rows to obtain row echelon form. This is called partial pivoting. If we also manipulate columns, that is called full pivoting.
It should be mentioned that we may obtain the inverse of a matrix using GE, by reducing the matrix \(A\) to the identity, with the identity matrix as the augmented portion.
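A minimal sketch of that idea, reducing the augmented matrix \([A | I]\) to \([I | A^{-1}]\) (no pivoting, so it assumes each pivot it meets is non-zero; invert_by_ge is our name, not a library function):

import numpy as np

def invert_by_ge(A):
    # reduce the augmented matrix [A | I] to [I | inv(A)]
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for i in range(n):
        M[i] = M[i] / M[i, i]                 # scale the pivot row to 1
        for j in range(n):
            if j != i:
                M[j] = M[j] - M[j, i] * M[i]  # zero the rest of column i
    return M[:, n:]

A = np.array([[1, 3, 4], [2, 1, 3], [4, 1, 2]])
print(invert_by_ge(A))
print(np.allclose(invert_by_ge(A), np.linalg.inv(A)))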
Now, this is all fine when we are solving a system one time, for one outcome \(b\). Many applications involve solutions to multiple problems, where the left-hand-side of our matrix equation does not change, but there are many outcome vectors \(b\). In this case, it is more efficient to decompose \(A\).
First, we start just as in ge, but we ‘keep track’ of the various multiples required to eliminate entries. For example, consider the matrix
\[\begin{split}A = \left(\begin{matrix} 1 & 3 & 4 \\ 2& 1& 3\\ 4&1&2 \end{matrix}\right)\end{split}\]
We need to multiply row \(1\) by \(2\) and subtract from row \(2\) to eliminate the first entry in row \(2\), and then multiply row \(1\) by \(4\) and subtract from row \(3\). Instead of entering zeroes into the first entries of rows \(2\) and \(3\), we record the multiples required for their elimination, as so:
\[\begin{split}\left(\begin{matrix} 1 & 3 & 4 \\ (2)& -5 & -5\\ (4)&-11&-14 \end{matrix}\right)\end{split}\]
And then we eliminate the second entry in the third row:
\[\begin{split}\left(\begin{matrix} 1 & 3 & 4 \\ (2)& -5 & -5\\ (4)&(\frac{11}{5})&-3 \end{matrix}\right)\end{split}\]
And now we have the decomposition:
\[\begin{split}L= \left(\begin{matrix} 1 & 0 & 0 \\ 2& 1 & 0\\ 4&\frac{11}{5}&1 \end{matrix}\right) U = \left(\begin{matrix} 1 & 3 & 4 \\ 0& -5 & -5\\ 0&0&-3 \end{matrix}\right)\end{split}\]
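This bookkeeping translates directly into code. Here is a minimal sketch without pivoting (it assumes no zero pivots are encountered), which reproduces the \(L\) and \(U\) above:

import numpy as np

def lu_nopivot(A):
    # Gaussian elimination, recording each multiplier in L
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]    # the recorded multiplier
            U[i] = U[i] - L[i, k] * U[k]   # eliminate entry (i, k)
    return L, U

L, U = lu_nopivot([[1, 3, 4], [2, 1, 3], [4, 1, 2]])
print(L)    # [[1, 0, 0], [2, 1, 0], [4, 2.2, 1]]
print(U)    # [[1, 3, 4], [0, -5, -5], [0, 0, -3]]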
We can solve the system by solving two triangular problems: forward substitution for
\[Ly = b\]
followed by back substitution for
\[Ux=y\]
These are both \(O(n^2)\), so it is more efficient to decompose when there are multiple outcomes to solve for.
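scipy packages exactly this pattern: scipy.linalg.lu_factor performs the \(O(n^3)\) decomposition once, and scipy.linalg.lu_solve then handles each new right-hand side with the two \(O(n^2)\) triangular solves. A short sketch (the right-hand sides here are arbitrary examples):

import numpy as np
import scipy.linalg as la

A = np.array([[1., 3., 4.], [2., 1., 3.], [4., 1., 2.]])
lu, piv = la.lu_factor(A)            # O(n^3), done once

for b in (np.array([1., 2., 3.]), np.array([4., 5., 6.])):
    x = la.lu_solve((lu, piv), b)    # O(n^2) per right-hand side
    print(x, np.allclose(A @ x, b))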
Let’s do this with numpy and scipy:
import numpy as np
import scipy.linalg as la

np.set_printoptions(suppress=True)

A = np.array([[1,3,4],[2,1,3],[4,1,2]])
print(A)

# scipy.linalg.lu returns P, L, U with A = P L U, so P.T A = L U
P, L, U = la.lu(A)
print(np.dot(P.T, A))
print(np.dot(L, U))
print(P)
print(L)
print(U)
[[1 3 4]
 [2 1 3]
 [4 1 2]]
[[ 4.  1.  2.]
 [ 1.  3.  4.]
 [ 2.  1.  3.]]
[[ 4.  1.  2.]
 [ 1.  3.  4.]
 [ 2.  1.  3.]]
[[ 0.  1.  0.]
 [ 0.  0.  1.]
 [ 1.  0.  0.]]
[[ 1.      0.      0.    ]
 [ 0.25    1.      0.    ]
 [ 0.5     0.1818  1.    ]]
[[ 4.      1.      2.    ]
 [ 0.      2.75    3.5   ]
 [ 0.      0.      1.3636]]
Note that the scipy decomposition uses partial pivoting (matrix rows are permuted to use the largest pivot). This is because small pivots can lead to numerical instability. Another reason why one should use library functions whenever possible!
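To see why, try eliminating with a tiny pivot by hand and compare against the pivoted library routine (the \(2\times 2\) matrix below is contrived for the purpose):

import numpy as np
import scipy.linalg as la

eps = 1e-17
A = np.array([[eps, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])

# naive elimination using the tiny pivot eps: the huge multiplier 1/eps
# swamps the other entries, and the computed x1 is badly wrong
m = A[1, 0] / A[0, 0]
x2 = (b[1] - m * b[0]) / (A[1, 1] - m * A[0, 1])
x1 = (b[0] - A[0, 1] * x2) / eps
print(x1, x2)        # x1 comes out as 0 instead of ~1

# the pivoted solver swaps rows first and returns roughly [1, 1]
print(la.solve(A, b))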
Cholesky Decomposition
Recall that a square matrix \(A\) is positive definite if
\[\begin{split}u^TA u > 0\end{split}\]
for any non-zero n-dimensional vector \(u\),
and a symmetric, positive-definite matrix \(A\) is a positive-definite matrix such that
\[A = A^T\]
Let \(A\) be a symmetric, positive-definite matrix. There is a unique decomposition such that
\[A = L L^T\]
where \(L\) is lower-triangular with positive diagonal elements and \(L^T\) is its transpose. This decomposition is known as the Cholesky decomposition, and \(L\) may be interpreted as the ‘square root’ of the matrix \(A\).
Algorithm:
Let \(A\) be an \(n\times n\) matrix. We find the matrix \(L\) using the following iterative procedure:
\[\begin{split}A = \left(\begin{matrix}a_{11}&A_{12}\\A_{12}^T&A_{22}\end{matrix}\right) = \left(\begin{matrix}\ell_{11}&0\\ L_{12}&L_{22}\end{matrix}\right) \left(\begin{matrix}\ell_{11}&L_{12}^T\\0&L_{22}^T\end{matrix}\right)\end{split}\]
1.) Let \(\ell_{11} = \sqrt{a_{11}}\)
2.) \(L_{12} = \frac{1}{\ell_{11}}A_{12}^T\)
3.) Solve \(A_{22} - L_{12}L_{12}^T = L_{22}L_{22}^T\) for \(L_{22}\)
Example:
\[\begin{split}A = \left(\begin{matrix}1&3&5\\3&13&23\\5&23&42\end{matrix}\right)\end{split}\] \[\ell_{11} = \sqrt{a_{11}} = 1\] \[L_{12} = \frac{1}{\ell_{11}} A_{12}^T = \left(\begin{matrix}3\\5\end{matrix}\right)\]
The remaining block is the Schur complement:
\[\begin{split}A_{22} - L_{12}L_{12}^T = \left(\begin{matrix}13&23\\23&42\end{matrix}\right) - \left(\begin{matrix}9&15\\15&25\end{matrix}\right) = \left(\begin{matrix}4&8\\8&17\end{matrix}\right)\end{split}\]
Repeating the procedure on this \(2\times 2\) block gives \(\ell_{22} = \sqrt{4} = 2\) and \(\ell_{32} = \frac{8}{2} = 4\), leaving the \(1\times 1\) Schur complement \(17 - 4^2 = 1\). And so we conclude that \(\ell_{33}=1\).
This yields the decomposition:
\[\begin{split}\left(\begin{matrix}1&3&5\\3&13&23\\5&23&42\end{matrix}\right) = \left(\begin{matrix}1&0&0\\3&2&0\\5&4&1\end{matrix}\right)\left(\begin{matrix}1&3&5\\0&2&4\\0&0&1\end{matrix}\right)\end{split}\]
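As a sanity check, the three steps translate almost directly into code. Here is a minimal recursive sketch with no error checking (it assumes \(A\) is symmetric positive-definite; cholesky_steps is our name, not a library function):

import numpy as np

def cholesky_steps(A):
    # returns lower-triangular L with A = L L^T, following steps 1-3
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros((n, n))
    L[0, 0] = np.sqrt(A[0, 0])                       # step 1
    if n == 1:
        return L
    L[1:, 0] = A[1:, 0] / L[0, 0]                    # step 2
    S = A[1:, 1:] - np.outer(L[1:, 0], L[1:, 0])     # Schur complement
    L[1:, 1:] = cholesky_steps(S)                    # step 3, recursively
    return L

A = np.array([[1, 3, 5], [3, 13, 23], [5, 23, 42]])
L = cholesky_steps(A)
print(L)
print(np.allclose(L @ L.T, A))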
Now, with scipy:
A = np.array([[1,3,5],[3,13,23],[5,23,42]])
# note: scipy.linalg.cholesky returns the upper-triangular factor by
# default, so this 'L' is actually L^T in the notation above
L = la.cholesky(A)
print(np.dot(L.T, L))
print(L)
print(A)
[[  1.   3.   5.]
 [  3.  13.  23.]
 [  5.  23.  42.]]
[[ 1.  3.  5.]
 [ 0.  2.  4.]
 [ 0.  0.  1.]]
[[ 1  3  5]
 [ 3 13 23]
 [ 5 23 42]]
Cholesky decomposition is about twice as fast as LU decomposition (though both scale as \(n^3\)).
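A rough way to check this on your own machine (timings vary with the installed BLAS/LAPACK; the size n is arbitrary):

import time
import numpy as np
import scipy.linalg as la

n = 2000
X = np.random.random((n, n))
A = X @ X.T + n * np.eye(n)      # a symmetric positive-definite test matrix

t0 = time.perf_counter()
la.lu_factor(A)
t1 = time.perf_counter()
la.cho_factor(A)
t2 = time.perf_counter()
print('LU:       %.3f s' % (t1 - t0))
print('Cholesky: %.3f s' % (t2 - t1))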