- Introduction to Python
- Getting started with Python and the IPython notebook
- Functions are first class objects
- Data science is OSEMN
- Working with text
- Preprocessing text data
- Working with structured data
- Using SQLite3
- Using HDF5
- Using numpy
- Using Pandas
- Computational problems in statistics
- Computer numbers and mathematics
- Algorithmic complexity
- Linear Algebra and Linear Systems
- Linear Algebra and Matrix Decompositions
- Change of Basis
- Optimization and Non-linear Methods
- Practical Optimization Routines
- Finding roots
- Optimization Primer
- Using scipy.optimize
- Gradient descent
- Newton’s method and variants
- Constrained optimization
- Curve fitting
- Finding parameters for ODE models
- Optimization of graph node placement
- Optimization of standard statistical models
- Fitting ODEs with the Levenberg–Marquardt algorithm
- 1D example
- 2D example
- Algorithms for Optimization and Root Finding for Multivariate Problems
- Expectation Maximization (EM) Algorithm
- Monte Carlo Methods
- Resampling methods
- Resampling
- Simulations
- Setting the random seed
- Sampling with and without replacement
- Calculation of Cook’s distance
- Permutation resampling
- Design of simulation experiments
- Example: Simulations to estimate power
- Check with R
- Estimating the CDF
- Estimating the PDF
- Kernel density estimation
- Multivariate kernel density estimation
- Markov Chain Monte Carlo (MCMC)
- Using PyMC2
- Using PyMC3
- Using PyStan
- C Crash Course
- Code Optimization
- Using C code in Python
- Using functions from various compiled languages in Python
- Julia and Python
- Converting Python Code to C for speed
- Optimization bake-off
- Writing Parallel Code
- Massively parallel programming with GPUs
- Writing CUDA in C
- Distributed computing for Big Data
- Hadoop MapReduce on AWS EMR with mrjob
- Spark on a local machine using 4 nodes
- Modules and Packaging
- Tour of the Jupyter (IPython3) notebook
- Polyglot programming
- What you should know and learn more about
- Wrapping R libraries with Rpy
Matrices as Linear Transformations
Let’s consider what a matrix does to a vector. Matrix multiplication has a geometric interpretation: when we multiply a vector by a matrix, we rotate it, reflect it, dilate it, or apply some combination of those three. Multiplying by a matrix therefore transforms one vector into another vector, and this mapping is known as a linear transformation.
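For instance, a \(2 \times 2\) rotation matrix rotates every vector by a fixed angle. Here is a minimal sketch; the angle and the vector are arbitrary choices for illustration, not values used elsewhere in these notes:

import numpy as np

theta = np.pi/4                                   # rotate by 45 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 0.0])
print(R.dot(x))                                   # approximately [0.7071, 0.7071]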
Important Facts:
- Any matrix defines a linear transformation
- The matrix form of a linear transformation is NOT unique
- We need only define a transformation by saying what it does to a basis
Suppose we have a matrix \(A\) that defines some transformation, written in the standard basis. If we take any invertible matrix \(B\) and regard its columns as a new basis, then
\[B^{-1}AB\]
represents the same transformation. This operation is called a change of basis, because we are simply expressing the transformation with respect to a different basis.
This is what we do in PCA. We express the matrix in a basis of eigenvectors (more on this later).
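As a small preview of that idea, here is a minimal sketch (the symmetric matrix below is an arbitrary example, not one used elsewhere in these notes): expressing a symmetric matrix in the basis of its eigenvectors makes it diagonal.

import numpy as np

S = np.array([[3.0, 1.0],
              [1.0, 2.0]])                  # an arbitrary symmetric matrix

lam, Q = np.linalg.eigh(S)                  # columns of Q are eigenvectors of S

D = np.linalg.inv(Q).dot(S).dot(Q)          # S expressed in the eigenvector basis
print(np.round(D, 6))                       # diagonal matrix
print(np.allclose(D, np.diag(lam)))         # True: the diagonal holds the eigenvalues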
Let \(f(x)\) be the linear transformation that takes \(e_1=(1,0)\) to \(f(e_1)=(2,3)\) and \(e_2=(0,1)\) to \(f(e_2) = (1,1)\). A matrix representation of \(f\) would be given by:
\[\begin{split}A = \left(\begin{matrix}2 & 1\\3&1\end{matrix}\right)\end{split}\]
This is the matrix we use if we consider the vectors of \(\mathbb{R}^2\) to be linear combinations of the form
\[c_1 e_1 + c_2 e_2\]
Now, consider a second pair of (linearly independent) vectors in \(\mathbb{R}^2\), say \(v_1=(1,3)\) and \(v_2=(4,1)\). We first find the transformation that takes \(e_1\) to \(v_1\) and \(e_2\) to \(v_2\). A matrix representation for this is:
\[\begin{split}B = \left(\begin{matrix}1 & 4\\3&1\end{matrix}\right)\end{split}\]
Our original transformation \(f\) can therefore be expressed with respect to the basis \(v_1, v_2\) via
\[B^{-1}AB\]
since a vector with coordinates \(c\) in the new basis is \(Bc\) in the standard basis, its image under \(f\) is \(ABc\), and converting that image back to new-basis coordinates gives \(B^{-1}ABc\).
import numpy as np
from scipy import linalg

A = np.array([[2,1],[3,1]])    # transformation f in the standard basis

e1 = np.array([1,0])           # standard basis vectors e1, e2
e2 = np.array([0,1])

print(A.dot(e1))               # demonstrate that A e1 is (2,3)
print(A.dot(e2))               # demonstrate that A e2 is (1,1)

# new basis vectors
v1 = np.array([1,3])
v2 = np.array([4,1])

# how v1 and v2 are transformed by A
print("Av1: ")
print(A.dot(v1))
print("Av2: ")
print(A.dot(v2))

# change of basis matrix from the standard basis to v1, v2
B = np.array([[1,4],[3,1]])
print(B)

B_inv = linalg.inv(B)
print("B B_inv ")
print(B.dot(B_inv))            # check inverse

# f expressed in the basis v1, v2: T = B^{-1} A B
T = B_inv.dot(A.dot(B))

# v1 has coordinates (1, 0) in the new basis, so T.dot(e1) gives the
# new-basis coordinates of f(v1); reassembling should recover A v1 = (5, 6)
coeffs = T.dot(e1)
print(coeffs[0]*v1 + coeffs[1]*v2)
[2 3]
[1 1]
Av1:
[5 6]
Av2:
[ 9 13]
[[1 4]
 [3 1]]
B B_inv
[[  1.0000e+00   0.0000e+00]
 [  5.5511e-17   1.0000e+00]]
[ 5.  6.]
import numpy as np
import matplotlib.pyplot as plt

def plot_vectors(vs):
    """Plot vectors in vs assuming origin at (0,0)."""
    n = len(vs)
    X, Y = np.zeros((2, n))                  # all vectors start at the origin
    U, V = np.vstack(vs).T                   # x- and y-components of each vector
    plt.quiver(X, Y, U, V, range(n),
               angles='xy', scale_units='xy', scale=1)
    # pad the axis limits by 5% of the data range
    xmin, xmax = np.min([U, X]), np.max([U, X])
    ymin, ymax = np.min([V, Y]), np.max([V, Y])
    xrng = xmax - xmin
    yrng = ymax - ymin
    xmin -= 0.05*xrng
    xmax += 0.05*xrng
    ymin -= 0.05*yrng
    ymax += 0.05*yrng
    plt.axis([xmin, xmax, ymin, ymax])
e1 = np.array([1,0])
e2 = np.array([0,1])
A = np.array([[2,1],[3,1]])
# Here is a simple plot showing Ae_1 and Ae_2
# You can show other transformations if you like
plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
plot_vectors([e1, e2])
plt.subplot(1,2,2)
plot_vectors([A.dot(e1), A.dot(e2)])
plt.tight_layout()
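In the same spirit, the plot_vectors helper above can be reused to compare the new basis vectors \(v_1, v_2\) with their images under \(A\). This is a small usage sketch, not part of the original figure, and it assumes the definitions of plot_vectors, A, and plt from the cells above:

# hypothetical extra plot: the new basis vectors and their images under A
v1 = np.array([1,3])
v2 = np.array([4,1])

plt.figure(figsize=(8,4))
plt.subplot(1,2,1)
plot_vectors([v1, v2])                  # the new basis vectors
plt.subplot(1,2,2)
plot_vectors([A.dot(v1), A.dot(v2)])    # their images under the transformation
plt.tight_layout()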