Solving Linear Equations

Vectors and Linear Equations

The central theme of Linear Algebra is solving systems of linear equations. The canonical representation of such a system is $Ax = b$. There are two ways of interpreting this system: the row picture, where each equation describes a line or plane and the solution is their intersection, and the column picture, where $b$ is a linear combination of the columns of $A$ with the entries of $x$ as the coefficients.
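
To make the two pictures concrete, here is a quick NumPy sketch (my own toy example, not from the book):

```python
import numpy as np

# A toy 2x2 system: x1 + 2*x2 = 5 and 3*x1 + x2 = 5.
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([5.0, 5.0])

x = np.linalg.solve(A, b)                 # x = [1, 2]

# Row picture: each row of A dotted with x reproduces the matching entry of b.
print(A @ x)                              # [5. 5.]

# Column picture: b is a combination of the columns of A.
print(x[0] * A[:, 0] + x[1] * A[:, 1])    # [5. 5.]
```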

Idea of Elimination

Again assume the system of linear equations given by $Ax=b$, where $A$ is the coefficient matrix, $x$ is the vector to be solved for, and $b$ is the right-hand-side vector. Just as we use elimination when solving systems of linear equations by hand, matrix elimination uses pivot multipliers to reduce the system to an **upper triangular system**. The idea is to transform the coefficient matrix into an upper triangular matrix and then use **back substitution** to arrive at the solution. This forms the basic algorithmic approach to solving a system of linear equations.
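
Here is a minimal sketch of the algorithm in NumPy (my own simplified version; it assumes every pivot turns out non-zero, so no row exchanges are needed):

```python
import numpy as np

def gaussian_elimination(A, b):
    """Reduce Ax = b to an upper triangular system Ux = c, then back-substitute.

    Assumes A is square and every pivot is non-zero (no row exchanges).
    """
    U = A.astype(float).copy()
    c = b.astype(float).copy()
    n = len(b)

    # Forward elimination: zero out the entries below each pivot.
    for j in range(n - 1):
        for i in range(j + 1, n):
            multiplier = U[i, j] / U[j, j]     # the pivot multiplier
            U[i, j:] -= multiplier * U[j, j:]
            c[i] -= multiplier * c[j]

    # Back substitution on the triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (c[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[2.0, 1.0], [6.0, 5.0]])
b = np.array([5.0, 21.0])
print(gaussian_elimination(A, b))          # [1. 3.]
```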

There are three ways elimination can break down, as described in the book:

  1. A zero appears in a pivot position, no row exchange can fix it, and the system has no solution.
  2. A zero appears in a pivot position, no row exchange can fix it, and the system has infinitely many solutions.
  3. A zero appears in a pivot position, but a row below has a non-zero entry in that column.

In the third case, you can exchange rows to recover a triangular system and arrive at a solution. In the first two cases you get no solution or infinitely many solutions, respectively. The third case is called non-singular, and the first two cases are called singular.

The idea of elimination given here is called the Gaussian Elimination Method.

For the algorithm, please refer to section 2.2 of the book.

Elimination using Matrices

Each of the steps in the Gaussian Elimination Method can be implemented as a matrix operation. The idea is to devise an Elimination Matrix. This matrix, $E$, multiplies the coefficient matrix to perform the necessary row eliminations. So the question arises: what is matrix multiplication? Funnily enough, this is the first time the book addresses matrix multiplication. It is defined as follows:

If $B$ has several columns $b_{1}, b_{2}, b_{3}$, then the columns of $AB$ are $Ab_{1}, Ab_{2}, Ab_{3}$
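
This definition is easy to check numerically (a quick sketch with made-up matrices):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

# Column j of AB equals A times column j of B.
for j in range(2):
    print(np.allclose((A @ B)[:, j], A @ B[:, j]))   # True, True
```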

The section also introduces the idea of an augmented matrix.

Matrix $P_{1, 2}$ is the row-interchange matrix. When it multiplies $A$ from the left, the first and second rows of $A$ are exchanged.
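
For instance, here is a sketch (values chosen arbitrarily) of an elimination matrix $E_{21}$ that subtracts a multiple of row 1 from row 2, and of the row-exchange matrix $P_{1,2}$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [6.0, 5.0]])

# E_21 subtracts 3 times row 1 from row 2 (3 = 6/2 is the pivot multiplier).
E21 = np.array([[1.0, 0.0],
                [-3.0, 1.0]])
print(E21 @ A)        # [[2. 1.], [0. 2.]] -- upper triangular

# P_12 is the identity matrix with its rows exchanged.
P12 = np.array([[0.0, 1.0],
                [1.0, 0.0]])
print(P12 @ A)        # [[6. 5.], [2. 1.]] -- rows of A swapped
```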

Rules for Matrix Operations (Multiplication and Addition)

Addition is obvious, so for now I am going to leave out how you add two matrices. Multiplication is interesting, and also fundamental to Robotics: basic transformations between frames are done using matrix multiplication. The textbook gives 4 ways to multiply 2 matrices.

To multiply $AB$, $A$ must have $n$ columns and $B$ must have $n$ rows.

The Fundamental Law of Matrix Multiplication is given by the following:

\[(AB)C = A(BC)\]

Four ways of multiplying Matrices

  1. The Dot Product method
  2. The Column method - here you split $B$ into its constituent column vectors and treat matrix multiplication as $n$ separate matrix-vector multiplications. As you may recall, a matrix-vector multiplication is a linear combination of the columns of the matrix, with the entries of the vector as the scalars.
  3. The Row method - here the picture is reversed: each row of $A$ multiplies the whole matrix $B$, producing one row of $AB$.
  4. The Columns multiply Rows method - column $k$ of $A$ multiplies row $k$ of $B$, and $AB$ is the sum of these rank-one (outer product) matrices. All four views are verified in the sketch below.
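
Here is a sketch (my own) verifying that all four views produce the same product:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
n = 2

# 1. Dot product: entry (i, j) is row i of A dotted with column j of B.
dot = np.array([[A[i] @ B[:, j] for j in range(n)] for i in range(n)])

# 2. Columns: column j of AB is A times column j of B.
cols = np.column_stack([A @ B[:, j] for j in range(n)])

# 3. Rows: row i of AB is row i of A times the whole matrix B.
rows = np.vstack([A[i] @ B for i in range(n)])

# 4. Columns times rows: AB is a sum of rank-one outer products.
outer = sum(np.outer(A[:, k], B[k]) for k in range(n))

print(all(np.allclose(m, A @ B) for m in (dot, cols, rows, outer)))  # True
```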

Laws for matrix operations

For matrix multiplication:

\[AB \neq BA \text{ (in general)} \\ A(B + C) = AB + AC \\ (A + B)C = AC + BC \\ A(BC) = (AB)C\]
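
These laws are easy to confirm numerically; a quick sketch with random matrices (for which $AB = BA$ will almost surely fail):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

print(np.allclose(A @ B, B @ A))                  # False: not commutative
print(np.allclose(A @ (B + C), A @ B + A @ C))    # True: left distributive
print(np.allclose((A + B) @ C, A @ C + B @ C))    # True: right distributive
print(np.allclose(A @ (B @ C), (A @ B) @ C))      # True: associative
```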

Inverse Matrices

Assume $A$ is a square matrix. Whatever the matrix $A$ does, $A^{-1}$ undoes. The product of a matrix and its inverse is the identity matrix.

\[A A^{-1} = I\]

The following are a few notes on inverses

  1. The inverse exists if and only if elimination produces $n$ pivots. Refer to the Gaussian Elimination Method to understand what pivots mean.
  2. A matrix cannot have two different inverses.
  3. If $A$ is invertible the only solution to $Ax=b$ is $x=A^{-1}b$.
  4. If $A$ is invertible, then $Ax=0$ can have only the zero solution $x=A^{-1}0=0$.
  5. A square matrix $A$ is invertible only if its determinant is non-zero.
  6. A diagonal matrix has an inverse provided no diagonal entries are zero.
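
A short NumPy sketch (my own example) illustrating a few of these notes:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

# Note 5: A is invertible because its determinant is non-zero.
print(np.linalg.det(A))                        # 10.0

A_inv = np.linalg.inv(A)
print(np.allclose(A @ A_inv, np.eye(2)))       # True: A A^{-1} = I

# Note 3: the only solution to Ax = b is x = A^{-1} b.
b = np.array([1.0, 0.0])
print(np.allclose(np.linalg.solve(A, b), A_inv @ b))   # True

# Note 6: a diagonal matrix with non-zero entries inverts entrywise.
D = np.diag([2.0, 5.0])
print(np.linalg.inv(D))                        # diag(0.5, 0.2)
```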

The inverse of a Product $AB$

\[(AB)^{-1} = B^{-1}A^{-1}\]

Think about why the order reverses: in $ABx$, $B$ acts first and $A$ acts second, so to undo the product you must undo $A$ first and then $B$. At this point in the book, the geometry of matrices has not yet been discussed, so you can visualise this as operations applied to a bunch of vectors.
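
A quick numerical check of the reversed order (toy matrices of my own):

```python
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 2.0]])
B = np.array([[3.0, 0.0], [1.0, 1.0]])

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)   # note the reversed order
print(np.allclose(lhs, rhs))                # True
```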

Singular vs. Invertible

The central question about inverting matrices is: how do we know whether a matrix has an inverse? This matters particularly in light of formulating the solution of a set of linear equations $Ax = b$: the solution is given by $x = A^{-1}b$ only if $A^{-1}$ exists.

A square matrix $A$ with a full set of pivots always has a two-sided inverse, meaning a single matrix $A^{-1}$ that works from both sides: $A^{-1}A = I$ and $AA^{-1} = I$. For square matrices, a left inverse is automatically a right inverse.

And finally, the book notes that diagonally dominant matrices (where each diagonal entry exceeds the sum of the absolute values of the other entries in its row) are always invertible.

Any matrix that does not have an inverse is a singular matrix.
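
For example, a matrix whose second row is a multiple of its first has only one pivot and is singular; NumPy refuses to invert it (my own sketch):

```python
import numpy as np

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])       # row 2 = 2 * row 1: only one pivot

print(np.linalg.det(S))          # 0.0: singular
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError as e:
    print("not invertible:", e)  # Singular matrix
```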

Elimination = Factorization: $A=LU$

Many key ideas in Linear Algebra are really factorizations of a matrix: the original matrix is described as a product of two or three special matrices. The first factorization comes from elimination.

\[A = LU\]

$L$ is lower triangular: it is the product of the inverses of the elimination matrices used to transform $A$ into the upper triangular $U$, and its entries below the diagonal are exactly the pivot multipliers. A final optional step divides $U$ by its pivots, collecting them in a diagonal matrix $D$, to attain the following formulation.

\[A = LDU\]
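
Here is a sketch (my own, reusing the toy matrix from the elimination example) that builds $L$ from the inverse of a single elimination matrix and then pulls the pivots out into $D$:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [6.0, 5.0]])

# One elimination step: E subtracts 3 times row 1 from row 2.
E = np.array([[1.0, 0.0],
              [-3.0, 1.0]])
U = E @ A                          # upper triangular

# L is the inverse of E: it puts the multiplier 3 back below the diagonal.
L = np.linalg.inv(E)               # [[1, 0], [3, 1]]
print(np.allclose(A, L @ U))       # True: A = LU

# Divide U by its pivots to get A = LDU.
D = np.diag(np.diag(U))
print(np.allclose(A, L @ D @ (np.linalg.inv(D) @ U)))  # True
```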

Transpose and Permutations

Symmetric matrices satisfy $A^{T} = A$.
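
A couple of quick checks (my own sketch; the second uses the standard fact that $A^{T}A$ is always symmetric):

```python
import numpy as np

S = np.array([[1.0, 2.0],
              [2.0, 5.0]])
print(np.allclose(S.T, S))                  # True: S is symmetric

# A^T A is symmetric for any matrix A, even a rectangular one.
A = np.array([[1.0, 2.0, 0.0],
              [3.0, 1.0, 4.0]])
print(np.allclose((A.T @ A).T, A.T @ A))    # True
```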