MATH 136 - Linear Algebra
MWF 3:30PM - 4:20PM
QNC 2502
Thomas Jung Spier

1 | Vectors in Euclidean Space

1.1 | Vector Addition and Scalar Multiplication

The Euclidean space $\mathbb{R}^n$ is the collection of points $(x_1, \dots, x_n)$ with each $x_i \in \mathbb{R}$.

A point $(x_1, \dots, x_n)$ can also be viewed as the vector connecting the origin, or TAIL, to the point, or HEAD.

Definition (Vectors in $\mathbb{R}^n$) The set $\mathbb{R}^n$ is defined as $\mathbb{R}^n = \{(x_1, \dots, x_n) : x_1, \dots, x_n \in \mathbb{R}\}$

Definition (Vector Equality) Two vectors $\vec{x} = (x_1, \dots, x_n)$ and $\vec{y} = (y_1, \dots, y_n)$ in $\mathbb{R}^n$ are equal if and only if $x_i = y_i$ for all $1 \leq i \leq n$.

Definition (Vector Addition and Scalar Multiplication) Let $\vec{x}, \vec{y} \in \mathbb{R}^n$ be defined as above, and let $c \in \mathbb{R}$ be a scalar. Then $\vec{x} + \vec{y} = (x_1 + y_1, \dots, x_n + y_n)$ and $c\vec{x} = (cx_1, \dots, cx_n)$.

Definition (Linear Combinations) For $\vec{v}_1, \dots, \vec{v}_k \in \mathbb{R}^n$ and $c_1, \dots, c_k \in \mathbb{R}$, we call $c_1\vec{v}_1 + \cdots + c_k\vec{v}_k$ a linear combination of $\vec{v}_1, \dots, \vec{v}_k$. This yields a vector in $\mathbb{R}^n$.
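
For instance (a numeric illustration with vectors chosen here, not from the lecture): in $\mathbb{R}^2$,

$$3\begin{bmatrix}1\\0\end{bmatrix}-2\begin{bmatrix}1\\1\end{bmatrix}=\begin{bmatrix}3-2\\0-2\end{bmatrix}=\begin{bmatrix}1\\-2\end{bmatrix}$$

is a linear combination of $(1,0)$ and $(1,1)$ with coefficients $3$ and $-2$.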

Theorem (Properties of Addition and Scalar Multiplication). Let $\vec{x}, \vec{y}, \vec{z} \in \mathbb{R}^n$ and $c, d \in \mathbb{R}$. Then:

  1. (Associativity of addition) $(\vec{x} + \vec{y}) + \vec{z} = \vec{x} + (\vec{y} + \vec{z})$
  2. (Commutativity of addition) $\vec{x} + \vec{y} = \vec{y} + \vec{x}$
  3. (Zero element for addition) $\vec{x} + \vec{0} = \vec{x}$
  4. (Additive inverse): $\vec{x} + (-\vec{x}) = \vec{0}$
  5. (Associativity): $c(d\vec{x}) = (cd)\vec{x}$
  6. (Identity): $1\vec{x} = \vec{x}$
  7. (Distributive): $c(\vec{x} + \vec{y}) = c\vec{x} + c\vec{y}$
  8. (Distributive): $(c + d)\vec{x} = c\vec{x} + d\vec{x}$

1.2 | Bases

Basis Vectors

You can reach every vector in $\mathbb{R}^2$ with a linear combination of the two standard basis vectors, by choosing the two scalar coefficients appropriately.

Let $S = \{\vec{v}_1, \dots, \vec{v}_k\}$ be a set of vectors in $\mathbb{R}^n$. We define the “span” of $S$ as $\operatorname{Span}(S) = \{c_1\vec{v}_1 + \cdots + c_k\vec{v}_k : c_1, \dots, c_k \in \mathbb{R}\}$
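
As a concrete illustration (vectors chosen here, not an example from class): in $\mathbb{R}^3$,

$$\operatorname{Span}\{(1,0,0),(0,1,0)\} = \{c_1(1,0,0)+c_2(0,1,0) : c_1, c_2 \in \mathbb{R}\} = \{(c_1,c_2,0) : c_1, c_2 \in \mathbb{R}\},$$

which is exactly the $xy$-plane.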

Geometric interpretation of span

  • The span of a set of vectors is the set of all locations I can reach while starting at the origin and walking along those vectors
  • Unless the vectors are linearly dependent,
    • The span of 2 vectors in 3D is a plane
    • The span of 3 vectors in 3D is everything

(Theorem) Cancelling from the Span Let $\vec{v}_1, \dots, \vec{v}_k \in \mathbb{R}^n$. The vector $\vec{v}_k$ can be written as a linear combination of $\vec{v}_1, \dots, \vec{v}_{k-1}$ if and only if $\operatorname{Span}\{\vec{v}_1, \dots, \vec{v}_k\} = \operatorname{Span}\{\vec{v}_1, \dots, \vec{v}_{k-1}\}$ (this essentially lets you cancel vectors out of the span)

Linear Dependence: A set of vectors $\{\vec{v}_1, \dots, \vec{v}_k\}$ in $\mathbb{R}^n$ is said to be linearly dependent if there exist $c_1, \dots, c_k \in \mathbb{R}$, not all zero, such that $c_1\vec{v}_1 + \cdots + c_k\vec{v}_k = \vec{0}$

A set of vectors $\{\vec{v}_1, \dots, \vec{v}_k\}$ in $\mathbb{R}^n$ is said to be linearly independent if the only solution to $c_1\vec{v}_1 + \cdots + c_k\vec{v}_k = \vec{0}$

is the trivial solution with $c_1 = \cdots = c_k = 0$

Basis A basis of a vector space is a set of linearly independent vectors that span the full space. The basis of the zero subspace $\{\vec{0}\}$ is the empty set. There are infinitely many bases of $\mathbb{R}^n$; one could be the standard basis $\{\vec{e}_1, \dots, \vec{e}_n\}$.

Exercise 1.7a:

The only solution is , thus the set is linearly independent

Exercise 1.8a

Exercise 1.8b

Exercise 1.10: Consider the set $\{\vec{e}_1, \dots, \vec{e}_n\}$, where $\vec{e}_i$ is the vector in $\mathbb{R}^n$ with $i$-th entry equal to 1 and all other entries 0. Prove that this set is a basis for $\mathbb{R}^n$. Linear Independence: The only solution to $c_1\vec{e}_1 + \cdots + c_n\vec{e}_n = \vec{0}$, by inspection, is the trivial one. Spanning Set: Consider any $\vec{x} = (x_1, \dots, x_n) \in \mathbb{R}^n$.

Let . Thus, , and .

Exercise 1.11: Consider

Which corresponds to the system

Simplifying, we get

If , we have a contradiction. Take . We would need

1.3 | Subspaces

(Definition) A subset $S$ of $\mathbb{R}^n$ is called a subspace of $\mathbb{R}^n$ if for every $\vec{x}, \vec{y} \in S$ and $c \in \mathbb{R}$, we have $\vec{x} + \vec{y} \in S$ and $c\vec{x} \in S$, and

  1. There exists a vector $\vec{0} \in S$ such that $\vec{x} + \vec{0} = \vec{x}$ for all $\vec{x} \in S$
  2. For every $\vec{x} \in S$ there exists $-\vec{x} \in S$ such that $\vec{x} + (-\vec{x}) = \vec{0}$

Closed Sets

  • A set that satisfies V1: $\vec{x} + \vec{y} \in S$ is called closed under addition
  • A set that satisfies V6: $c\vec{x} \in S$ is called closed under scalar multiplication

(Theorem) Subspace Test Let $S$ be a non-empty subset of $\mathbb{R}^n$. If $\vec{x} + \vec{y} \in S$ and $c\vec{x} \in S$ for all $\vec{x}, \vec{y} \in S$ and $c \in \mathbb{R}$, then $S$ is a subspace of $\mathbb{R}^n$

Example 1.3.3: Show that the line through the origin of with the following vector equation is a subspace of .

Solution: By V6, we know that for every , thus it is a non-empty subset of . To show the line is closed under addition (S1), we pick any two vectors on the line and show that is also on the line.

To show that the line is closed under scalar multiplication (S6):

Example 1.3.5: Prove that the following set is a subspace of

Solution: By definition, is a subset of and satisfies the conditions of the set . Thus, is a non-empty subset of . To show that is closed under addition:

To show that is closed under scalar multiplication:

Example 1.3.6: Prove that the following is not a subspace of

Solution: By definition, is a subset of , but we see that does not satisfy the condition of , hence is not a subspace of .

(Theorem) Span is a Subspace If $\vec{v}_1, \dots, \vec{v}_k \in \mathbb{R}^n$, then $\operatorname{Span}\{\vec{v}_1, \dots, \vec{v}_k\}$ is a subspace of $\mathbb{R}^n$

Bases of Subspaces If a subset $S$ of $\mathbb{R}^n$ has a basis, then $S$ must be a subspace of $\mathbb{R}^n$. The converse is also true: every subspace of $\mathbb{R}^n$ has a basis. All bases of a subspace $S$ contain the same number of elements; we define the dimension of $S$, denoted $\dim(S)$, as that number.

Example 1.3.10: Find a basis for the following subspace, and determine its dimension.

If , then and . Hence, and . Thus, every vector in can be represented as

Thus, the vector

is a spanning set and a basis for . Since contains one vector,

1.4 | Dot Product, Cross Product, Scalar Equations

Norm

Definition: Norm (the length of a vector $\vec{x} \in \mathbb{R}^n$): $||\vec{x}|| = \sqrt{\vec{x} \cdot \vec{x}} = \sqrt{x_1^2 + \cdots + x_n^2}$

Norm Properties

  1. $||\vec{x}|| \geq 0$, and $||\vec{x}|| = 0$ if and only if $\vec{x} = \vec{0}$
  2. $|\vec{x} \cdot \vec{y}| \leq ||\vec{x}||\,||\vec{y}||$ (Cauchy-Schwarz)
  3. $||\vec{x} + \vec{y}|| \leq ||\vec{x}|| + ||\vec{y}||$ (Triangle Inequality)

Dot Product

Definition: Dot Product (Algebraic) The dot product of $\vec{x}, \vec{y} \in \mathbb{R}^n$ is $\vec{x} \cdot \vec{y} = x_1y_1 + \cdots + x_ny_n$ (informally, it measures “the distance from the vector to the shadow”)

Theorem: Dot Product (Geometric) If $\vec{x}, \vec{y} \in \mathbb{R}^n$ and $\theta$ is an angle between $\vec{x}$ and $\vec{y}$, then $\vec{x} \cdot \vec{y} = ||\vec{x}||\,||\vec{y}||\cos\theta$

Furthermore, $\vec{x}$ and $\vec{y}$ are said to be orthogonal if their dot product is 0. A set of vectors is called an orthogonal set if the vectors are pairwise orthogonal.

Proof of the dot product:

Dot Product Properties

  1. Zero: $\vec{x} \cdot \vec{x} = 0$ if and only if $\vec{x} = \vec{0}$
  2. Commutative: $\vec{x} \cdot \vec{y} = \vec{y} \cdot \vec{x}$
  3. Distributive: $\vec{x} \cdot (\vec{y} + \vec{z}) = \vec{x} \cdot \vec{y} + \vec{x} \cdot \vec{z}$
  4. Associative: $(c\vec{x}) \cdot \vec{y} = c(\vec{x} \cdot \vec{y})$
  5. Orthogonal: $\vec{x} \cdot \vec{y} = 0$ if $\vec{x}$ and $\vec{y}$ are orthogonal
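
A quick numeric check of the geometric formula (values chosen here for illustration): for $\vec{x} = (1,0)$ and $\vec{y} = (1,1)$,

$$\cos\theta = \frac{\vec{x}\cdot\vec{y}}{||\vec{x}||\,||\vec{y}||} = \frac{1}{1\cdot\sqrt{2}} = \frac{1}{\sqrt{2}},$$

so $\theta = 45°$, matching the picture.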

Cross Product

Revisiting lines in $\mathbb{R}^2$

Suppose $\vec{n}$ is orthogonal to the direction vector $\vec{d}$ of a line through the point $\vec{p}$. Then, the vectors $\vec{x}$ on the line are the vectors satisfying $\vec{n} \cdot (\vec{x} - \vec{p}) = 0$

Definition: Cross Product Finding a vector that is orthogonal to both vectors $\vec{u}, \vec{v} \in \mathbb{R}^3$: $\vec{u} \times \vec{v} = (u_2v_3 - u_3v_2,\; u_3v_1 - u_1v_3,\; u_1v_2 - u_2v_1)$
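
For example (vectors chosen here for illustration):

$$(1,0,0)\times(0,1,0) = (0\cdot 0 - 0\cdot 1,\; 0\cdot 0 - 1\cdot 0,\; 1\cdot 1 - 0\cdot 0) = (0,0,1),$$

which is orthogonal to both input vectors, as expected.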

Cross Product Properties

  1. If then for all we have
  2. or

Let $\vec{u}, \vec{v} \in \mathbb{R}^3$ with $\{\vec{u}, \vec{v}\}$ linearly independent and let $P$ be the plane in $\mathbb{R}^3$ with vector equation $\vec{x} = \vec{p} + s\vec{u} + t\vec{v}$

If $\vec{n} = \vec{u} \times \vec{v}$, then an equation for the plane is $\vec{n} \cdot (\vec{x} - \vec{p}) = 0$

Ex 1.2.6: Determine a scalar equation for

Solution:

Scalar Equations of Planes

Cartesian / Scalar equation of a plane: $n_1x_1 + n_2x_2 + n_3x_3 = d$

  • $\vec{n} = (n_1, n_2, n_3)$ is a vector normal to the plane
  • $d = \vec{n} \cdot \vec{p}$ with point $\vec{p}$ on the plane

Vector Equation of a plane: $\vec{x} = \vec{p} + s\vec{u} + t\vec{v}$, $s, t \in \mathbb{R}$

  • $\vec{x}$ is a point on the plane
  • $\vec{p}$ is a fixed point on the plane
  • $\vec{u}, \vec{v}$ are non-collinear vectors parallel to the plane

1.5 | Projections

Definition: Projection / “the shadow of $\vec{u}$ cast onto $\vec{v}$”: $\operatorname{proj}_{\vec{v}}(\vec{u}) = \dfrac{\vec{u} \cdot \vec{v}}{||\vec{v}||^2}\vec{v}$

Definition: Perpendicular / “the perpendicular of $\vec{u}$ onto $\vec{v}$”:

$$\operatorname{perp}_{\vec{v}}(\vec{u}) = \vec{u} - \operatorname{proj}_{\vec{v}}(\vec{u})$$

Deriving the Projection Formula: $\operatorname{proj}_{\vec{v}}(\vec{u}) = c\vec{v}$ for some $c \in \mathbb{R}$, with $(\vec{u} - c\vec{v}) \cdot \vec{v} = 0$. If this holds, we must have $c = \dfrac{\vec{u} \cdot \vec{v}}{||\vec{v}||^2}$.
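
A small worked example (values chosen here, not from the notes): with $\vec{u} = (2,2)$ and $\vec{v} = (1,0)$,

$$\operatorname{proj}_{\vec{v}}(\vec{u}) = \frac{\vec{u}\cdot\vec{v}}{||\vec{v}||^2}\vec{v} = \frac{2}{1}(1,0) = (2,0), \qquad \operatorname{perp}_{\vec{v}}(\vec{u}) = (2,2)-(2,0) = (0,2),$$

and the two pieces are orthogonal, as they should be.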

Ex 1.3.0: Prove that $\operatorname{proj}_{\vec{v}}(2\vec{x} - 3\vec{y}) = 2\operatorname{proj}_{\vec{v}}(\vec{x}) - 3\operatorname{proj}_{\vec{v}}(\vec{y})$

$$\begin{aligned} \operatorname{proj}_{\vec{v}}(2\vec{x}-3\vec{y})&=\frac{(2\vec{x}-3\vec{y})\cdot\vec{v}}{||\vec{v}||^2}\vec{v} \\ &=\frac{2(\vec{x}\cdot\vec{v})-3(\vec{y}\cdot\vec{v})}{||\vec{v}||^2}\vec{v} \\ &= \left(\frac{2(\vec{x}\cdot\vec{v})}{||\vec{v}||^2} - \frac{3(\vec{y}\cdot\vec{v})}{||\vec{v}||^2}\right)\vec{v} \\ &= \frac{2(\vec{x}\cdot\vec{v})}{||\vec{v}||^2}\vec{v} - \frac{3(\vec{y}\cdot\vec{v})}{||\vec{v}||^2}\vec{v} \\ &= 2\operatorname{proj}_{\vec{v}}(\vec{x})-3\operatorname{proj}_{\vec{v}}(\vec{y}) \end{aligned}$$

2 | Systems of Linear Equations

2.1 | Basic Terminology

Definition: Linear Equation An equation in $n$ variables (unknowns) $x_1, \dots, x_n$ that can be written in the form $a_1x_1 + a_2x_2 + \cdots + a_nx_n = b$

is called a linear equation. Multiple such simultaneous equations are called a system of linear equations.

Theorem: If a system of linear equations in $n$ variables has two distinct solutions $\vec{s}_1$ and $\vec{s}_2$, then $\vec{x} = \vec{s}_1 + t(\vec{s}_2 - \vec{s}_1)$ is a solution for every $t \in \mathbb{R}$, and furthermore these solutions are all distinct.

2.2 | Solving Systems of Linear Equations

Elementary Row Operations

Definition: Coefficient and Augmented Matrices The coefficient matrix for the system of linear equations

is the rectangular array

The augmented matrix of the system is

Example:

Definition: Elementary row operations (EROs):

  1. Multiplying a row by a non-zero scalar ($cR_i$, $c \neq 0$)
  2. Adding a multiple of one row to another ($R_i + cR_j$)
  3. Swapping two rows ($R_i \leftrightarrow R_j$)

Definition: Reduced row echelon form (RREF):

  1. If there is a zero row, then it appears below all non-zero rows.
  2. The leftmost non-zero entry in each non-zero row is a 1 (leading 1)
  3. When comparing two non-zero rows, the leading 1 in the higher row is further to the left than the leading 1 in the lower row
  4. A leading 1 is the only non-zero entry in its column.

Theorem: Every matrix has a unique reduced row echelon form

Reducing Row 1

Reducing Row 2

This is an RREF :‘)

Algorithm: Gauss-Jordan Elimination To solve a system of linear equations: just make leading 1s and clear everything else in their columns (a worked instance follows the steps below).

  1. Write the augmented matrix for the system.
  2. Use elementary row operations to row reduce the augmented matrix into RREF.
  3. Write the system of linear equations corresponding to the RREF.
  4. If the system contains an equation of the form 0 = 1, the system is inconsistent.
  5. Otherwise, move each free variable (if any) to the right hand side of each equation and assign each free variable a parameter.
  6. Determine the solution set by using vector operations to write the system as a linear combination of vectors.
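
A minimal worked instance of the algorithm (system invented for illustration):

$$\left[\begin{array}{cc|c}1&2&5\\2&3&8\end{array}\right]\xrightarrow{R_2-2R_1}\left[\begin{array}{cc|c}1&2&5\\0&-1&-2\end{array}\right]\xrightarrow{(-1)R_2}\left[\begin{array}{cc|c}1&2&5\\0&1&2\end{array}\right]\xrightarrow{R_1-2R_2}\left[\begin{array}{cc|c}1&0&1\\0&1&2\end{array}\right],$$

so the system has the unique solution $x_1 = 1$, $x_2 = 2$ (no free variables, no parameters).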

Definition: Homogeneous Systems Systems of the form $A\vec{x} = \vec{0}$, i.e. systems in which every equation has right-hand side 0.

Example:

Represents the following system

This system is consistent and the solution set has a parameter for a free variable.

If we let .

Consider the associated homogeneous system.

Solution:

Theorem: Solutions of homogeneous systems The solution set of a homogeneous system of $m$ linear equations in $n$ variables is a subspace of $\mathbb{R}^n$

Definition: Associated homogeneous system Given a non-homogeneous system of equations $A\vec{x} = \vec{b}$, the homogeneous system $A\vec{x} = \vec{0}$ is called the associated homogeneous system. NOTE: the coefficient matrix of a system is the same as that of its associated homogeneous system, and so is its RREF

Definition: Free Variable Let $R$ be the RREF of the coefficient matrix $A$ of a consistent system of linear equations $A\vec{x} = \vec{b}$. If the $j$-th column of $R$ does not contain a leading 1, then we call variable $x_j$ a free variable of the system.

Example: This is a consistent system with one free variable ()

Exercise: Solve the following system:

The solution set is the following point :

Exercise: Check if a set of 5 vectors in $\mathbb{R}^4$ is linearly independent. Solution: The resulting homogeneous system has a $4 \times 5$ coefficient matrix, so its rank is at most 4, which is less than the number of variables. Hence the system has a free variable and non-trivial solutions, and the set must be linearly dependent.

2.3 | Rank

Definition: Rank of a Matrix The rank of a matrix $A$, denoted by $\operatorname{rank}(A)$, is the number of leading 1s in the reduced row echelon form of the matrix

Theorem: System-Rank Theorem Let $A$ be the coefficient matrix of a system of linear equations in $n$ variables, $A\vec{x} = \vec{b}$.

  1. The system is inconsistent if and only if $\operatorname{rank}(A) < \operatorname{rank}([A \mid \vec{b}])$
  2. If the system is consistent, then the system contains $n - \operatorname{rank}(A)$ free variables (parameters)
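
For instance (a hypothetical shape, not a specific system from class): a consistent system with a $3 \times 5$ coefficient matrix of rank 3 has $5 - 3 = 2$ free variables, so its solution set is described by 2 parameters.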

Example: Find all such that the given system has i) no solutions, ii) infinite solutions, iii) a unique solution:

i) The system has no solution iff . ii) iii)

2.4 | Linear Independence, Spanning Sets, Bases

Theorem: Extension of System-Rank Theorem

  • Let $\{\vec{v}_1, \dots, \vec{v}_k\}$ be a set of vectors in $\mathbb{R}^n$ and let $A = [\vec{v}_1 \ \cdots \ \vec{v}_k]$. The set is linearly independent if and only if $\operatorname{rank}(A) = k$.

Theorem: Rank implies Span

  • $\{\vec{v}_1, \dots, \vec{v}_k\}$ spans $\mathbb{R}^n$ if and only if $\operatorname{rank}(A) = n$

Theorem: Linear Independence implies Span

  • A set of $n$ vectors in $\mathbb{R}^n$ is linearly independent if and only if it spans $\mathbb{R}^n$.

2.5 | Complex Systems

3 | Matrices and Linear Mappings

3.1 | Operations on Matrices

Addition and Scalar Multiplication

Functions on Euclidean Spaces:

Definition: Matrix An $m \times n$ matrix $A$ is a rectangular array with $m$ rows and $n$ columns. We denote the entry in the $i$-th row and $j$-th column by $a_{ij}$ or $(A)_{ij}$. Two matrices $A$ and $B$ are equal if $a_{ij} = b_{ij}$ for all $i, j$. The set of all $m \times n$ matrices with real entries is denoted by $M_{m \times n}(\mathbb{R})$

Definition: Addition and Scalar Multiplication Let $A, B \in M_{m \times n}(\mathbb{R})$ and $c \in \mathbb{R}$. We define $A + B$ and $cA$ by $(A + B)_{ij} = a_{ij} + b_{ij}$ and $(cA)_{ij} = ca_{ij}$

REMARKS

  1. Addition is only defined if both matrices have the same size
  2. A sum of scalar multiples of matrices is called a linear combination

Transpose

Definition: Transpose (like cs135) The transpose of an $m \times n$ matrix $A$ is the $n \times m$ matrix $A^T$ whose $ij$-th entry is the $ji$-th entry of $A$. That is, $(A^T)_{ij} = (A)_{ji}$

TRANSPOSE PROPERTIES

Matrix-Vector Multiplication

Definition: Matrix-Vector Product Let $A$ be an $m \times n$ matrix whose rows are denoted $\vec{a}_i^T$ for $1 \leq i \leq m$. For any $\vec{x} \in \mathbb{R}^n$, we define $A\vec{x}$ by $A\vec{x} = \begin{bmatrix} \vec{a}_1 \cdot \vec{x} \\ \vdots \\ \vec{a}_m \cdot \vec{x} \end{bmatrix}$

For instance, in 2 dimensions this would be: $\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} a_{11}x_1 + a_{12}x_2 \\ a_{21}x_1 + a_{22}x_2 \end{bmatrix}$

Note: If $A$ is an $m \times n$ matrix, then $A\vec{x}$ is only defined if $\vec{x} \in \mathbb{R}^n$.

Definition: Matrix-Vector Multiplication If $A = [\vec{a}_1 \ \cdots \ \vec{a}_n]$ has columns $\vec{a}_1, \dots, \vec{a}_n$ and $\vec{x} \in \mathbb{R}^n$, then $A\vec{x} = x_1\vec{a}_1 + \cdots + x_n\vec{a}_n$

PROPERTIES

Theorem: Column Extraction If $\vec{e}_i$ is the $i$-th standard basis vector and $A = [\vec{a}_1 \ \cdots \ \vec{a}_n]$, then $A\vec{e}_i = \vec{a}_i$

Theorem: Row-Column Multiplication If $\vec{x}, \vec{y} \in \mathbb{R}^n$, then $\vec{x} \cdot \vec{y} = \vec{x}^T\vec{y}$

Matrix Multiplication

Definition: Matrix Multiplication For an $m \times n$ matrix $A$ and an $n \times p$ matrix $B = [\vec{b}_1 \ \cdots \ \vec{b}_p]$, we define $AB$ to be the $m \times p$ matrix $AB = [A\vec{b}_1 \ \cdots \ A\vec{b}_p]$

PROPERTIES

Matrix Multiplication Formula: $(AB)_{ij} = \sum_{k=1}^{n} a_{ik}b_{kj}$
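
A quick numeric instance (matrices chosen here):

$$\begin{bmatrix}1&2\\3&4\end{bmatrix}\begin{bmatrix}0&1\\1&0\end{bmatrix} = \begin{bmatrix}1\cdot 0+2\cdot 1 & 1\cdot 1+2\cdot 0\\ 3\cdot 0+4\cdot 1 & 3\cdot 1+4\cdot 0\end{bmatrix} = \begin{bmatrix}2&1\\4&3\end{bmatrix},$$

each entry being a row of the first matrix dotted with a column of the second.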

Theorem: Matrix Equality Theorem If $A$ and $B$ are $m \times n$ matrices such that $A\vec{x} = B\vec{x}$ for every $\vec{x} \in \mathbb{R}^n$, then $A = B$

Definition: Identity Matrix The identity matrix, denoted by $I$ or $I_n$, is the $n \times n$ matrix such that $(I)_{ii} = 1$ for $1 \leq i \leq n$ and $(I)_{ij} = 0$ whenever $i \neq j$.

Definition: Block Matrix If is an matrix, then we can write as the block matrix

Where is a block such that all blocks in the -th row have the same number of rows and all blocks in the -th column have the same number of columns.

3.2 | Linear mappings

Matrix Mappings

Domain and Codomain For sets $A$ and $B$, a function $f : A \to B$ is a rule that associates to each $a \in A$ an element $f(a) \in B$, called the image of $a$ under $f$. The set $A$ is called the domain of $f$ and $B$ is called the codomain of $f$

Matrix Mappings If $A$ is an $m \times n$ matrix, then a matrix mapping would be $f_A : \mathbb{R}^n \to \mathbb{R}^m$ defined by $f_A(\vec{x}) = A\vec{x}$

We will write instead of:

Theorem: Deconstructing matrix mappings If is an matrix and is defined by , then for all , we have

Linear Mappings

Definition: Linear Mappings A function $L : \mathbb{R}^n \to \mathbb{R}^m$ is a linear mapping if for every $\vec{x}, \vec{y} \in \mathbb{R}^n$ and $c \in \mathbb{R}$, we have $L(c\vec{x} + \vec{y}) = cL(\vec{x}) + L(\vec{y})$

Two linear mappings $L$ and $M$ are said to be equal if $L(\vec{x}) = M(\vec{x})$ for all $\vec{x} \in \mathbb{R}^n$. We write $L = M$. A linear mapping $L : \mathbb{R}^n \to \mathbb{R}^n$ is sometimes called a linear operator (domain = codomain)

Ex: Prove that defined by is a linear mapping. Solution: Let , and . Then

Ex: Prove that defined by is nonlinear. Solution:

Theorem: Linear mapping of $\vec{0}$ If $L$ is a linear mapping, then $L(\vec{0}) = \vec{0}$

Theorem: Linear Mapping to Matrix Mapping Every linear mapping $L : \mathbb{R}^n \to \mathbb{R}^m$ can be represented as a matrix mapping, where the $i$-th column is the image of the $i$-th standard basis vector of $\mathbb{R}^n$. That is, $L(\vec{x}) = [L]\vec{x}$

Definition: Standard Matrix Let $L : \mathbb{R}^n \to \mathbb{R}^m$ be a linear mapping. The matrix

$[L] = [L(\vec{e}_1) \ \cdots \ L(\vec{e}_n)]$

is called the standard matrix of $L$. It satisfies $L(\vec{x}) = [L]\vec{x}$.
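
For example (a mapping chosen here for illustration): if $L(x_1, x_2) = (x_1 + x_2,\, 2x_1)$, then $L(\vec{e}_1) = (1,2)$ and $L(\vec{e}_2) = (1,0)$, so

$$[L] = \begin{bmatrix}1&1\\2&0\end{bmatrix},$$

and indeed $[L]\vec{x} = (x_1+x_2,\, 2x_1) = L(\vec{x})$.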

Rotations in

Rotating a vector about the origin

Theorem: If $R_\theta : \mathbb{R}^2 \to \mathbb{R}^2$ is a rotation with rotation matrix $[R_\theta]$, then the columns of $[R_\theta]$ are orthogonal unit vectors.
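
For reference, the counterclockwise rotation by angle $\theta$ has the well-known standard matrix

$$[R_\theta] = \begin{bmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{bmatrix},$$

whose columns $(\cos\theta, \sin\theta)$ and $(-\sin\theta, \cos\theta)$ are indeed orthogonal unit vectors.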

Reflections

Definition: Reflection Let $\operatorname{refl}_P$ denote the mapping that sends a vector $\vec{x}$ to its mirror image in the hyperplane $P$ with the normal vector $\vec{n}$.

$$\operatorname{refl}_P(\vec{x}) = \vec{x} - 2\operatorname{proj}_{\vec{n}}(\vec{x})$$

3.3 | Special Subspaces

Range and onto

Definition: Range Let $L : \mathbb{R}^n \to \mathbb{R}^m$ be a linear mapping. The range of $L$ is defined by $\operatorname{Range}(L) = \{L(\vec{x}) : \vec{x} \in \mathbb{R}^n\}$

Example: Consider the linear mapping and the vectors

would be in the range of , would not because the system would be inconsistent, would be

Theorem: Range is a Subspace If $L : \mathbb{R}^n \to \mathbb{R}^m$ is a linear mapping, then $\operatorname{Range}(L)$ is a subspace of $\mathbb{R}^m$ (This can be proved by the Subspace Test)

Definition: Onto / Surjective A linear mapping $L : \mathbb{R}^n \to \mathbb{R}^m$ is called onto or surjective if $\operatorname{Range}(L) = \mathbb{R}^m$

Example: Prove that is onto. Proof: We need to show that if we pick any vector , then we can find a vector such that

Thus, since the system is consistent for all . Specifically, taking to be the solutions of the matrix, we have . Thus, is onto.

Kernel and one-to-one

Let $L : \mathbb{R}^n \to \mathbb{R}^m$ be a linear mapping. The kernel (nullspace) of $L$ is the set of all vectors in the domain which are mapped to the zero vector in the codomain. That is, $\operatorname{Ker}(L) = \{\vec{x} \in \mathbb{R}^n : L(\vec{x}) = \vec{0}\}$

Example: Given and the vectors

We have that

Theorem: Kernel is a subspace If $L : \mathbb{R}^n \to \mathbb{R}^m$ is a linear mapping, then $\operatorname{Ker}(L)$ is a subspace of $\mathbb{R}^n$. (Can also be proven with the subspace test)

Definition: One-to-one A linear mapping is called one-to-one (or injective) if

Theorem: One-to-one mapping is unique Let $L : \mathbb{R}^n \to \mathbb{R}^m$ be a linear mapping. $L$ is one-to-one if and only if for every $\vec{u}, \vec{v} \in \mathbb{R}^n$ such that $L(\vec{u}) = L(\vec{v})$, we must have $\vec{u} = \vec{v}$.

If a mapping is both one-to-one and onto, it is called bijective.

Column Space and Nullspace

Theorem: Standard Matrix and kernel Let $L$ be a linear mapping with standard matrix $[L]$. Then, $\vec{x} \in \operatorname{Ker}(L)$ if and only if $[L]\vec{x} = \vec{0}$

Theorem: Corollary Let $A \in M_{m \times n}(\mathbb{R})$. The set $\{\vec{x} \in \mathbb{R}^n : A\vec{x} = \vec{0}\}$ is a subspace of $\mathbb{R}^n$.

Definition: Nullspace Let $A$ be an $m \times n$ matrix. The nullspace (kernel) of $A$ is defined by $\operatorname{Null}(A) = \{\vec{x} \in \mathbb{R}^n : A\vec{x} = \vec{0}\}$

Theorem: Basis of nullspace Let $A$ be an $m \times n$ matrix. Suppose the vector equation of the solution set of $A\vec{x} = \vec{0}$ as determined by Gauss-Jordan elimination is $\vec{x} = t_1\vec{v}_1 + \cdots + t_k\vec{v}_k$, $t_1, \dots, t_k \in \mathbb{R}$

Then $\{\vec{v}_1, \dots, \vec{v}_k\}$ is a basis for $\operatorname{Null}(A)$.

Rank-Nullity

Theorem: System-Rank Theorem on Null If $A$ is an $m \times n$ matrix, then $\dim(\operatorname{Null}(A)) = n - \operatorname{rank}(A)$

Definition: Nullity $\operatorname{nullity}(A) = \dim(\operatorname{Null}(A))$

Theorem: Range and Span If $L : \mathbb{R}^n \to \mathbb{R}^m$ is a linear mapping with standard matrix $[L] = [\vec{a}_1 \ \cdots \ \vec{a}_n]$, then $\operatorname{Range}(L) = \operatorname{Span}\{\vec{a}_1, \dots, \vec{a}_n\}$

Definition: Column Space Let $A = [\vec{a}_1 \ \cdots \ \vec{a}_n] \in M_{m \times n}(\mathbb{R})$. The column space of $A$, denoted by $\operatorname{Col}(A)$, is the subspace of $\mathbb{R}^m$ spanned by the columns of $A$. That is, $\operatorname{Col}(A) = \operatorname{Span}\{\vec{a}_1, \dots, \vec{a}_n\}$

Theorem: Column Spaces and Coefficient Matrices Let $A$ be an $m \times n$ matrix and let $\vec{b} \in \mathbb{R}^m$. Then $\vec{b} \in \operatorname{Col}(A)$ if and only if the system $A\vec{x} = \vec{b}$ is consistent.

Theorem: Basis for $\operatorname{Col}(A)$ Let $A = [\vec{a}_1 \ \cdots \ \vec{a}_n]$ be an $m \times n$ matrix. Suppose that $\operatorname{rank}(A) = r$ and that the RREF has leading ones in columns $j_1, \dots, j_r$. Then $\{\vec{a}_{j_1}, \dots, \vec{a}_{j_r}\}$ is a basis for $\operatorname{Col}(A)$.

Theorem: Rank-Nullity Theorem: If $A$ is an $m \times n$ matrix, then $\operatorname{rank}(A) + \operatorname{nullity}(A) = n$

Definition: Row Space and Left Nullspace Let $A$ be an $m \times n$ matrix with rows $\vec{r}_1^T, \dots, \vec{r}_m^T$. The row space of $A$ is defined by $\operatorname{Row}(A) = \operatorname{Span}\{\vec{r}_1, \dots, \vec{r}_m\}$

The left nullspace of $A$ is defined by $\operatorname{Null}(A^T) = \{\vec{y} \in \mathbb{R}^m : A^T\vec{y} = \vec{0}\}$

Theorem: Last fucking theorem in this godforsaken chapter Let $A$ be an $m \times n$ matrix. Then, $\dim(\operatorname{Row}(A)) = \dim(\operatorname{Col}(A)) = \operatorname{rank}(A)$

3.4 | Operations on Linear Mappings

Definition: Addition and Scalar Multiplication (linear mappings as vectors/matrices) Let $L, M : \mathbb{R}^n \to \mathbb{R}^m$ be linear mappings and $c \in \mathbb{R}$. We define the following: $(L + M)(\vec{x}) = L(\vec{x}) + M(\vec{x})$ and $(cL)(\vec{x}) = cL(\vec{x})$

Theorem: Linear mappings satisfy linear combinations If $L, M : \mathbb{R}^n \to \mathbb{R}^m$ are linear mappings and $c \in \mathbb{R}$, then $L + M$ and $cL$ are linear mappings. Moreover, $[L + M] = [L] + [M]$ and $[cL] = c[L]$

Theorem: Linear Mapping Properties If $L : \mathbb{R}^n \to \mathbb{R}^m$ is a linear mapping, then:

  1. There exists a linear mapping $O : \mathbb{R}^n \to \mathbb{R}^m$, such that $L + O = L$ for all $L$. In particular, $O$ is the linear mapping defined by $O(\vec{x}) = \vec{0}$ for all $\vec{x} \in \mathbb{R}^n$. The mapping $O$ is called the zero mapping.
  2. For any linear mapping $L$, there exists a linear mapping $-L$ with the property that $L + (-L) = O$. In particular, $-L$ is the linear mapping defined by $(-L)(\vec{x}) = -L(\vec{x})$ for all $\vec{x} \in \mathbb{R}^n$

Definition: Composition Let $L : \mathbb{R}^n \to \mathbb{R}^m$, $M : \mathbb{R}^m \to \mathbb{R}^p$ be linear mappings. The composition of $M$ and $L$ is the function $M \circ L : \mathbb{R}^n \to \mathbb{R}^p$ defined by $(M \circ L)(\vec{x}) = M(L(\vec{x}))$

Theorem: Compositions can be decomposed If $L : \mathbb{R}^n \to \mathbb{R}^m$, $M : \mathbb{R}^m \to \mathbb{R}^p$ are linear mappings, then $M \circ L$ is a linear mapping and $[M \circ L] = [M][L]$

Definition: Identity Mapping The linear mapping $\operatorname{Id} : \mathbb{R}^n \to \mathbb{R}^n$ defined by $\operatorname{Id}(\vec{x}) = \vec{x}$ is called the identity mapping.

3.5 | Matrices with Complex Entries

Unit 3 Recap

For a linear mapping $L : \mathbb{R}^n \to \mathbb{R}^m$: Range, $\operatorname{Range}(L) = \{L(\vec{x}) : \vec{x} \in \mathbb{R}^n\}$, a subspace of the codomain $\mathbb{R}^m$

Kernel, $\operatorname{Ker}(L) = \{\vec{x} \in \mathbb{R}^n : L(\vec{x}) = \vec{0}\}$, a subspace of the domain $\mathbb{R}^n$

For an $m \times n$ matrix, say $A$: Column Space, $\operatorname{Col}(A)$, a subspace of $\mathbb{R}^m$

Null Space, $\operatorname{Null}(A)$, a subspace of $\mathbb{R}^n$

System-Rank Theorem: a consistent system $A\vec{x} = \vec{b}$ in $n$ variables has $n - \operatorname{rank}(A)$ parameters

Nullity: $\operatorname{nullity}(A) = \dim(\operatorname{Null}(A)) = n - \operatorname{rank}(A)$

Matrix Multiplication: $(AB)_{ij} = $ ($i$-th row of $A$) $\cdot$ ($j$-th column of $B$)

4 | Inverses and Determinants

4.1 | Matrix Inverses

Definition: Left / Right Inverses Let $A$ be an $m \times n$ matrix. If $B$ is an $n \times m$ matrix such that $AB = I_m$, then $B$ is called a right inverse of $A$. If $C$ is an $n \times m$ matrix such that $CA = I_n$, then $C$ is called a left inverse of $A$.

How to find a right inverse of $A$? We want to find an $n \times m$ matrix $B = [\vec{b}_1 \ \cdots \ \vec{b}_m]$ such that $AB = I_m$

Comparing columns, we see that we need to find $\vec{b}_1, \dots, \vec{b}_m$ such that $A\vec{b}_i = \vec{e}_i$

This is the same as requiring that $\vec{e}_i$ belongs to $\operatorname{Col}(A)$ for $1 \leq i \leq m$. Since $\operatorname{Col}(A)$ is a subspace of $\mathbb{R}^m$, this is the same as having $\operatorname{Col}(A) = \mathbb{R}^m$, i.e. $\operatorname{rank}(A) = m$.

Theorem: If $A$ is an $m \times n$ matrix, then

  1. $A$ has a right inverse if and only if $\operatorname{rank}(A) = m$
  2. $A$ has a left inverse if and only if $\operatorname{rank}(A) = n$

$A$ has a left inverse if and only if $\operatorname{rank}(A^T) = n$

So $A$ has a left inverse $\iff$ $A^T$ has a right inverse

Theorem: Matrix with left and right inverse If $B$ and $C$ are matrices such that $BA = I = AC$, then $B = C$

Definition: Invertible Let $A$ be an $n \times n$ matrix. If $B$ is a matrix such that $AB = I = BA$, then $B$ is called the inverse of $A$. We write $B = A^{-1}$ and we say that $A$ is invertible

Corollary: An $n \times n$ matrix $A$ is invertible if and only if $\operatorname{rank}(A) = n$

Theorem: If $A$ and $B$ are $n \times n$ matrices such that $AB = I$, then $A$ and $B$ are invertible. Moreover, $B = A^{-1}$ and $A = B^{-1}$

Example (FINALLY): Find the inverse of

Use the super-augmented matrix:

To confirm, we can check
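
A small illustration of the super-augmented method (matrix chosen here, not the one from class):

$$\left[\begin{array}{cc|cc}1&2&1&0\\3&4&0&1\end{array}\right]\xrightarrow{R_2-3R_1}\left[\begin{array}{cc|cc}1&2&1&0\\0&-2&-3&1\end{array}\right]\xrightarrow{-\frac{1}{2}R_2}\left[\begin{array}{cc|cc}1&2&1&0\\0&1&\frac{3}{2}&-\frac{1}{2}\end{array}\right]\xrightarrow{R_1-2R_2}\left[\begin{array}{cc|cc}1&0&-2&1\\0&1&\frac{3}{2}&-\frac{1}{2}\end{array}\right],$$

so $\begin{bmatrix}1&2\\3&4\end{bmatrix}^{-1} = \begin{bmatrix}-2&1\\ \frac{3}{2}&-\frac{1}{2}\end{bmatrix}$, which can be verified by multiplying back.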

Theorem: If $A$ and $B$ are invertible matrices and $c \in \mathbb{R}$ with $c \neq 0$, then $(cA)^{-1} = \frac{1}{c}A^{-1}$, $(AB)^{-1} = B^{-1}A^{-1}$, $(A^T)^{-1} = (A^{-1})^T$, and $(A^{-1})^{-1} = A$

Theorem: 2x2 Matrix

Let $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$. Then $A$ is invertible if and only if $ad - bc \neq 0$. Moreover, if $A$ is invertible, $A^{-1} = \dfrac{1}{ad - bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$

Theorem: Invertible Matrix Theorem For any $n \times n$ matrix $A$, the following are equivalent.

  1. $A$ is invertible
  2. The RREF of $A$ is $I$
  3. The system of equations $A\vec{x} = \vec{b}$ is consistent with a unique solution for all $\vec{b} \in \mathbb{R}^n$
  4. The nullspace of $A$ is $\{\vec{0}\}$
  5. The columns of $A$ form a basis for $\mathbb{R}^n$
  6. The rows of $A$ form a basis for $\mathbb{R}^n$
  7. $A^T$ is invertible.

4.2 | Elementary Matrices

Definition: Elementary Matrix An $n \times n$ matrix $E$ is called an elementary matrix if it can be obtained from the identity matrix by performing exactly one elementary row operation

Theorem: Invertibility of $E$ If $E$ is an elementary matrix, then $E$ is invertible and $E^{-1}$ is also an elementary matrix

Theorem: Matrix multiplication by $E$ Let $A$ be an $m \times n$ matrix and $E$ be an $m \times m$ elementary matrix. Then,

  1. If $E$ corresponds to the ERO $cR_i$, then $EA$ is the same as performing $cR_i$ on $A$
  2. If $E$ corresponds to the ERO $R_i + cR_j$, then $EA$ is the same as performing $R_i + cR_j$ on $A$
  3. If $E$ corresponds to the ERO $R_i \leftrightarrow R_j$, then $EA$ is the same as performing $R_i \leftrightarrow R_j$ on $A$

Theorem: Matrix multiplication by an elementary matrix doesn’t change rank: $\operatorname{rank}(EA) = \operatorname{rank}(A)$

Turns out we can represent row reduction as elementary matrices!

Theorem: EROs as matrices If $A$ is an $m \times n$ matrix with RREF $R$, then there exist elementary matrices $E_1, \dots, E_k$ such that $E_k \cdots E_1 A = R$. In particular: $A = E_1^{-1} \cdots E_k^{-1} R$

Example: Write as the product of elementary matrices

Thus, we get

4.3 | Determinants

Determinants

Definition: 2x2 Determinant

The determinant of $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ is $\det(A) = ad - bc$. If $\det(A) \neq 0$, then $A$ is invertible.

Definition: 3x3 Determinant

$\det(A) = a_{11}\det\begin{bmatrix} a_{22} & a_{23} \\ a_{32} & a_{33} \end{bmatrix} - a_{12}\det\begin{bmatrix} a_{21} & a_{23} \\ a_{31} & a_{33} \end{bmatrix} + a_{13}\det\begin{bmatrix} a_{21} & a_{22} \\ a_{31} & a_{32} \end{bmatrix}$

Simplifying this, we get $\det(A) = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31})$

Definition: Cofactor Let $A$ be an $n \times n$ matrix with $n \geq 2$. Let $A(i, j)$ be the $(n-1) \times (n-1)$ matrix obtained from $A$ by deleting the $i$-th row and $j$-th column. The cofactor of $a_{ij}$ is $C_{ij} = (-1)^{i+j}\det(A(i, j))$

Example: Calculate the determinant of the given matrix using the cofactor

We expand along Row 2

Definition: $n \times n$ determinant Let $A$ be an $n \times n$ matrix with $n \geq 2$. The determinant of $A$ is defined as $\det(A) = \sum_{j=1}^{n} a_{1j}C_{1j}$

where the determinant of a $1 \times 1$ matrix is defined by $\det[a] = a$

The Cofactor Expansion

Theorem: Cofactor Expansion Let $A$ be an $n \times n$ matrix. For any $1 \leq i \leq n$,

$\det(A) = \sum_{j=1}^{n} a_{ij}C_{ij}$ is called the cofactor expansion across the i-th row, OR for any $1 \leq j \leq n$,

$\det(A) = \sum_{i=1}^{n} a_{ij}C_{ij}$ is called the cofactor expansion across the j-th column
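
A worked instance (matrix invented for illustration), expanding along the first row:

$$\det\begin{bmatrix}1&2&0\\0&3&1\\2&0&1\end{bmatrix} = 1\det\begin{bmatrix}3&1\\0&1\end{bmatrix} - 2\det\begin{bmatrix}0&1\\2&1\end{bmatrix} + 0 = 1(3) - 2(-2) + 0 = 7.$$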

Definition: Upper Triangular An $n \times n$ matrix $A$ is said to be upper triangular if $a_{ij} = 0$ whenever $i > j$. An $n \times n$ matrix $A$ is said to be lower triangular if $a_{ij} = 0$ whenever $i < j$

Theorem: If an $n \times n$ matrix $A$ is upper triangular or lower triangular, then $\det(A) = a_{11}a_{22}\cdots a_{nn}$

Determinants and Row Operations

Theorem: Effects of EROs on determinants Let $A$ be an $n \times n$ matrix and suppose $B$ is obtained from $A$ by a single ERO.

  1. $R_i \leftrightarrow R_j \implies \det(B) = -\det(A)$. Corollary: If a matrix has two identical rows, then its determinant is 0
  2. $cR_i \implies \det(B) = c\det(A)$
  3. $R_1 = R_1 + cR_2 \implies \det(B) = \det(A)$

Determinants and Elementary Matrices

Addition to the Invertible Matrix Theorem: An $n \times n$ matrix $A$ is invertible if and only if $\det(A) \neq 0$

Theorem:

  1. If $A$ is invertible, then $\det(A^{-1}) = \dfrac{1}{\det(A)}$
  2. If $A$ is an $n \times n$ matrix, then $\det(A^T) = \det(A)$

Example: Prove

Example: Find a matrix satisfying and

4.4 | Determinants and Systems of Equations

Lemma: If is an matrix with cofactors and , then

Theorem: If $A$ is an invertible matrix, then $A^{-1} = \dfrac{1}{\det(A)}\operatorname{adj}(A)$

Definition: Cofactor Matrix, Adjugate Matrix Let $A$ be an $n \times n$ matrix. The cofactor matrix of $A$, denoted by $\operatorname{cof}(A)$, is the matrix with entries $(\operatorname{cof}(A))_{ij} = C_{ij}$

The adjugate of $A$, denoted by $\operatorname{adj}(A)$, is the matrix with entries $(\operatorname{adj}(A))_{ij} = C_{ji}$

The two are transposes of each other: $\operatorname{adj}(A) = (\operatorname{cof}(A))^T$

Theorem: Cramer’s Rule (OPTIONAL) If $A$ is an invertible $n \times n$ matrix, then the solution of $A\vec{x} = \vec{b}$ is given by $x_i = \dfrac{\det(A_i)}{\det(A)}$

where $A_i$ is the matrix obtained from $A$ by replacing the $i$-th column of $A$ by $\vec{b}$
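
A 2x2 illustration (system invented here): for $\begin{bmatrix}1&2\\3&4\end{bmatrix}\vec{x} = \begin{bmatrix}5\\6\end{bmatrix}$ we have $\det(A) = -2$, so

$$x_1 = \frac{\det\begin{bmatrix}5&2\\6&4\end{bmatrix}}{-2} = \frac{8}{-2} = -4, \qquad x_2 = \frac{\det\begin{bmatrix}1&5\\3&6\end{bmatrix}}{-2} = \frac{-9}{-2} = \frac{9}{2},$$

and one can check that $(-4, \tfrac{9}{2})$ satisfies both equations.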

4.5 | Area and Volume

5 | Dimensions and Coordinates

5.1 | Bases and Dimension

Definition: Basis Let $S$ be a subspace of $\mathbb{R}^n$. A set $B$ contained in $S$ is called a basis for $S$ if $B$ is a linearly independent spanning set for $S$. We define a basis for the zero subspace to be the empty set.

Theorem Every subspace of $\mathbb{R}^n$ has a basis.

Theorem If you add any vector to a basis, the resulting set is linearly dependent, so it is no longer a basis :o

Theorem If $S$ is a $k$-dimensional subspace of $\mathbb{R}^n$, then

  1. A set of more than $k$ vectors in $S$ must be linearly dependent
  2. A set of fewer than $k$ vectors cannot span $S$
  3. A set of exactly $k$ vectors in $S$ is linearly independent if and only if it spans $S$

Theorem: Expanding sets to bases If $S$ is a $k$-dimensional subspace of $\mathbb{R}^n$ and $\{\vec{v}_1, \dots, \vec{v}_\ell\}$ is a linearly independent set in $S$ with $\ell < k$, then there exist vectors $\vec{v}_{\ell+1}, \dots, \vec{v}_k$ in $S$ such that $\{\vec{v}_1, \dots, \vec{v}_k\}$ is a basis for $S$

Theorem: Nested Subspaces Let $U$ and $W$ be subspaces of $\mathbb{R}^n$ such that $U \subseteq W$. Then,

$\dim(U) \leq \dim(W)$. Moreover, $\dim(U) = \dim(W)$ if and only if $U = W$

5.2 | Coordinates

Theorem: If $B$ is a basis for the subspace $S$ of $\mathbb{R}^n$, then every $\vec{x} \in S$ can be written as a unique linear combination of the vectors in $B$

Definition: B-coordinates and B-coordinate vector (the coefficients of the linear combination) Let $B = \{\vec{v}_1, \dots, \vec{v}_k\}$ be a basis for a subspace $S$ of $\mathbb{R}^n$. If $\vec{x} = c_1\vec{v}_1 + \cdots + c_k\vec{v}_k$, then $c_1, \dots, c_k$ are called the B-coordinates of $\vec{x}$, and we define the B-coordinate vector of $\vec{x}$ as $[\vec{x}]_B = \begin{bmatrix} c_1 \\ \vdots \\ c_k \end{bmatrix}$

Theorem: If $S$ is a subspace of $\mathbb{R}^n$ with basis $B$, then for any $\vec{x}, \vec{y} \in S$ and $c \in \mathbb{R}$ we have $[\vec{x} + c\vec{y}]_B = [\vec{x}]_B + c[\vec{y}]_B$

Example: Consider the ordered basis

  1. Compute given that
  2. Determine for
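
A concrete version of this kind of computation (basis and vector chosen here, not the ones from class): take the ordered basis $B = \{(1,1),(1,-1)\}$ for $\mathbb{R}^2$ and $\vec{x} = (3,1)$. Solving $c_1(1,1) + c_2(1,-1) = (3,1)$ gives $c_1 = 2$, $c_2 = 1$, so

$$[\vec{x}]_B = \begin{bmatrix}2\\1\end{bmatrix}.$$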

Definition: Change of Coordinates Matrix Let $B = \{\vec{v}_1, \dots, \vec{v}_k\}$ and $C$ both be bases for a subspace $S$. The change of coordinates matrix from B-coordinates to C-coordinates is defined by ${}_CP_B = [[\vec{v}_1]_C \ \cdots \ [\vec{v}_k]_C]$

and for any $\vec{x} \in S$ we have $[\vec{x}]_C = {}_CP_B[\vec{x}]_B$

Theorem: If $B$ and $C$ are bases for a $k$-dimensional subspace $S$, then the change of coordinate matrices ${}_CP_B$ and ${}_BP_C$ satisfy $({}_CP_B)^{-1} = {}_BP_C$

6 | Eigenvectors and Diagonalization

6.1 | Matrix of a Linear Mapping, Similar Matrices

Definition: $B$-Matrix Let $B = \{\vec{v}_1, \dots, \vec{v}_n\}$ be a basis for $\mathbb{R}^n$ and let $L : \mathbb{R}^n \to \mathbb{R}^n$ be a linear operator. The B-matrix of $L$ is defined to be $[L]_B = [[L(\vec{v}_1)]_B \ \cdots \ [L(\vec{v}_n)]_B]$

It satisfies $[L(\vec{x})]_B = [L]_B[\vec{x}]_B$

Exercise: Determine where

Solution:

Therefore,

Definition: Diagonal Matrix An $n \times n$ matrix $D$ is said to be a diagonal matrix if $d_{ij} = 0$ for all $i \neq j$. We denote a diagonal matrix by $\operatorname{diag}(d_{11}, \dots, d_{nn})$

Consider

What is ? What is ?

Definition: Similar Matrices If $A$ and $B$ are $n \times n$ matrices such that $B = P^{-1}AP$ for some invertible matrix $P$, then $B$ is said to be similar to $A$

Theorem: If $A$ is similar to $B$, then $\operatorname{rank}(A) = \operatorname{rank}(B)$, $\det(A) = \det(B)$, and $\operatorname{tr}(A) = \operatorname{tr}(B)$

6.2 | Eigenvalues and Eigenvectors

Eigenvalues / Eigenvectors

Intro: A geometrically natural basis

If , then

and therefore,

Definition: Eigenvalue, eigenvector Let $A$ be an $n \times n$ matrix. A scalar $\lambda$ is called an eigenvalue of $A$ if there exists a vector $\vec{v} \neq \vec{0}$ such that $A\vec{v} = \lambda\vec{v}$. A vector $\vec{v} \neq \vec{0}$ satisfying $A\vec{v} = \lambda\vec{v}$ is called an eigenvector of $A$ corresponding to $\lambda$. The pair $(\lambda, \vec{v})$ is called an eigenpair

Example

Definition: Characteristic Polynomial Let $A$ be an $n \times n$ matrix. The characteristic polynomial of $A$ is the $n$-th degree polynomial $\det(A - \lambda I)$

To denote this, we write $C(\lambda) = \det(A - \lambda I)$.

Theorem A scalar $\lambda$ is an eigenvalue of an $n \times n$ matrix $A$ if and only if $C(\lambda) = \det(A - \lambda I) = 0$

Example: Find the corresponding subspaces For , we find all nonzero solutions to the system .

Thus, the general solution is

For , we do the same with .

Thus, the general solution is
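
A compact worked example of the whole procedure (matrix chosen here for illustration, not the one from class): for $A = \begin{bmatrix}2&1\\1&2\end{bmatrix}$,

$$C(\lambda) = \det(A - \lambda I) = (2-\lambda)^2 - 1 = (\lambda - 1)(\lambda - 3),$$

so the eigenvalues are $\lambda_1 = 1$ and $\lambda_2 = 3$; solving $(A - \lambda I)\vec{v} = \vec{0}$ gives eigenvectors $(1,-1)$ and $(1,1)$ respectively.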

Eigenspaces

Definition: Eigenspace Let $A$ be an $n \times n$ matrix with eigenvalue $\lambda$. We call the nullspace of $A - \lambda I$ the eigenspace of $\lambda$. The eigenspace is denoted $E_\lambda$.

Theorem If is an upper or lower triangular matrix, then the eigenvalues of are the diagonal entries of .

Definition: Multiplicity Let $A$ be an $n \times n$ matrix with eigenvalue $\lambda_1$. The algebraic multiplicity of $\lambda_1$, denoted $a_{\lambda_1}$, is the number of times that $\lambda_1$ is a root of the characteristic polynomial $C(\lambda)$. That is, $a_{\lambda_1}$ is the largest $k$ such that $(\lambda - \lambda_1)^k$ is a factor of $C(\lambda)$.

The geometric multiplicity of $\lambda_1$, denoted $g_{\lambda_1}$, is the dimension of its eigenspace. That is, $g_{\lambda_1} = \dim(E_{\lambda_1})$

Lemma: Let $A$ and $B$ be similar matrices. Then, $A$ and $B$ have the same characteristic polynomial, and hence the same eigenvalues.

Theorem: If $A$ is an $n \times n$ matrix with eigenvalue $\lambda$, then $1 \leq g_\lambda \leq a_\lambda$

6.3 | Diagonalization

Definition: Diagonalizable An $n \times n$ matrix $A$ is said to be diagonalizable if $A$ is similar to a diagonal matrix $D$. If $P^{-1}AP = D$, then we say that $P$ diagonalizes A.

Theorem: Diagonalization Theorem An $n \times n$ matrix $A$ is diagonalizable over $\mathbb{R}$ if and only if there exists a basis for $\mathbb{R}^n$ of eigenvectors of $A$.

Theorem: If $A$ is an $n \times n$ matrix with eigenpairs $(\lambda_1, \vec{v}_1), \dots, (\lambda_k, \vec{v}_k)$, where $\lambda_i \neq \lambda_j$ for $i \neq j$, then $\{\vec{v}_1, \dots, \vec{v}_k\}$ is linearly independent.

Theorem: If $A$ is an $n \times n$ matrix with distinct eigenvalues $\lambda_1, \dots, \lambda_k$ and $B_i$ is a basis for the eigenspace of $\lambda_i$ for $1 \leq i \leq k$, then $B_1 \cup \cdots \cup B_k$ is a linearly independent set.

Theorem: Diagonalizability Test If $A$ is an $n \times n$ matrix whose characteristic polynomial factors as

$C(\lambda) = (\lambda_1 - \lambda)^{a_{\lambda_1}} \cdots (\lambda_k - \lambda)^{a_{\lambda_k}}$, where $\lambda_1, \dots, \lambda_k$ are the distinct eigenvalues of $A$, then $A$ is diagonalizable if and only if $g_{\lambda_i} = a_{\lambda_i}$ for $1 \leq i \leq k$.

Theorem: If $A$ is an $n \times n$ matrix with $n$ distinct eigenvalues, then $A$ is diagonalizable.

ALGORITHM: Diagonalize / show not diagonalizable

  1. Find and factor the characteristic polynomial $C(\lambda) = \det(A - \lambda I)$
  2. Let $\lambda_1, \dots, \lambda_n$ denote the roots of $C(\lambda)$, repeated according to multiplicity. If any of these eigenvalues aren’t real, then A is not diagonalizable over $\mathbb{R}$.
  3. Find a basis for the eigenspace of each $\lambda_i$ by finding a basis for the nullspace of $A - \lambda_i I$
  4. If $g_{\lambda_i} < a_{\lambda_i}$ for any $\lambda_i$, then $A$ is not diagonalizable. Otherwise, form a basis $\{\vec{v}_1, \dots, \vec{v}_n\}$ of eigenvectors of $A$. Let $P = [\vec{v}_1 \ \cdots \ \vec{v}_n]$. Then $P^{-1}AP = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$, where $\lambda_i$ is the eigenvalue corresponding to the eigenvector $\vec{v}_i$ for $1 \leq i \leq n$ (see the worked sketch after this list)
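
Continuing the illustrative matrix from the eigenvalue section above (not an example from class): with $A = \begin{bmatrix}2&1\\1&2\end{bmatrix}$ and eigenpairs $(1, (1,-1))$ and $(3, (1,1))$,

$$P = \begin{bmatrix}1&1\\-1&1\end{bmatrix}, \qquad P^{-1}AP = \begin{bmatrix}1&0\\0&3\end{bmatrix} = \operatorname{diag}(1,3).$$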

Example: Show that is not diagonalizable

Solution:

Hence, is the only eigenvalue and . We then get

So, a basis for is . Thus, is not diagonalizable because

Fact: If $\lambda$ is an eigenvalue of $A$, then it must be the case that $(A - \lambda I)\vec{x} = \vec{0}$ has infinitely many solutions. So if you’re determining an eigenspace for a supposed eigenvalue $\lambda$, and during row reduction you find that $\operatorname{rank}(A - \lambda I) = n$ (so the only solution is $\vec{x} = \vec{0}$), STOP!

Theorem: USEFUL FOR CHECKING EIGENVALUES If $\lambda_1, \dots, \lambda_n$ are the eigenvalues of an $n \times n$ matrix $A$ (repeated according to algebraic multiplicity), then $\det(A) = \lambda_1\lambda_2\cdots\lambda_n$ and $\operatorname{tr}(A) = \lambda_1 + \lambda_2 + \cdots + \lambda_n$

6.4 | Powers of Matrices

Theorem: Let $A$ be an $n \times n$ matrix. If there exists a matrix $P$ and diagonal matrix $D$ such that $P^{-1}AP = D$, then $A^k = PD^kP^{-1}$ for any positive integer $k$

Example: Calculate , given

Solution:

Thus, the eigenvalues of are and

Thus, we have that

Hence,

“operations on diagonal matrices are easy!” - carrie knoll