
Interpreting Eigenvalues and Eigenvectors

Welcome to the Week 4 Lab. Here you will practice finding and interpreting eigenvalues and eigenvectors for various linear transformations.

After this lab you will be able to:

  • use Python to find eigenvalues and eigenvectors
  • visualize and interpret eigenvalues and eigenvectors

Packages

Run the following cell to load the packages you'll need. The utils.py file includes a function you'll use later to plot transformations.

import numpy as np
import matplotlib.pyplot as plt
import utils

1 - Eigenvalues and Eigenvectors: Definition and Interpretation

1.1 - Definition of Eigenvalues and Eigenvectors

Let's consider a linear transformation defined by matrix $A=\begin{bmatrix}2 & 3 \\ 2 & 1 \end{bmatrix}$. Apply this transformation to the standard basis vectors $e_1=\begin{bmatrix}1 \\ 0\end{bmatrix}$ and $e_2=\begin{bmatrix}0 \\ 1\end{bmatrix}$ and visualize the result. Hopefully using a matrix to transform a basis of vectors is familiar from earlier lectures in the course.

A = np.array([[2, 3],[2, 1]])
e1 = np.array([[1],[0]])
e2 = np.array([[0],[1]])

You can use the function plot_transformation defined in utils.py to visualize the transformation generated by the matrix $A$.

utils.plot_transformation(A, e1, e2, vector_name='e');

Both of the original basis vectors $e_1$ and $e_2$ changed their length and direction with the transformation $A$. What if you could choose some other basis vectors where only their lengths will change? This means that for the vector $v$, its transformation will be $Av=\lambda v$.
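You can also see this numerically. A quick check (using the matrix and basis vectors defined above) shows that applying $A$ to each basis vector simply picks out the corresponding column of $A$:

```python
import numpy as np

A = np.array([[2, 3], [2, 1]])
e1 = np.array([[1], [0]])
e2 = np.array([[0], [1]])

# Applying A to a basis vector returns the corresponding column of A.
print(A @ e1)  # column vector with entries 2, 2
print(A @ e2)  # column vector with entries 3, 1
```

Neither result is a scalar multiple of the original basis vector, which matches what you see in the plot.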

As you saw in the lectures, a vector $v$ with this property is called an eigenvector and the scaling factor $\lambda$ is called an eigenvalue.

Note that if $v$ is an eigenvector, so that $Av = \lambda v$, then any multiple or scaled version of $v$ is also an eigenvector with the same eigenvalue. If we let $k$ represent that scale factor, we would write this mathematically as:

$A(kv)=k(Av)=k \lambda v = \lambda (kv)$, where $k$ is any real-valued constant different from zero.

In other words, for each eigenvalue, there are infinitely many valid eigenvectors. You can imagine them as all pointing along the same straight line and just having different lengths, or norms. In practice, you will choose just one eigenvector, and it is common to choose the eigenvector which has a norm of 1.
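You can verify this scaling property numerically. The sketch below uses the matrix $A$ from above, whose eigenvalues (as you will compute shortly) are 4 and -1; the vector chosen here is one concrete eigenvector for $\lambda = 4$:

```python
import numpy as np

A = np.array([[2, 3], [2, 1]])
v = np.array([3, 2])   # an eigenvector of A: A @ v = [12, 8] = 4 * v
k = 2.5                # any nonzero scale factor

# A(kv) = lambda * (kv), so the scaled vector is still an eigenvector.
print(np.allclose(A @ (k * v), 4 * (k * v)))  # True

# Dividing by the norm gives the conventional unit-norm eigenvector.
v_unit = v / np.linalg.norm(v)
print(np.linalg.norm(v_unit))  # 1.0
```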

1.2 - Finding Eigenvalues and Eigenvectors with Python

In Python eigenvalues and eigenvectors can be found using the NumPy function np.linalg.eig(). It returns a tuple consisting of a vector and an array. The vector contains the eigenvalues. The array contains the corresponding eigenvectors, one eigenvector per column. Note that this function chooses the eigenvectors so that they have a norm of 1.

With the following code you can find the eigenvalues and eigenvectors of the previously defined matrix $A$:

A_eig = np.linalg.eig(A)
 
print("\n")
 
print(f"Matrix A:\n{A} \n\nEigenvalues of matrix A:\n{A_eig[0]}\n\nEigenvectors of matrix A:\n{A_eig[1]}")

Remember that the first element of the tuple contains the eigenvalues, and the second one has the eigenvectors, one in each column. This means that the first eigenvector can be extracted with the code A_eig[1][:,0] and the second eigenvector with the code A_eig[1][:,1].
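As a sanity check, you can confirm that each extracted column really satisfies the definition $Av = \lambda v$ (a small sketch, unpacking the eig output into named variables for readability):

```python
import numpy as np

A = np.array([[2, 3], [2, 1]])
eigenvalues, eigenvectors = np.linalg.eig(A)

for i in range(len(eigenvalues)):
    v = eigenvectors[:, i]   # i-th eigenvector (a column of the array)
    lam = eigenvalues[i]     # corresponding eigenvalue
    # A v should equal lambda * v, up to floating-point error.
    print(np.allclose(A @ v, lam * v))  # True
```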

Let's visualize the result of the transformation on the eigenvectors:

utils.plot_transformation(A, A_eig[1][:,0], A_eig[1][:,1]);

As you can see, $v_1$ is being stretched by a factor of 4, while $v_2$ changes direction, which is equivalent to scaling by a factor of -1. Both vectors, however, are still parallel to the directions they were originally pointing in, and so meet the definition of eigenvectors.

2 - Eigenvalues and Eigenvectors of some Standard Transformations in a Plane

2.1 - Example 1: Reflection about the y-axis

First, consider a reflection about the y-axis, which flips the sign of the x-component of every vector while leaving the y-component unchanged. In the next cell you will define the reflection matrix, find its eigenvalues and eigenvectors, and visualize the transformation.

# Define transformation matrix A_reflection_yaxis as a numpy array.
A_reflection_yaxis = np.array([[-1,0],[0,1]])
# Find eigenvalues and eigenvectors of matrix A_reflection_yaxis.
A_reflection_yaxis_eig = np.linalg.eig(A_reflection_yaxis)
 
print(f"Matrix A:\n {A_reflection_yaxis} \n\nEigenvalues of matrix A:\n {A_reflection_yaxis_eig[0]}",
        f"\n\nEigenvectors of matrix A:\n {A_reflection_yaxis_eig[1]}")
 
utils.plot_transformation(A_reflection_yaxis, A_reflection_yaxis_eig[1][:,0],A_reflection_yaxis_eig[1][:,1]);

For the reflection, the eigenvalues are -1 and 1: the basis vector $e_1$ flips direction, while $e_2$ is left unchanged. In the examples you've seen so far, you've considered 2 × 2 matrices, and each of them has had 2 distinct eigenvalues and 2 distinct eigenvectors. A natural question arises: is it always possible to find two different eigenvectors for any linear transformation in the plane? As you already learned in the lectures, the answer is unfortunately no. You'll see a case of this happening in the following example.

2.2 - Example 2: Shear in x-direction

A shear transformation looks like the image below. This transformation displaces each point in a fixed direction by an amount proportional to its signed distance from a given line parallel to that direction. You can imagine it as slicing the plane into layers and then sliding those layers past one another. Let's explore how many eigenvectors this kind of transformation has.

To create a matrix transformation that shears in the x-direction, you want to displace the component in the y-direction by some factor, say 0.5. This can be done with the following matrix:

$A_{\text{shear\_x}} = \begin{bmatrix} 1 & 0.5 \\ 0 & 1 \end{bmatrix}$

Note that vector $e_1=\begin{bmatrix}1 \\ 0\end{bmatrix}$ will remain the same, and vector $e_2=\begin{bmatrix}0 \\ 1\end{bmatrix}$ will transform into the vector $\begin{bmatrix}0.5 \\ 1\end{bmatrix}$.
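You can check both of these claims directly before running eig (a quick sketch, writing the shear matrix out explicitly):

```python
import numpy as np

A_shear_x = np.array([[1, 0.5], [0, 1]])
e1 = np.array([1, 0])
e2 = np.array([0, 1])

print(A_shear_x @ e1)  # e1 is unchanged: [1. 0.]
print(A_shear_x @ e2)  # e2 is displaced in the x-direction: [0.5 1. ]
```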

In the next cell, you will define the shear matrix, find the eigenvalues and eigenvectors, and visualize the transformation applied to the eigenvectors you find.

# Define transformation matrix A_shear_x as a numpy array.
A_shear_x = np.array([[1, 0.5],[0, 1]])
# Find eigenvalues and eigenvectors of matrix A_shear_x.
A_shear_x_eig = np.linalg.eig(A_shear_x)
 
print(f"Matrix A_shear_x:\n {A_shear_x}\n\nEigenvalues of matrix A_shear_x:\n {A_shear_x_eig[0]}",
      f"\n\nEigenvectors of matrix A_shear_x \n {A_shear_x_eig[1]}")
 
utils.plot_transformation(A_shear_x, A_shear_x_eig[1][:,0], A_shear_x_eig[1][:,1]);

As you can see in the output, the shear matrix has a single repeated eigenvalue, $\lambda = 1$, and the two eigenvector columns NumPy returns are numerically parallel to each other. In other words, the shear transformation leaves only one direction unchanged: the x-axis itself. This is an example of a 2 × 2 matrix with only one linearly independent eigenvector.

2.3 - Example 3: Rotation

Now consider a rotation of the plane by 90 degrees, which turns every vector by a quarter turn. In the next cell, you will define the rotation matrix and find its eigenvalues and eigenvectors.

# Define transformation matrix A_rotation as a numpy array.
A_rotation = np.array([[0, -1],[1, 0]])
# Find eigenvalues and eigenvectors of matrix A_rotation.
A_rotation_eig = np.linalg.eig(A_rotation)
 
print(f"Matrix A_rotation:\n {A_rotation}\n\nEigenvalues of matrix A_rotation:\n {A_rotation_eig[0]}",
      f"\n\nEigenvectors of matrix A_rotation \n {A_rotation_eig[1]}")

As you can see, there are two eigenvalues in the output, but they are complex numbers. Note that in Python the imaginary part of a complex number is indicated with a j instead of the $i$ you see more commonly in mathematics.

This matrix has two complex eigenvalues and two corresponding complex eigenvectors. Since there are no real eigenvectors, we can interpret this result as saying that no vector in the plane keeps its direction after a 90 degree rotation. And that makes sense: if you rotate the plane, every vector ends up facing a new direction.

If you're less familiar with real vs. complex numbers, don't worry. The main point here is that some 2 × 2 matrices will have only one or zero real eigenvectors, and hopefully you're developing intuition for why that's the case. If there are no vectors that point in the same direction after the matrix transformation is applied, we wouldn't expect to find any real eigenvectors. With that in mind, let's look at another interesting example.
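One way to test for this situation in code (a small sketch, not part of the original lab) is to check whether the eigenvalues returned by np.linalg.eig have any nonzero imaginary part:

```python
import numpy as np

A_rotation = np.array([[0, -1], [1, 0]])   # 90 degree rotation
A_shear_x = np.array([[1, 0.5], [0, 1]])   # shear in the x-direction

for name, M in [("rotation", A_rotation), ("shear", A_shear_x)]:
    eigenvalues, _ = np.linalg.eig(M)
    # np.isreal is True only for entries with zero imaginary part.
    print(name, np.all(np.isreal(eigenvalues)))
```

The rotation reports False (complex eigenvalues, so no real eigenvectors), while the shear reports True.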

2.4 - Example 4: Identity Matrix and Scaling in All Directions

What happens if we transform the plane using the identity matrix? This transformation leaves every vector in the plane unchanged. Since no point or vector moves at all, every vector is still facing in the same direction, and every vector meets the definition of an eigenvector.

In the next cell, you will explore what kinds of output you get from NumPy when you try to calculate the eigenvalues and eigenvectors of the identity matrix.

A_identity = np.array([[1, 0],[0, 1]])
A_identity_eig = np.linalg.eig(A_identity)
 
utils.plot_transformation(A_identity, A_identity_eig[1][:,0], A_identity_eig[1][:,1]);
 
print(f"Matrix A_identity:\n {A_identity}\n\nEigenvalues of matrix A_identity:\n {A_identity_eig[0]}",
      f"\n\nEigenvectors of matrix A_identity\n {A_identity_eig[1]}")

As you can see, the output of the np.linalg.eig() function shows two eigenvalues that are equal to each other, $\lambda = 1$, which is correct. But the list of eigenvectors does not cover all of them: it can be shown algebraically that every nonzero vector is an eigenvector of the identity matrix, yet NumPy returns only two. Software will not always show you the full picture, and that's why understanding the mathematical objects behind your code and models is so important.
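You can confirm in code that any vector you pick satisfies the eigenvector equation for the identity matrix (a small check using a seeded random vector):

```python
import numpy as np

A_identity = np.eye(2)

# Any vector, e.g. a random one, satisfies Av = 1 * v for the identity.
rng = np.random.default_rng(0)
v = rng.standard_normal(2)
print(np.allclose(A_identity @ v, 1 * v))  # True
```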

Check that the same thing happens when finding eigenvectors for a scaling (dilation) by a factor of 2 in both the x and y directions. In this case every vector faces the same direction as it did before, but is twice as long. Once again, every vector meets the definition of an eigenvector, but NumPy will only provide two.

A_scaling = np.array([[2, 0],[0, 2]])
A_scaling_eig = np.linalg.eig(A_scaling)
 
utils.plot_transformation(A_scaling, A_scaling_eig[1][:,0], A_scaling_eig[1][:,1]);
 
print(f"Matrix A_scaling:\n {A_scaling}\n\nEigenvalues of matrix A_scaling:\n {A_scaling_eig[0]}",
      f"\n\nEigenvectors of matrix A_scaling\n {A_scaling_eig[1]}")

2.5 - Example 5: Projection onto x-axis

Let's investigate one last interesting example: projection onto the x-axis. This transformation keeps only the x component of the vector and sets all y-values to 0.

The transformation that projects onto the x-axis can be defined by the matrix

$A_{\text{projection}}=\begin{bmatrix}1 & 0 \\ 0 & 0 \end{bmatrix}.$
A_projection = np.array([[1, 0],[0, 0]])
A_projection_eig = np.linalg.eig(A_projection)
 
utils.plot_transformation(A_projection, A_projection_eig[1][:,0], A_projection_eig[1][:,1]);
 
 
print(f"Matrix A_projection:\n {A_projection}\n\nEigenvalues of matrix A_projection:\n {A_projection_eig[0]}",
      f"\n\nEigenvectors of matrix A_projection\n {A_projection_eig[1]}")

This matrix has two real eigenvalues, and one of them is equal to 0. There is nothing wrong with this: $\lambda$ can be equal to 0! In this case it just means that anything lying on the y-axis is sent to the zero vector, since it has no component in the x-direction. Since there are two distinct eigenvalues, the transformation still has two linearly independent eigenvectors.
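A short check of both eigenvalue claims for the projection matrix:

```python
import numpy as np

A_projection = np.array([[1, 0], [0, 0]])

# Vectors on the x-axis are unchanged: eigenvalue 1.
print(A_projection @ np.array([3, 0]))  # [3 0]
# Vectors on the y-axis are sent to the zero vector: eigenvalue 0.
print(A_projection @ np.array([0, 5]))  # [0 0]
```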

Conclusion

Congratulations! You have reached the end of this lab. Hopefully by now you have a clearer intuition about what eigenvalues and eigenvectors represent, and why different 2 × 2 matrices have different numbers of eigenvectors.
