You Could Have Come Up With Eigenvectors - Here's How
In the last post, we developed an intuition for matrices. We found that they are just compact representations of linear maps and that adding and multiplying matrices are just ways of combining the underlying linear maps.
In this post, we’re going to dive deeper into the world of linear algebra and cover eigenvectors. Eigenvectors are central to linear algebra and help us understand many interesting properties of linear maps, including:
The effect of applying the linear map repeatedly to an input.
How the linear map rotates the space. In fact, eigenvectors were first derived to study the axis of rotation of planets!
Eigenvectors helped early mathematicians study how the planets rotate. Image Source: Wikipedia.
For a more modern example, eigenvectors are at the heart of one of the most important algorithms of all time - the original PageRank algorithm that powers Google Search.
Our Goals
In this post we’re going to try to derive eigenvectors ourselves. To build a strong motivation, we’re going to explore basis vectors, matrices in different bases, and matrix diagonalization. So hang in there and wait for the big reveal - I promise it will be really exciting when it all comes together!
Everything we’ll be doing is going to be in the 2D space $\mathbb{R}^2$ - the standard coordinate plane over real numbers you’re probably already used to.
Basis Vectors
We saw in the last post how we can derive the matrix for a given linear map $f$:
$f$ (as we defined it in the last post) can be represented by the notation
$$\begin{bmatrix} f\left(\begin{bmatrix}1\\ 0\end{bmatrix}\right) & f\left(\begin{bmatrix}0\\ 1\end{bmatrix}\right) \end{bmatrix}=\begin{bmatrix}3 & 0\\ 0 & 5\end{bmatrix}$$
This is extremely cool - we can describe the entire function, and how it operates on an infinite number of points, with a little table of four values.
But why did we choose $\begin{bmatrix}1\\ 0\end{bmatrix}$ and $\begin{bmatrix}0\\ 1\end{bmatrix}$ to define the columns of the matrix? Why not some other pair like $\begin{bmatrix}3\\ 3\end{bmatrix}$ and $\begin{bmatrix}0\\ 0\end{bmatrix}$?
Intuitively, we think of $\begin{bmatrix}1\\ 0\end{bmatrix}$ and $\begin{bmatrix}0\\ 1\end{bmatrix}$ as units that we can use to create other vectors. In fact, we can break down every vector in $\mathbb{R}^2$ into some combination of these two vectors.
We can reach any point in the coordinate plane by combining our two vectors.
More formally, when two vectors can be combined in different ways to create all other vectors in $\mathbb{R}^2$, we say that those vectors span the space. The minimum number of vectors you need to span $\mathbb{R}^2$ is 2. So when we have 2 vectors that span $\mathbb{R}^2$, we call those vectors a basis.
$\begin{bmatrix}1\\ 0\end{bmatrix}$ and $\begin{bmatrix}0\\ 1\end{bmatrix}$ are basis vectors for $\mathbb{R}^2$.
You can think of basis vectors as the minimal building blocks for the space: we can combine them in different amounts to reach every vector we care about.
We can think of basis vectors as the building blocks of the space - we can combine them to create all possible vectors in the space. Image Source: instructables.com.
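If you like seeing this in code, here’s a minimal numpy sketch (my own illustration, not code from the post) that builds a vector out of the standard basis vectors:

```python
import numpy as np

# The standard basis vectors for R^2.
e1 = np.array([1, 0])
e2 = np.array([0, 1])

# Any vector, e.g. [3, 4], is a weighted combination of the basis vectors.
v = 3 * e1 + 4 * e2
print(v)  # [3 4]
```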
Other Basis Vectors for $\mathbb{R}^2$
Now are there other pairs of vectors that also form a basis for $\mathbb{R}^2$?
Let’s start with an example that definitely won’t work.
Bad Example
$\begin{bmatrix}1\\ 0\end{bmatrix}$ and $\begin{bmatrix}-1\\ 0\end{bmatrix}$.
Can you combine these vectors to create $\begin{bmatrix}2\\ 3\end{bmatrix}$? Clearly you can’t - we don’t have any way to move in the $y$ direction.
No combination of these two vectors could possibly get us the vector $P$.
Good Example
What about $\begin{bmatrix}1\\ 0\end{bmatrix}$ and $\begin{bmatrix}1\\ 1\end{bmatrix}$?
Our new basis vectors.
Surprisingly, you can! The image below shows how we can reach our previously unreachable point $P$.
Note we can combine 3 units of $\begin{bmatrix}1\\ 1\end{bmatrix}$ and $-1$ units of $\begin{bmatrix}1\\ 0\end{bmatrix}$ to get us the vector $P$.
I’ll leave a simple proof of this as an appendix at the end of this post so we can keep moving - but it’s not too complicated, so if you’re up for it, give it a go! The main thing we’ve learned here is that:
There are multiple valid bases for $\mathbb{R}^2$.
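Finding the combination that reaches a given point is just solving a small linear system. Here’s a short numpy sketch of the example above (the code is my own illustration):

```python
import numpy as np

# Put the basis vectors [1,0] and [1,1] in the columns of a matrix.
B = np.array([[1, 1],
              [0, 1]])

# Solve B @ coeffs = P for the combination that reaches P = [2, 3].
P = np.array([2, 3])
coeffs = np.linalg.solve(B, P)
print(coeffs)  # [-1.  3.]  ->  P = -1 * [1,0] + 3 * [1,1]
```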
Bases as New Coordinate Axes
In many ways, choosing a new basis is like choosing a new set of axes for the coordinate plane. When we switch our basis to, say, $B=\left\{\begin{bmatrix}1\\ 0\end{bmatrix},\begin{bmatrix}1\\ 1\end{bmatrix}\right\}$, our axes just rotate as shown below:
As our second basis vector changed from $\begin{bmatrix}0\\ 1\end{bmatrix}$ to $\begin{bmatrix}1\\ 1\end{bmatrix}$, our $y$-axis rotates to be in line with $\begin{bmatrix}1\\ 1\end{bmatrix}$.
As a result of this, the same notation for a vector means different things in different bases.
In the original basis, $\begin{bmatrix}3\\ 4\end{bmatrix}$ meant:
The vector you get when you compute $3\cdot\begin{bmatrix}1\\ 0\end{bmatrix}+4\cdot\begin{bmatrix}0\\ 1\end{bmatrix}$.
Or just 3 times the first basis vector plus 4 times the second basis vector.
In our usual notation, $\begin{bmatrix}3\\ 4\end{bmatrix}$ means 3 units of $\begin{bmatrix}1\\ 0\end{bmatrix}$ and 4 units of $\begin{bmatrix}0\\ 1\end{bmatrix}$.
Now when we use a different basis, the meaning of this notation actually changes.
For the basis $B=\left\{\begin{bmatrix}1\\ 0\end{bmatrix},\begin{bmatrix}1\\ 1\end{bmatrix}\right\}$, the vector $\begin{bmatrix}3\\ 4\end{bmatrix}_B$ means:
The vector you get from: $3\cdot\begin{bmatrix}1\\ 0\end{bmatrix}+4\cdot\begin{bmatrix}1\\ 1\end{bmatrix}$.
You can see this change below:
In the notation of basis $B$, $\begin{bmatrix}3\\ 4\end{bmatrix}_B$ means 3 units of $\begin{bmatrix}1\\ 0\end{bmatrix}$ and 4 units of $\begin{bmatrix}1\\ 1\end{bmatrix}$, giving us point $P_B$.
By changing the underlying axes, we changed the location of $P$ even though it’s still called $(3,4)$. You can see this below:
The point $P$ also changes position when we change the basis. It is still 3 parts first basis vector and 4 parts second basis vector, but since the underlying basis vectors have changed, the point they combine to changes too.
So the vectors $\begin{bmatrix}3\\ 4\end{bmatrix}$ and $\begin{bmatrix}3\\ 4\end{bmatrix}_B$ refer to different actual vectors, depending on the basis.
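To see the difference concretely, here’s a small numpy sketch (again my own illustration) computing where the coordinates $(3, 4)$ land in each basis:

```python
import numpy as np

b1 = np.array([1, 0])  # first basis vector of B
b2 = np.array([1, 1])  # second basis vector of B

# [3, 4] in the standard basis:
P = 3 * np.array([1, 0]) + 4 * np.array([0, 1])

# [3, 4]_B means 3*b1 + 4*b2 -- a different point:
P_B = 3 * b1 + 4 * b2

print(P)    # [3 4]
print(P_B)  # [7 4]
```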
Matrix Notation Based on Bases
Similarly, the same notation means different things for matrices depending on the basis. Earlier, the matrix $F$ for the function $f$ was represented by:
$$F=\begin{bmatrix} f\left(\begin{bmatrix}1\\ 0\end{bmatrix}\right) & f\left(\begin{bmatrix}0\\ 1\end{bmatrix}\right) \end{bmatrix}$$
When I use the basis $B=\left\{\begin{bmatrix}1\\ 0\end{bmatrix},\begin{bmatrix}1\\ 1\end{bmatrix}\right\}$, the matrix $F_B$ in basis $B$ becomes:
$$F_B=\begin{bmatrix} f\left(\begin{bmatrix}1\\ 0\end{bmatrix}\right)_B & f\left(\begin{bmatrix}1\\ 1\end{bmatrix}\right)_B \end{bmatrix}$$
More generally, for a basis $B=\{b_1, b_2\}$, the matrix is:
$$F_B=\begin{bmatrix} f(b_1)_B & f(b_2)_B \end{bmatrix}$$
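If you want to compute $F_B$ numerically, the definition above is equivalent to the standard change-of-basis identity $F_B = B^{-1} F B$, where the columns of $B$ are the basis vectors. A sketch (the identity is standard linear algebra; the helper name is my own):

```python
import numpy as np

def matrix_in_basis(F, basis):
    """Rewrite F in the basis whose vectors are the columns of `basis`.

    Each column of F @ basis is f(b_i) in standard coordinates;
    multiplying by basis^{-1} converts those columns to B-coordinates.
    """
    return np.linalg.inv(basis) @ F @ basis

F = np.array([[3, 0],
              [0, 5]])
B = np.array([[1, 1],
              [0, 1]])  # columns are b1 = [1,0] and b2 = [1,1]

print(matrix_in_basis(F, B))  # [[ 3. -2.], [ 0.  5.]]
```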
The Power of Diagonals
We took this short detour into notation for a very specific reason - rewriting a matrix in a different basis is actually a neat trick that allows us to reconfigure the matrix to make it easier to use. How? Let’s find out with a quick example.
Let’s say I have a matrix $F$ (representing a linear function) that I need to apply again and again (say 5 times) to a vector $v$.
This would be:
$$F \cdot F \cdot F \cdot F \cdot F \cdot v$$
Usually, calculating this is really cumbersome.
Can you imagine doing this 5 times in a row? Yeesh. Image Source: Wikipedia.
But let’s imagine for a moment that $F$ was a diagonal matrix (i.e. something like $F=\begin{bmatrix}a & 0\\ 0 & b\end{bmatrix}$). If this were the case, then this multiplication would be EASY - for instance, $F \cdot F = \begin{bmatrix}a^2 & 0\\ 0 & b^2\end{bmatrix}$.
More generally,
$$F^n=\begin{bmatrix}a^n & 0\\ 0 & b^n\end{bmatrix}$$
This is way easier to work with!
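You can check this directly in numpy (my sketch; `np.linalg.matrix_power` does the repeated multiplication for us):

```python
import numpy as np

a, b = 2.0, 5.0
F = np.diag([a, b])  # the diagonal matrix [[a, 0], [0, b]]

# Multiplying F by itself five times...
print(np.linalg.matrix_power(F, 5))

# ...just raises each diagonal entry to the fifth power.
print(np.diag([a**5, b**5]))  # [[32. 0.], [0. 3125.]]
```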
So how can we get $F$ to be a diagonal matrix?
Which Basis Makes a Matrix Diagonal?
Earlier, we saw that choosing a new basis makes us change how we write down the matrix. So can we find a basis $B=\{b_1, b_2\}$ that converts $F$ into a diagonal matrix?
From earlier, we know that $F_B$, the matrix $F$ in the basis $B$, is written as:
$$F_B=\begin{bmatrix} f(b_1)_B & f(b_2)_B \end{bmatrix}$$
For this to be diagonal, we must have:
$$F_B=\begin{bmatrix} f(b_1)_B & f(b_2)_B \end{bmatrix}=\begin{bmatrix}\lambda_1 & 0\\ 0 & \lambda_2\end{bmatrix}_B$$
for some $\lambda_1$ and $\lambda_2$ (i.e. the top-right and bottom-left elements are 0).
This implies:
$f(b_1)_B=\begin{bmatrix}\lambda_1\\ 0\end{bmatrix}_B$.
$f(b_2)_B=\begin{bmatrix}0\\ \lambda_2\end{bmatrix}_B$.
Recall our discussion on vector notation in a different basis:
Say my basis is $B=\left\{\begin{bmatrix}1\\ 0\end{bmatrix},\begin{bmatrix}1\\ 1\end{bmatrix}\right\}$.
Then the vector $\begin{bmatrix}3\\ 4\end{bmatrix}_B$ means:
The vector you get when you compute: $3\cdot\begin{bmatrix}1\\ 0\end{bmatrix}+4\cdot\begin{bmatrix}1\\ 1\end{bmatrix}$.
So, we know the following additional information:
$$f(b_1)=\begin{bmatrix}\lambda_1\\ 0\end{bmatrix}_B=\lambda_1\cdot b_1+0\cdot b_2$$
$$f(b_1)=\lambda_1\cdot b_1$$
Similarly,
$$f(b_2)=\begin{bmatrix}0\\ \lambda_2\end{bmatrix}_B=0\cdot b_1+\lambda_2\cdot b_2$$
$$f(b_2)=\lambda_2\cdot b_2$$
Seeing this Visually
What do these vectors look like on our coordinate axes?
We saw earlier that choosing a new basis $B=\{b_1, b_2\}$ creates a new set of coordinate axes for $\mathbb{R}^2$, like below:
A new basis $B=\{b_1, b_2\}$ gives us new coordinate axes.
Let’s plot $f(b_1)_B=\begin{bmatrix}\lambda_1\\ 0\end{bmatrix}_B$:
In the graph above, we can see that $\begin{bmatrix}\lambda_1\\ 0\end{bmatrix}_B=\lambda_1 b_1$, so
$$f(b_1)_B=\lambda_1 b_1$$
Similarly, let’s plot $f(b_2)_B=\begin{bmatrix}0\\ \lambda_2\end{bmatrix}_B$:
From the above, we see clearly that $\begin{bmatrix}0\\ \lambda_2\end{bmatrix}_B=\lambda_2 b_2$, so
$$f(b_2)_B=\lambda_2 b_2$$
Rules For Getting a Diagonal
So if we can find a basis $B$ formed by $b_1$ and $b_2$ such that:
$f(b_1)=\lambda_1 b_1$ and
$f(b_2)=\lambda_2 b_2$,
then $F$ can be rewritten as $F_B$, where
$$F_B=\begin{bmatrix}\lambda_1 & 0\\ 0 & \lambda_2\end{bmatrix}$$
A nice diagonal matrix!
Enter Eigenvectors
Is there a special name for the vectors $b_1$ and $b_2$ above that magically let us rewrite a matrix as a diagonal? Yes! These vectors are the eigenvectors of $f$. That’s right - you derived eigenvectors all by yourself.
You the real MVP.
More formally, we define an eigenvector of $f$ as any non-zero vector $v$ such that, for some scalar $\lambda$:
$$f(v)=\lambda v$$
or
$$F\cdot v=\lambda v$$
The basis formed by the eigenvectors is known as the eigenbasis. Once we switch to using the eigenbasis, our original problem of finding $(f \circ f \circ f \circ f \circ f)(v)$ becomes:
$$F_B \cdot F_B \cdot F_B \cdot F_B \cdot F_B \cdot v_B=\begin{bmatrix}\lambda_1^5 & 0\\ 0 & \lambda_2^5\end{bmatrix} \cdot v_B$$
So. Much. Easier.
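Here’s the definition in numpy form (the matrix is arbitrary, chosen only for illustration): `np.linalg.eig` returns the eigenvalues and eigenvectors, and we can check that $F \cdot v = \lambda v$ holds.

```python
import numpy as np

# An arbitrary matrix, chosen only for illustration.
F = np.array([[3.0, 1.0],
              [0.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(F)  # the eigenvectors are the columns

v, lam = eigvecs[:, 0], eigvals[0]
print(F @ v)    # equals...
print(lam * v)  # ...lambda * v, so v is an eigenvector
```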
An Example
Well, this has all been pretty theoretical, with abstract vectors like $b$ and $v$ - so let’s make it concrete with real vectors and matrices to see it in action.
Imagine we had the matrix $F=\begin{bmatrix}2 & 1\\ 1 & 2\end{bmatrix}$. Since the goal of this post is not learning how to find eigenvectors, I’m just going to give you the eigenvectors for this matrix. They are:
$$b_1=\begin{bmatrix}1\\ -1\end{bmatrix} \qquad b_2=\begin{bmatrix}1\\ 1\end{bmatrix}$$
The eigenbasis is just $B=\{b_1, b_2\}$.
What is $F_B$, the matrix $F$ written in the eigenbasis $B$?
Since $F_B=\begin{bmatrix} f(b_1)_B & f(b_2)_B \end{bmatrix}$, we need to find:
$f(b_1)_B$ and $f(b_2)_B$
We’ll break this down by first finding $f(b_1)$ and $f(b_2)$, and then rewriting them in the notation of the eigenbasis $B$ to get $f(b_1)_B$ and $f(b_2)_B$.
Finding $f(b_1)$
$f(b_1)$ is:
$$f(b_1)=F\cdot b_1=\begin{bmatrix}2 & 1\\ 1 & 2\end{bmatrix}\cdot\begin{bmatrix}1\\ -1\end{bmatrix}=\begin{bmatrix}1\\ -1\end{bmatrix}$$
Finding $f(b_2)$
Similarly,
$$f(b_2)=F\cdot b_2=\begin{bmatrix}2 & 1\\ 1 & 2\end{bmatrix}\cdot\begin{bmatrix}1\\ 1\end{bmatrix}=\begin{bmatrix}3\\ 3\end{bmatrix}$$
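You can confirm both computations in numpy (just a check, nothing new):

```python
import numpy as np

F = np.array([[2, 1],
              [1, 2]])
b1 = np.array([1, -1])
b2 = np.array([1, 1])

print(F @ b1)  # [ 1 -1] = 1 * b1
print(F @ b2)  # [3 3]   = 3 * b2
```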
Rewriting the vectors in the basis $B$
We’ve now found $f(b_1)$ and $f(b_2)$. We need to rewrite these vectors in the notation for our new basis $B$. Since $f(b_1)=\begin{bmatrix}1\\ -1\end{bmatrix}=1\cdot b_1+0\cdot b_2$, we get $f(b_1)_B=\begin{bmatrix}1\\ 0\end{bmatrix}_B$. Similarly, $f(b_2)=\begin{bmatrix}3\\ 3\end{bmatrix}=0\cdot b_1+3\cdot b_2$, so $f(b_2)_B=\begin{bmatrix}0\\ 3\end{bmatrix}_B$.
Putting this all together,
$$F_B=\begin{bmatrix} f(b_1)_B & f(b_2)_B \end{bmatrix}=\begin{bmatrix}1 & 0\\ 0 & 3\end{bmatrix}$$
So we get the nice diagonal we wanted!
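As a sanity check, here’s the whole example end to end in numpy, using the change-of-basis identity $F_B = B^{-1} F B$ from earlier (again, my own sketch):

```python
import numpy as np

F = np.array([[2, 1],
              [1, 2]])

# Columns are the eigenvectors b1 = [1,-1] and b2 = [1,1].
B = np.array([[1, 1],
              [-1, 1]])

F_B = np.linalg.inv(B) @ F @ B
print(F_B)  # [[1. 0.], [0. 3.]] -- the diagonal we wanted

# The payoff: powers of F are easy in the eigenbasis.
print(B @ np.diag([1.0**5, 3.0**5]) @ np.linalg.inv(B))  # F^5 via eigenbasis
print(np.linalg.matrix_power(F, 5))                      # direct check
```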
Geometric Interpretation of Eigenvectors
Eigenvectors also have extremely interesting geometric properties worth understanding. To see this, let’s go back to the definition of an eigenvector of a linear map $f$ and its matrix $F$.
An eigenvector is a vector $v$ such that:
$$F\cdot v=\lambda v$$
How are $\lambda v$ and $v$ related? $\lambda v$ is just a scaling of $v$ in the same direction - it can’t be rotated in any way.
Notice how $\lambda v$ is in the same direction as $v$. Image Source: Wikipedia.
In this sense, the eigenvectors of a linear map $f$ show us the axes along which the map simply scales or stretches its inputs.
The single best visualization I’ve seen of this is by 3Blue1Brown, who has a fantastic YouTube channel on visualizing math in general.
I’m embedding his video on eigenvectors and their visualizations below as it is the best geometric intuition out there:
Like we saw at the beginning of this post, eigenvectors are not just an abstract concept used by eccentric mathematicians in dark rooms - they underpin some of the most useful technology in our lives, including Google Search. For the brave, here’s Larry Page and Sergey Brin’s original paper on PageRank, the algorithm that makes it possible for us to type a few letters into a search box and instantly find every relevant website on the internet.
In the next post, we’re going to actually dig through this paper and see how eigenvectors are applied in Google Search!
Stay tuned.
Appendix
Proof that $\begin{bmatrix}1\\ 0\end{bmatrix}$ and $\begin{bmatrix}1\\ 1\end{bmatrix}$ span $\mathbb{R}^2$:
1. We know already that $\begin{bmatrix}1\\ 0\end{bmatrix}$ and $\begin{bmatrix}0\\ 1\end{bmatrix}$ can be used to reach every coordinate.
2. We can create $\begin{bmatrix}0\\ 1\end{bmatrix}$ by computing:
$$\begin{bmatrix}1\\ 1\end{bmatrix}-\begin{bmatrix}1\\ 0\end{bmatrix}=\begin{bmatrix}0\\ 1\end{bmatrix}$$
Thus we can combine our vectors to obtain both $\begin{bmatrix}1\\ 0\end{bmatrix}$ and $\begin{bmatrix}0\\ 1\end{bmatrix}$. By point 1, this means every vector in $\mathbb{R}^2$ is reachable by combining $\begin{bmatrix}1\\ 0\end{bmatrix}$ and $\begin{bmatrix}1\\ 1\end{bmatrix}$.
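And if you’d rather check this numerically: the matrix with $\begin{bmatrix}1\\ 0\end{bmatrix}$ and $\begin{bmatrix}1\\ 1\end{bmatrix}$ as its columns is invertible, so every target vector is reachable (a numpy sketch of the same fact):

```python
import numpy as np

# Columns are the two vectors we claim span R^2.
B = np.array([[1, 1],
              [0, 1]])

print(np.linalg.det(B))  # nonzero (= 1.0), so the columns span R^2

# Any target is reachable; solve for the combination.
target = np.array([-5.0, 2.0])
coeffs = np.linalg.solve(B, target)
print(B @ coeffs)  # recovers [-5.  2.]
```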