
# Basis transformations

## Change-of-basis matrix

Let $e = \{e_1, \ldots, e_n\}$ and $f = \{f_1, \ldots, f_n\}$ be two bases for a finite-dimensional real vector space $X$, and pick any element $x \in X$. Then $x = \sum_{j=1}^n x_j e_j$ has coordinates $(x_1, \ldots, x_n)_e$ in the basis $e$. Since $f$ is also a basis, we may express each $e_j$ in terms of $f$: $e_j = \sum_{k=1}^n c_{k,j}\, f_k, \quad j = 1, \ldots, n, \qquad\text{ and }\qquad x = \sum_{j=1}^n x_j \sum_{k=1}^n c_{k,j}\, f_k = \sum_{k=1}^n \bigg(\underbrace{\sum_{j=1}^n c_{k,j}\, x_j}_{\text{coord. in } f}\bigg) f_k.$ Put differently, $\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} = \begin{bmatrix} c_{1,1} & c_{1,2} & \ldots & c_{1,n} \\ c_{2,1} & c_{2,2} & \ldots & c_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ c_{n,1} & c_{n,2} & \ldots & c_{n,n} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$ defines the coordinates $(y_1, \ldots, y_n)$ of $x$ in the basis $f$: $x_f = C x_e.$ The real $n\times n$ matrix $C \in M_{n \times n}(\mathbb R)$ is called the change-of-basis matrix from $e$ to $f$.
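As a concrete sketch (a hypothetical $\mathbb R^2$ example, not from the text above): take $f_1 = (1,1)$ and $f_2 = (1,-1)$, so $e_1 = \tfrac12(f_1 + f_2)$ and $e_2 = \tfrac12(f_1 - f_2)$, which fixes the entries $c_{k,j}$ directly.

```python
import numpy as np

# Change-of-basis matrix from the standard basis e to f = {(1,1), (1,-1)}:
# column j holds the coordinates of e_j expressed in the basis f.
C = np.array([[0.5,  0.5],
              [0.5, -0.5]])

x_e = np.array([3.0, 1.0])   # coordinates of x in the basis e
x_f = C @ x_e                # coordinates of the same x in the basis f: x_f = C x_e

# Sanity check: recombining the f-vectors reproduces x.
f1, f2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
assert np.allclose(x_f[0] * f1 + x_f[1] * f2, x_e)
print(x_f)  # [2. 1.]
```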

• The change-of-basis matrix from $e = \{(1,0,0),(0,1,0), (0,0,1)\}$ to $f = \{(1,0,0), (1,1,0),(1,1,1)\}$ in $\mathbb R^3$: \begin{aligned} e_1 &= \sum_{k=1}^3 c_{k,1} f_k \quad\Longleftrightarrow\quad \begin{bmatrix} c_{1,1} \\ c_{2,1} \\ c_{3,1}\end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\0 \end{bmatrix} \\ e_2 &= \sum_{k=1}^3 c_{k,2} f_k \quad\Longleftrightarrow\quad \begin{bmatrix} c_{1,2} \\ c_{2,2} \\ c_{3,2}\end{bmatrix} = \begin{bmatrix} -1 \\ 1 \\0 \end{bmatrix} \\ e_3 &= \sum_{k=1}^3 c_{k,3} f_k \quad\Longleftrightarrow\quad \begin{bmatrix} c_{1,3} \\ c_{2,3} \\ c_{3,3}\end{bmatrix} = \begin{bmatrix} 0 \\ -1 \\1 \end{bmatrix} \end{aligned}. \tag{1} The change-of-basis matrix is $C = \begin{bmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix}.$ In particular, $\begin{bmatrix} 2 \\ -1 \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 2 \\ 0 \\ 1 \end{bmatrix} \qquad\text{ yields that }\qquad (2,0,1)_e = (2,-1,1)_f.$
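The worked example above can be checked numerically; a minimal NumPy sketch:

```python
import numpy as np

# Change-of-basis matrix from e (standard basis) to f = {(1,0,0), (1,1,0), (1,1,1)}.
C = np.array([[1, -1,  0],
              [0,  1, -1],
              [0,  0,  1]], dtype=float)

x_e = np.array([2.0, 0.0, 1.0])
x_f = C @ x_e
print(x_f)  # [ 2. -1.  1.]

# Sanity check: 2*f1 - 1*f2 + 1*f3 should reproduce (2, 0, 1).
F = np.array([[1, 0, 0], [1, 1, 0], [1, 1, 1]], dtype=float).T  # f_k as columns
assert np.allclose(F @ x_f, x_e)
```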

## Change-of-basis matrix as an inverse

If we write (1) in column form, we get: $\begin{vmatrix} \begin{bmatrix} . \\ e_1 \\ . \end{bmatrix} \begin{bmatrix} . \\ e_2 \\ . \end{bmatrix} \begin{bmatrix} . \\ e_3 \\ . \end{bmatrix} \end{vmatrix} = \begin{vmatrix} \begin{bmatrix} . \\ f_1 \\ . \end{bmatrix} \begin{bmatrix} . \\ f_2 \\ . \end{bmatrix} \begin{bmatrix} . \\ f_3 \\ . \end{bmatrix} \end{vmatrix} \: \begin{bmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix} \quad\Longleftrightarrow\quad \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & -1 & 0 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{bmatrix}.$ Thus $I = [f]C$ and $C = [f]^{-1}$, where $[f]$ is the matrix with the basis vectors $f_1, \ldots, f_n$ as column vectors.
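The identity $C = [f]^{-1}$ can be verified directly for the example; a sketch using `np.linalg.inv`:

```python
import numpy as np

# Basis matrix [f]: the vectors f_1, f_2, f_3 as columns.
F = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)

C = np.linalg.inv(F)  # change-of-basis matrix from e to f
print(C)
# [[ 1. -1.  0.]
#  [ 0.  1. -1.]
#  [ 0.  0.  1.]]

assert np.allclose(F @ C, np.eye(3))  # I = [f] C
```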

N.b. Matrices of this form—with zeros below the main diagonal—are called upper triangular. More precisely, $(a_{ij})_{ij}$ is upper triangular if $a_{ij} = 0$ for $i > j.$ Lower triangular matrices are defined in a similar fashion ($a_{ij} = 0$ for $j > i$).
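These definitions are easy to test in code; a small sketch, where `np.triu`/`np.tril` keep the upper/lower triangle and zero out the rest:

```python
import numpy as np

A = np.array([[1, -1,  0],
              [0,  1, -1],
              [0,  0,  1]])

# A is upper triangular iff zeroing everything below the diagonal changes nothing.
is_upper = np.allclose(np.triu(A), A)
is_lower = np.allclose(np.tril(A), A)
print(is_upper, is_lower)  # True False
```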

### › The inverse of a basis matrix is its inverse change-of-basis matrix

Let $[f] = [f_1, \ldots, f_n] \in M_{n \times n}(\mathbb C)$ denote a matrix with column basis vectors $f_1, \ldots, f_n \in \mathbb C^n$ expressed in the standard basis $e$. Then $\begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}_e = \underbrace{\begin{vmatrix} \begin{bmatrix} . \\ f_1 \\ . \end{bmatrix} \ldots \begin{bmatrix} . \\ f_n \\ . \end{bmatrix} \end{vmatrix}}_{[f]} \: \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}_f \quad \text{ expresses } (y_1, \ldots, y_n)_f \text{ in the basis } e,$ and $\begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}_f = \underbrace{\begin{vmatrix} \begin{bmatrix} . \\ f_1 \\ . \end{bmatrix} \ldots \begin{bmatrix} . \\ f_n \\ . \end{bmatrix} \end{vmatrix}^{-1}}_{[f]^{-1}} \: \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}_e \quad \text{ expresses } (x_1, \ldots, x_n)_e \text{ in the basis } f.$
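Both directions can be sketched with a (hypothetical) complex basis; note that in practice one computes $[f]^{-1} x_e$ via `np.linalg.solve` rather than forming the inverse explicitly:

```python
import numpy as np

# A hypothetical basis of C^2, stored as the columns of [f].
F = np.array([[1,  1j],
              [1, -1j]], dtype=complex)

y_f = np.array([2, 1 - 1j], dtype=complex)   # coordinates in the basis f
x_e = F @ y_f                                # the same vector in the standard basis e

# Going back: y_f = [f]^{-1} x_e, computed stably via a linear solve.
y_back = np.linalg.solve(F, x_e)
assert np.allclose(y_back, y_f)
```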

### › Any basis in a finite-dimensional vector space corresponds to an invertible matrix

Proof
Consider $X \cong \mathbb F^n$, $\mathbb F \in \{\mathbb R, \mathbb C\}$. We know from linear algebra that: $A \in M_{n \times n}(\mathbb F) \text{ invertible } \quad\Longleftrightarrow\quad \text{ the columns } A_1, \ldots, A_n \text{ of } A \text{ are linearly independent} \quad\Longleftrightarrow\quad \{A_1, \ldots, A_n\} \text{ is a basis for } \mathbb F^n.$ Hence collecting the vectors of a basis as the columns of a matrix always produces an invertible matrix, and conversely every invertible matrix arises this way. $\blacksquare$
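This equivalence is easy to probe numerically; a sketch in which the rank test `np.linalg.matrix_rank` stands in for "the columns are linearly independent" (the helper `is_basis` is ours, not from the text):

```python
import numpy as np

def is_basis(vectors):
    """Check whether the given vectors form a basis of F^n, F in {R, C}."""
    A = np.column_stack(vectors)          # vectors as columns of a matrix
    if A.shape[0] != A.shape[1]:
        return False                      # a basis of F^n needs exactly n vectors
    return np.linalg.matrix_rank(A) == A.shape[0]  # full rank <=> invertible

print(is_basis([(1, 0, 0), (1, 1, 0), (1, 1, 1)]))  # True  (the basis f above)
print(is_basis([(1, 2, 3), (2, 4, 6), (0, 0, 1)]))  # False (first two are parallel)
```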