8.9 Inverse Matrices
Two numbers are multiplicative inverses if their product is 1. Every number except 0 has a multiplicative inverse. Similarly, two matrices are inverses of each other if their product is the identity matrix.
What kinds of matrices do not have inverses?
Inverses of Matrices
Multiplicative inverses are two numbers whose product is 1, or two matrices whose product is the identity matrix. Consider a matrix \(A\) that has inverse \(A^{-1}\). How do you find \(A^{-1}\) if you only have \(A\)?
\(A=\left[\begin{array}{ccc}1 & 2 & 3 \\ 1 & 0 & 1 \\ 0 & 2 & -1\end{array}\right], \quad A^{-1}=?\)
The answer is that you augment matrix \(A\) with the identity matrix and row reduce.
\(\left[\begin{array}{ccc|ccc}1 & 2 & 3 & 1 & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 \\ 0 & 2 & -1 & 0 & 0 & 1\end{array}\right]\)
\(\begin{aligned} R_{1} \cdot-1+R_{2} & \rightarrow\left[\begin{array}{ccc|ccc}1 & 2 & 3 & 1 & 0 & 0 \\ 0 & -2 & -2 & -1 & 1 & 0 \\ 0 & 2 & -1 & 0 & 0 & 1\end{array}\right] \\ R_{2}+R_{3} & \rightarrow\left[\begin{array}{ccc|ccc}1 & 2 & 3 & 1 & 0 & 0 \\ 0 & -2 & -2 & -1 & 1 & 0 \\ 0 & 0 & -3 & -1 & 1 & 1\end{array}\right] \\ R_{2} \div-2 & \rightarrow\left[\begin{array}{ccc|ccc}1 & 2 & 3 & 1 & 0 & 0 \\ 0 & 1 & 1 & \frac{1}{2} & -\frac{1}{2} & 0 \\ 0 & 0 & -3 & -1 & 1 & 1\end{array}\right] \end{aligned}\)
\(R_{3} \div-3 \rightarrow\left[\begin{array}{ccc|ccc}1 & 2 & 3 & 1 & 0 & 0 \\ 0 & 1 & 1 & \frac{1}{2} & -\frac{1}{2} & 0 \\ 0 & 0 & 1 & \frac{1}{3} & -\frac{1}{3} & -\frac{1}{3}\end{array}\right]\)
\(R_{3} \cdot-3+R_{1} \rightarrow\left[\begin{array}{ccc|ccc}1 & 2 & 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & \frac{1}{2} & -\frac{1}{2} & 0 \\ 0 & 0 & 1 & \frac{1}{3} & -\frac{1}{3} & -\frac{1}{3}\end{array}\right]\)
\(R_{3} \cdot-1+R_{2} \rightarrow\left[\begin{array}{ccc|ccc}1 & 2 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & \frac{1}{6} & -\frac{1}{6} & \frac{1}{3} \\ 0 & 0 & 1 & \frac{1}{3} & -\frac{1}{3} & -\frac{1}{3}\end{array}\right]\)
\(R_{2} \cdot-2+R_{1} \rightarrow\left[\begin{array}{ccc|ccc}1 & 0 & 0 & -\frac{1}{3} & \frac{4}{3} & \frac{1}{3} \\ 0 & 1 & 0 & \frac{1}{6} & -\frac{1}{6} & \frac{1}{3} \\ 0 & 0 & 1 & \frac{1}{3} & -\frac{1}{3} & -\frac{1}{3}\end{array}\right]\)
The matrix on the right is the inverse matrix \(A^{-1}\).
\(A^{-1}=\left[\begin{array}{ccc}-\frac{1}{3} & \frac{4}{3} & \frac{1}{3} \\ \frac{1}{6} & -\frac{1}{6} & \frac{1}{3} \\ \frac{1}{3} & -\frac{1}{3} & -\frac{1}{3}\end{array}\right]\)
Fractions are usually unavoidable when computing inverses.
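The augment-and-row-reduce procedure can also be sketched in code. Below is a minimal Python sketch (the function name `invert` is my own choice); it uses exact fractions so the arithmetic matches the hand computation with no rounding.

```python
from fractions import Fraction

def invert(A):
    """Invert a square matrix by row reducing the augmented matrix [A | I]."""
    n = len(A)
    # Augment A with the identity matrix, using exact fractions.
    M = [[Fraction(x) for x in row] + [Fraction(1 if i == j else 0) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Find a row at or below the diagonal with a nonzero pivot.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None  # rows are linearly dependent: no inverse
        M[col], M[pivot] = M[pivot], M[col]
        # Scale the pivot row so the pivot entry becomes 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Add multiples of the pivot row to clear the rest of the column.
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # The right half of the reduced augmented matrix is the inverse.
    return [row[n:] for row in M]

A = [[1, 2, 3], [1, 0, 1], [0, 2, -1]]
A_inv = invert(A)
# First row matches the hand computation: [-1/3, 4/3, 1/3]
```

The same function returns `None` for a non-invertible matrix, since row reduction eventually fails to find a pivot.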
One reason inverses are so powerful is that they allow you to solve a system of equations with the same logic you would use to solve a single linear equation. Consider the following system, built from the coefficients of matrix \(A\) above.
\(\begin{aligned} x+2 y+3 z &=96 \\ x+0 y+z &=36 \\ 0 x+2 y-z &=-12 \end{aligned}\)
By writing this system as a matrix equation you get:
\(\left[\begin{array}{lll}1 & 2 & 3 \\ 1 & 0 & 1 \\ 0 & 2 & -1\end{array}\right] \cdot\left[\begin{array}{l}x \\ y \\ z\end{array}\right]=\left[\begin{array}{c}96 \\ 36 \\ -12\end{array}\right]\)
\(A \cdot\left[\begin{array}{l}x \\ y \\ z\end{array}\right]=\left[\begin{array}{c}96 \\ 36 \\ -12\end{array}\right]\)
If this were an ordinary linear equation \(ax=b\), with a constant \(a\) times the variable equal to a constant \(b\), you would multiply both sides by the multiplicative inverse of the coefficient. Do the same here, multiplying both sides on the left by \(A^{-1}\); the order matters because matrix multiplication is not commutative.
\(A^{-1} \cdot A \cdot\left[\begin{array}{l}x \\ y \\ z\end{array}\right]=A^{-1} \cdot\left[\begin{array}{c}96 \\ 36 \\ -12\end{array}\right]\)
\(\left[\begin{array}{l}x \\ y \\ z\end{array}\right]=A^{-1} \cdot\left[\begin{array}{c}96 \\ 36 \\ -12\end{array}\right]\)
All that is left is for you to substitute in and to perform the matrix multiplication to get the solution.
\(\left[\begin{array}{l}x \\ y \\ z\end{array}\right]=\left[\begin{array}{ccc}-\frac{1}{3} & \frac{4}{3} & \frac{1}{3} \\ \frac{1}{6} & -\frac{1}{6} & \frac{1}{3} \\ \frac{1}{3} & -\frac{1}{3} & -\frac{1}{3}\end{array}\right] \cdot\left[\begin{array}{c}96 \\ 36 \\ -12\end{array}\right]\)
\(\left[\begin{array}{l}x \\ y \\ z\end{array}\right]=\left[\begin{array}{c}-\frac{1}{3} \cdot 96+\frac{4}{3} \cdot 36+\frac{1}{3} \cdot(-12) \\ \frac{1}{6} \cdot 96-\frac{1}{6} \cdot 36+\frac{1}{3} \cdot(-12) \\ \frac{1}{3} \cdot 96-\frac{1}{3} \cdot 36-\frac{1}{3} \cdot(-12)\end{array}\right]\)
\(\left[\begin{array}{l}x \\ y \\ z\end{array}\right]=\left[\begin{array}{c}12 \\ 6 \\ 24\end{array}\right]\)
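The substitution and matrix-vector multiplication above can be checked with a short Python sketch, entering the inverse found earlier as exact fractions (the names `A_inv` and `b` are illustrative):

```python
from fractions import Fraction

F = Fraction
# The inverse found by row reduction, entered as exact fractions.
A_inv = [[F(-1, 3), F(4, 3), F(1, 3)],
         [F(1, 6), F(-1, 6), F(1, 3)],
         [F(1, 3), F(-1, 3), F(-1, 3)]]
b = [96, 36, -12]

# The solution (x, y, z) is the matrix-vector product of A_inv and b:
# each entry is the dot product of a row of A_inv with b.
x, y, z = (sum(a * v for a, v in zip(row, b)) for row in A_inv)
print(x, y, z)  # 12 6 24
```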
Examples
Earlier, you were asked what types of matrices do not have inverses. Non-square matrices do not have (two-sided) inverses. Square matrices whose determinant equals zero also do not have inverses.
Find the inverse of the following matrix.
\(\left[\begin{array}{cc}1 & 6 \\ 4 & 24\end{array}\right]\)
\(\left[\begin{array}{cc|cc}1 & 6 & 1 & 0 \\ 4 & 24 & 0 & 1\end{array}\right]\)
\(R_{1} \cdot-4+R_{2} \rightarrow\left[\begin{array}{ll|ll}1 & 6 & 1 & 0 \\ 0 & 0 & -4 & 1\end{array}\right]\)
This matrix is not invertible because its rows are not linearly independent: the second row is 4 times the first. To test whether a square matrix is invertible, check whether its determinant is zero. If the determinant is zero, the rows are linearly dependent and the matrix has no inverse.
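For a \(2 \times 2\) matrix \(\left[\begin{array}{ll}a & b \\ c & d\end{array}\right]\), the determinant is \(ad-bc\). A short Python sketch of this invertibility test (the helper name `det2` is my own):

```python
def det2(m):
    # Determinant of a 2x2 matrix [[a, b], [c, d]] is a*d - b*c.
    (a, b), (c, d) = m
    return a * d - b * c

print(det2([[1, 6], [4, 24]]))  # 0 -> not invertible
print(det2([[4, 5], [2, 3]]))   # 2 -> invertible
```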
Confirm matrix \(A\) and \(A^{-1}\) are inverses by computing \(A^{-1} \cdot A\) and \(A \cdot A^{-1}\).
\(A=\left[\begin{array}{ccc}1 & 2 & 3 \\ 1 & 0 & 1 \\ 0 & 2 & -1\end{array}\right], A^{-1}=\left[\begin{array}{ccc}-\frac{1}{3} & \frac{4}{3} & \frac{1}{3} \\ \frac{1}{6} & -\frac{1}{6} & \frac{1}{3} \\ \frac{1}{3} & -\frac{1}{3} & -\frac{1}{3}\end{array}\right]\)
\(A^{-1} \cdot A=\left[\begin{array}{ccc}-\frac{1}{3} & \frac{4}{3} & \frac{1}{3} \\ \frac{1}{6} & -\frac{1}{6} & \frac{1}{3} \\ \frac{1}{3} & -\frac{1}{3} & -\frac{1}{3}\end{array}\right] \cdot\left[\begin{array}{ccc}1 & 2 & 3 \\ 1 & 0 & 1 \\ 0 & 2 & -1\end{array}\right]\)
\(a_{11}=-\frac{1}{3} \cdot 1+\frac{4}{3} \cdot 1+\frac{1}{3} \cdot 0=1\)
\(a_{22}=\frac{1}{6} \cdot 2-\frac{1}{6} \cdot 0+\frac{1}{3} \cdot 2=1\)
\(a_{33}=\frac{1}{3} \cdot 3-\frac{1}{3} \cdot 1-\frac{1}{3}(-1)=1\)
Note that the rest of the entries turn out to be zero. This is left for you to confirm.
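The remaining entries can be confirmed programmatically. A minimal Python sketch that multiplies the two matrices in both orders and compares against the identity (the helper name `matmul` is illustrative):

```python
from fractions import Fraction

F = Fraction
A = [[1, 2, 3], [1, 0, 1], [0, 2, -1]]
A_inv = [[F(-1, 3), F(4, 3), F(1, 3)],
         [F(1, 6), F(-1, 6), F(1, 3)],
         [F(1, 3), F(-1, 3), F(-1, 3)]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def matmul(X, Y):
    # Entry (i, j) is the dot product of row i of X with column j of Y.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

print(matmul(A_inv, A) == I)  # True
print(matmul(A, A_inv) == I)  # True
```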
Use a calculator to compute \(A^{-1}\), then compute \(A^{-1} \cdot A\), \(A \cdot A^{-1}\), and
\(A^{-1} \cdot\left[\begin{array}{c}96 \\ 36 \\ -12\end{array}\right]\)
Start by entering just matrix \(A\) into the calculator.
To compute matrix \(A^{-1}\) use the inverse button programmed into the calculator. Do not try to raise the matrix to the negative one exponent. This will not work.
Note that the calculator may return decimal versions of the fractions and will not show the entire matrix on its limited display. You will have to scroll to the right to confirm that \(A^{-1}\) matches what you have already found. Once you have found \(A^{-1}\) go ahead and store it as matrix \(B\) so you do not need to type in the entries.
\(A^{-1} \cdot A=B \cdot A\)
\(A \cdot A^{-1}=A \cdot B\)
\(A^{-1} \cdot\left[\begin{array}{c}96 \\ 36 \\ -12\end{array}\right]=B \cdot\left[\begin{array}{c}96 \\ 36 \\ -12\end{array}\right]=B \cdot C\)
You need to create matrix \(C=\left[\begin{array}{c}96 \\ 36 \\ -12\end{array}\right]\)
Being able to effectively use a calculator should improve your understanding of matrices and allow you to check all the work you do by hand.
The identity matrix happens to be its own inverse. Find another matrix that is its own inverse.
Helmert came up with a very clever matrix. Here are the \(2 \times 2\) and \(3 \times 3\) versions. The \(2 \times 2\) version is symmetric as well as orthogonal, so it is its own inverse. The \(3 \times 3\) version is orthogonal but not symmetric, so its inverse is its transpose rather than itself.
\(\left[\begin{array}{cc}\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\end{array}\right],\left[\begin{array}{ccc}\frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 \\ \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & -\frac{2}{\sqrt{6}}\end{array}\right]\)
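A quick numeric check in Python: squaring the symmetric \(2 \times 2\) matrix gives the identity, so it is its own inverse, while the \(3 \times 3\) matrix gives the identity only when multiplied by its transpose (the helper names `matmul`, `transpose`, and `is_identity` are my own):

```python
import math

s2, s3, s6 = math.sqrt(2), math.sqrt(3), math.sqrt(6)
H2 = [[1 / s2, 1 / s2],
      [1 / s2, -1 / s2]]
H3 = [[1 / s3, 1 / s3, 1 / s3],
      [1 / s2, -1 / s2, 0.0],
      [1 / s6, 1 / s6, -2 / s6]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(M):
    return [list(col) for col in zip(*M)]

def is_identity(M, tol=1e-9):
    # Compare every entry against the identity, allowing floating-point error.
    return all(math.isclose(M[i][j], 1.0 if i == j else 0.0, abs_tol=tol)
               for i in range(len(M)) for j in range(len(M)))

print(is_identity(matmul(H2, H2)))             # True: H2 is its own inverse
print(is_identity(matmul(H3, transpose(H3))))  # True: the inverse of H3 is its transpose
print(is_identity(matmul(H3, H3)))             # False: H3 is not its own inverse
```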
Find the inverse of each of the following matrices, if possible. Make sure to do some by hand and some with your calculator.
1. \(\left[\begin{array}{ll}4 & 5 \\ 2 & 3\end{array}\right]\)
2. \(\left[\begin{array}{cc}-3 & 6 \\ 2 & 5\end{array}\right]\)
3. \(\left[\begin{array}{cc}-1 & 2 \\ 2 & 0\end{array}\right]\)
4. \(\left[\begin{array}{ll}1 & 6 \\ 0 & 1\end{array}\right]\)
5. \(\left[\begin{array}{cc}6 & 5 \\ 2 & -2\end{array}\right]\)
6. \(\left[\begin{array}{ll}4 & 2 \\ 6 & 3\end{array}\right]\)
7. \(\left[\begin{array}{ccc}-1 & 3 & -4 \\ 4 & 2 & 1 \\ 1 & 2 & 5\end{array}\right]\)
8. \(\left[\begin{array}{ccc}4 & 5 & 8 \\ 9 & 0 & 1 \\ 0 & 3 & -2\end{array}\right]\)
9. \(\left[\begin{array}{ccc}0 & 7 & -1 \\ 2 & -3 & 1 \\ 6 & 8 & 0\end{array}\right]\)
10. \(\left[\begin{array}{ccc}4 & 2 & -3 \\ 2 & 4 & 5 \\ 1 & 8 & 0\end{array}\right]\)
11. \(\left[\begin{array}{ccc}-2 & -6 & -12 \\ -1 & -5 & -2 \\ 2 & 3 & 4\end{array}\right]\)
12. \(\left[\begin{array}{ccc}-2 & 6 & 3 \\ 2 & 4 & 0 \\ -8 & 2 & 1\end{array}\right]\)
13. Show that Helmert's \(2 \times 2\) matrix is its own inverse: \(\left[\begin{array}{ll}\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\end{array}\right]\)
14. Show that the inverse of Helmert's \(3 \times 3\) matrix is its transpose: \(\left[\begin{array}{ccc}\frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} \\ \frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} & 0 \\ \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & -\frac{2}{\sqrt{6}}\end{array}\right]\)
15. Non-square matrices sometimes have left inverses, where \(A^{-1} \cdot A=I\), or right inverses, where \(A \cdot A^{-1}=I\). Why can't non-square matrices have "regular" inverses?