6. Systems of 3 Variables & Intro to Matrices
Before you start
- Solve 2-variable systems by substitution and elimination
- Track multi-step elimination work in writing without losing terms
- Distribute scalars across equations and update both sides consistently
- Read a 2x2 matrix and identify its rows, columns, and entries
By the end you'll be able to
- Solve a 3-variable system by reducing to 2-variable systems via consistent elimination
- Identify the dimensions of a matrix and locate any entry by row and column
- Apply the three legal row operations (swap, scale, add multiple of one row) without changing the solution set
- Recognize an upper-triangular matrix and back-substitute from the bottom row
- Multiply a row vector by a column vector to compute a dot product
Systems of 3 Variables & Intro to Matrices
When you go from 2 to 3 variables, the geometric picture changes from “lines in a plane” to “planes in space.” Three planes generally meet at a single point, but they can also miss each other entirely or share a line.
Solving a 3-variable system by elimination
Strategy: eliminate one variable using two of the equations, then eliminate the same variable using a different pair. You’re left with a 2-variable system, which you already know how to solve.
Example:
(1) x + y + z = 6
(2) x − y + z = 2
(3) x + y − z = 0
Add (1) and (2): the y terms cancel, leaving 2x + 2z = 8, so x + z = 4.
Add (2) and (3): y cancels again (and so does z), leaving 2x = 2, so x = 1.
Back-substitute: z = 4 − 1 = 3, and then (1) gives y = 6 − 1 − 3 = 2. Solution: (x, y, z) = (1, 2, 3).
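A quick way to sanity-check the by-hand result is to substitute it back into all three equations. A minimal sketch in plain Python, included only as an optional check:

```python
# Substitute the candidate solution back into each original equation.
x, y, z = 1, 2, 3
print(x + y + z == 6)  # equation (1): True
print(x - y + z == 2)  # equation (2): True
print(x + y - z == 0)  # equation (3): True
```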
The pattern is mechanical, but the bookkeeping gets tedious fast. Matrices give us a cleaner way to track it.
Matrices — the bookkeeping device
A matrix is a rectangular grid of numbers. The system above can be written as:
| 1 1 1 | | x | | 6 |
| 1 −1 1 | | y | = | 2 |
| 1 1 −1 | | z | | 0 |
This is the matrix form A·x = b: the coefficient matrix A times the column of unknowns x equals the right-hand-side column b.
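As a sketch of how this form is used in practice, a numerical library can solve A·x = b directly. This assumes NumPy; np.linalg.solve runs an optimized version of the elimination described below:

```python
import numpy as np

# Coefficient matrix A and right-hand side b for the system above.
A = np.array([[1.0,  1.0,  1.0],
              [1.0, -1.0,  1.0],
              [1.0,  1.0, -1.0]])
b = np.array([6.0, 2.0, 0.0])

x = np.linalg.solve(A, b)   # solves A @ x == b
print(x)                    # [1. 2. 3.]
```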
Gaussian elimination
The algorithm that powers every linear-algebra library:
- Write the augmented matrix [A | b].
- Use row operations to convert it into upper-triangular form (zeros below the diagonal).
- Back-substitute to read off the solution.
Allowed row operations:
- Swap two rows.
- Scale a row by a nonzero constant.
- Add a multiple of one row to another row.
These operations preserve the solution set.
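Each of these operations is a one-line update on the rows of the augmented matrix. A minimal NumPy sketch (the array name M is chosen here for illustration; it holds the augmented matrix from above):

```python
import numpy as np

# Augmented matrix [A | b] for the system above.
M = np.array([[1.0,  1.0,  1.0, 6.0],
              [1.0, -1.0,  1.0, 2.0],
              [1.0,  1.0, -1.0, 0.0]])

M[[0, 1]] = M[[1, 0]]   # swap two rows (R1 <-> R2)
M[1] = 3.0 * M[1]       # scale a row by a nonzero constant (R2 <- 3*R2)
M[2] = M[2] - M[0]      # add a multiple of one row to another (R3 <- R3 - R1)
```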
Worked example — Gaussian elimination on the same system
Start from the augmented matrix:
| 1 1 1 | 6 |
| 1 −1 1 | 2 |
| 1 1 −1 | 0 |
R2 ← R2 − R1 (zero out the first entry of row 2):
| 1 1 1 | 6 |
| 0 −2 0 | −4 |
| 1 1 −1 | 0 |
R3 ← R3 − R1 (zero out the first entry of row 3):
| 1 1 1 | 6 |
| 0 −2 0 | −4 |
| 0 0 −2 | −6 |
The matrix is now upper-triangular. Back-substitute from the bottom:
- Row 3: −2z = −6 → z = 3.
- Row 2: −2y = −4 → y = 2.
- Row 1: x + y + z = 6 → x + 2 + 3 = 6 → x = 1.
Solution (x, y, z) = (1, 2, 3), matching the elimination pass above.
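For readers who want the algorithm as code, here is a minimal Python sketch of forward elimination plus back-substitution. It assumes NumPy and nonzero pivots (a production solver would also swap rows to avoid dividing by zero); gaussian_eliminate is a name chosen for illustration, not a library function:

```python
import numpy as np

def gaussian_eliminate(A, b):
    """Solve A x = b by forward elimination and back-substitution.
    Assumes nonzero pivots (no partial pivoting)."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)

    # Forward elimination: zero out the entries below each pivot.
    for i in range(n):
        for j in range(i + 1, n):
            factor = M[j, i] / M[i, i]
            M[j] -= factor * M[i]          # R_j <- R_j - factor * R_i

    # Back-substitution: solve from the bottom row up.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

A = np.array([[1, 1, 1], [1, -1, 1], [1, 1, -1]])
b = np.array([6, 2, 0])
print(gaussian_eliminate(A, b))  # [1. 2. 3.]
```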
The identity matrix
The identity matrix I has 1s on the main diagonal and 0s everywhere else. It is the matrix version of the number 1: A · I = A and I · A = A for any matrix A of compatible size. The 3×3 identity is:
| 1 0 0 |
| 0 1 0 |
| 0 0 1 |
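A quick NumPy check of that property (a sketch, reusing the coefficient matrix from above):

```python
import numpy as np

I = np.eye(3)                    # 3x3 identity: 1s on the diagonal, 0s elsewhere
A = np.array([[1.0,  1.0,  1.0],
              [1.0, -1.0,  1.0],
              [1.0,  1.0, -1.0]])

print(np.allclose(A @ I, A))     # True: multiplying by I changes nothing
print(np.allclose(I @ A, A))     # True from the left as well
```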
Why ML uses matrices, not loops
Every neural network layer is an affine transform y = W·x + b: a weight matrix times an input vector, plus a bias vector. Computing that with an explicit for loop over rows would be orders of magnitude
slower. Matrix notation isn’t just compact — it unlocks vectorized hardware acceleration.
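A sketch of the same layer computed both ways, assuming NumPy (the layer sizes here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))   # weight matrix of one layer
x = rng.standard_normal(512)          # input vector
b = rng.standard_normal(256)          # bias vector

# Loop version: one row-times-column dot product per output entry.
y_loop = np.array([W[i] @ x + b[i] for i in range(256)])

# Vectorized version: the whole affine transform as one matrix expression.
y_vec = W @ x + b

print(np.allclose(y_loop, y_vec))     # True: same numbers, far less Python overhead
```

On realistic layer sizes, the vectorized line is the one that dispatches to optimized linear-algebra kernels instead of running interpreted Python per row.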
Common mistakes
These are the traps learners hit most often on this topic. Knowing them in advance is half the fix.
Eliminating the same variable inconsistently
To reduce 3 equations to 2, you must eliminate the same variable from both new pairings. Eliminating y from one pair and z from the other leaves an unsolvable mix.
Dropping the right-hand side when you scale a row
When you multiply a row by 5, every term in that row gets multiplied — coefficients AND the right-hand side. Same trap as 2-variable systems but easier to forget with more terms.
Multiplying matrices in the wrong order
A·B ≠ B·A in general. Order matters. The dimensions must align: if A is m×n and B is n×p, then A·B is m×p.
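A small NumPy sketch of both points (the shapes are illustrative):

```python
import numpy as np

A = np.ones((2, 3))     # 2x3
B = np.ones((3, 4))     # 3x4
print((A @ B).shape)    # (2, 4): inner dimensions match (3 and 3)
# B @ A would raise a ValueError: its inner dimensions (4 and 2) do not match.

C = np.array([[1, 2], [3, 4]])
D = np.array([[0, 1], [1, 0]])
print(np.array_equal(C @ D, D @ C))   # False: even square matrices rarely commute
```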
Practice problems
Try each on paper first. Click Show solution only after you've made a real attempt.
- Problem 1: Solve x + y + z = 6, x − y + z = 2, x + y − z = 0. Show solution
Subtracting pairs gives 2y = 4 and 2z = 6, so y = 2 and z = 3; then x = 1. Answer: (1, 2, 3).
- Problem 2: What does the identity matrix I3 look like? Show solution
1s on the main diagonal and 0s everywhere else: I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]].
- Problem 3: Multiply [[1, 2]] · [[3], [4]]. Show solution
1·3 + 2·4 = 11.
- Problem 4: Why are matrix operations preferred over explicit loops in numerical libraries?
Show solution
Matrix operations dispatch to highly optimized linear-algebra backends that use SIMD and cache-friendly layouts. The speedup over interpreted loops is 100x to 10000x. Always vectorize.
- Problem 5: What does Gaussian elimination produce?
Show solution
Upper-triangular form (zeros below the diagonal). Back-substitution then reads off variables starting from the bottom row.
- Problem 6: Find the dimensions of A = [[1, 2, 3], [4, 5, 6]]. Show solution
A is 2 by 3 (2 rows, 3 columns).
- Problem 7: What entry sits in row 2, column 3 of A = [[1, 2, 3], [4, 5, 6]]? Show solution
Row 2 = [4, 5, 6]. Column 3 of that row = 6.
Practice quiz
- Question 1: How many equations are needed to uniquely solve a 3-variable system (in general)?
- Question 2: What does row reduction (Gaussian elimination) produce?
- Question 3: For the matrix [[1, 2], [3, 4]], what’s the entry in row 2, column 1?
- Question 4: Identity matrix I3 has:
- Question 5: Which row operation is allowed?
- Question 6: If A · I = A for every A, what’s I?
- Question 7: Multiply: [[1, 2]] · [[3], [4]]. Give the scalar result.
- Question 8: Solve x + y + z = 6, x - y + z = 2, x + y - z = 0. Give x as a number.
- Question 9: ML connection: A neural-net layer’s affine transform is:
- Reflection 10: Why is matrix form preferred over scalar systems for solving large problems?
Week 6 recap
You extended elimination to 3 variables, met your first matrices, learned the three legal row operations, and previewed Gaussian elimination — the algorithm every linear-algebra library uses internally. Three trap families fell: the inconsistent-elimination trap (eliminating different variables from different pairs), the scale-and-drop trap (forgetting to multiply the right-hand side when scaling a row), and the index-order trap (writing column before row when identifying entries). Each outcome maps forward: row reduction is the literal algorithm in numerical solvers; the identity matrix becomes the target of Gauss-Jordan reduction; row-by-column multiplication is the building block of every neural-network forward pass. Matrix notation compresses 1000 equations into one expression — that compression is the source of computational speed.
Coming next: Week 7 — Polynomial Operations
Next week leaves linear systems for polynomial arithmetic. You will multiply polynomials with FOIL and the distributive property, drill the special-product patterns (perfect squares, difference of squares, sum and difference of cubes), and identify a polynomial’s degree and leading coefficient. These mechanical operations are the foundation for factoring quadratics, characteristic polynomials in linear algebra, and polynomial-time complexity arguments later.