Week 6 · Intermediate Algebra

6. Systems of 3 Variables & Intro to Matrices

120 min

Before you start

  • Solve 2-variable systems by substitution and elimination
  • Track multi-step elimination work in writing without losing terms
  • Distribute scalars across equations and update both sides consistently
  • Read a 2x2 matrix and identify its rows, columns, and entries

By the end you'll be able to

  • Solve a 3-variable system by reducing to 2-variable systems via consistent elimination
  • Identify the dimensions of a matrix and locate any entry by row and column
  • Apply the three legal row operations (swap, scale, add multiple of one row) without changing the solution set
  • Recognize an upper-triangular matrix and back-substitute from the bottom row
  • Multiply a row vector by a column vector to compute a dot product

Week 6 video coming soon
Read the lesson body below in the meantime.

Systems of 3 Variables & Intro to Matrices

When you go from 2 to 3 variables, the geometric picture changes from “lines in a plane” to “planes in space.” Three planes generally meet at a single point, but they can also miss each other entirely or share a line.

Solving a 3-variable system by elimination

Strategy: eliminate one variable using two of the equations, then eliminate the same variable using a different pair. You’re left with a 2-variable system, which you already know how to solve.

Example:

(1) x + y + z = 6
(2) x − y + z = 2
(3) x + y − z = 0

Add (1) and (2): 2x + 2z = 8. Add (1) and (3): 2x + 2y = 6. Subtract (2) from (1): 2y = 4, so y = 2. Substitute back: x = 1, z = 3.
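The substitution check takes a couple of lines of plain Python, just to confirm the arithmetic:

```python
# Verify that (x, y, z) = (1, 2, 3) satisfies all three equations.
x, y, z = 1, 2, 3

assert x + y + z == 6   # equation (1)
assert x - y + z == 2   # equation (2)
assert x + y - z == 0   # equation (3)
print("(1, 2, 3) checks out")
```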

The pattern is mechanical, but the bookkeeping gets tedious fast. Matrices give us a cleaner way to track it.

Matrices — the bookkeeping device

A matrix is a rectangular grid of numbers. The system above can be written as:

| 1  1  1 | | x |   | 6 |
| 1 −1  1 | | y | = | 2 |
| 1  1 −1 | | z |   | 0 |

This is Ax = b: a coefficient matrix A times an unknown vector x equals a constant vector b. The system’s solution is x = A⁻¹b when A is invertible.

Gaussian elimination

The algorithm that powers every linear-algebra library:

  1. Write the augmented matrix [A | b].
  2. Use row operations to convert it into upper-triangular form (zeros below the diagonal).
  3. Back-substitute to read off the solution.

Allowed row operations:

  • Swap two rows.
  • Scale a row by a nonzero constant.
  • Add a multiple of one row to another row.

These operations preserve the solution set.
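A minimal sketch of the three operations in Python, treating each row of the augmented matrix as a plain list (the function names are illustrative, not from any library):

```python
# Each row holds [coeff_x, coeff_y, coeff_z, rhs].
rows = [
    [1,  1,  1, 6],
    [1, -1,  1, 2],
    [1,  1, -1, 0],
]

def swap(rows, i, j):
    """Swap two rows."""
    rows[i], rows[j] = rows[j], rows[i]

def scale(rows, i, c):
    """Scale row i by a nonzero constant c (hits every entry, rhs included)."""
    rows[i] = [c * v for v in rows[i]]

def add_multiple(rows, i, j, c):
    """Add c times row j to row i."""
    rows[i] = [a + c * b for a, b in zip(rows[i], rows[j])]

# The first steps of the worked example below:
add_multiple(rows, 1, 0, -1)   # R2 <- R2 - R1
add_multiple(rows, 2, 0, -1)   # R3 <- R3 - R1
print(rows[1])  # [0, -2, 0, -4]
print(rows[2])  # [0, 0, -2, -6]
```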

Worked example — Gaussian elimination on the same system

Start from the augmented matrix:

| 1   1   1 | 6 |
| 1  −1   1 | 2 |
| 1   1  −1 | 0 |

R2 ← R2 − R1 (zero out the first entry of row 2):

| 1   1   1 |  6 |
| 0  −2   0 | −4 |
| 1   1  −1 |  0 |

R3 ← R3 − R1 (zero out the first entry of row 3):

| 1   1   1 |  6 |
| 0  −2   0 | −4 |
| 0   0  −2 | −6 |

The matrix is now upper-triangular. Back-substitute from the bottom:

  • Row 3: −2z = −6 → z = 3.
  • Row 2: −2y = −4 → y = 2.
  • Row 1: x + y + z = 6 → x + 2 + 3 = 6 → x = 1.

Solution (x, y, z) = (1, 2, 3), matching the elimination pass above.
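The whole pass — forward elimination, then back-substitution — can be sketched as one short Python function. This is a naive version with no row swaps, so it assumes every pivot is nonzero (a real solver pivots for safety):

```python
def gaussian_solve(aug):
    """Solve a linear system given its augmented matrix [A | b].

    Naive forward elimination + back-substitution; assumes nonzero pivots.
    """
    n = len(aug)
    m = [row[:] for row in aug]          # work on a copy
    # Forward elimination: zero out everything below each pivot.
    for col in range(n):
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            m[r] = [a - factor * b for a, b in zip(m[r], m[col])]
    # Back-substitution, starting from the bottom row.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        known = sum(m[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (m[r][n] - known) / m[r][r]
    return x

aug = [[1, 1, 1, 6],
       [1, -1, 1, 2],
       [1, 1, -1, 0]]
print(gaussian_solve(aug))  # [1.0, 2.0, 3.0]
```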

The identity matrix

The n × n identity matrix I has 1s on the diagonal and 0s elsewhere. It plays the same role for matrix multiplication that 1 plays for ordinary multiplication: AI = IA = A for any matrix A. Multiplying a matrix by its inverse gives the identity: AA⁻¹ = A⁻¹A = I.
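A quick sketch in plain Python, with hand-rolled matmul and identity helpers (a library such as NumPy provides these as np.eye and the @ operator):

```python
def matmul(A, B):
    """Multiply matrices: entry (i, j) is row i of A dotted with column j of B."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def identity(n):
    """n x n matrix: 1s on the diagonal, 0s elsewhere."""
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

A = [[1, 1, 1],
     [1, -1, 1],
     [1, 1, -1]]
I = identity(3)
assert matmul(A, I) == A   # A·I = A
assert matmul(I, A) == A   # I·A = A
```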

Why ML uses matrices, not loops

Every neural network layer computes Wx + b — matrix multiplication plus a bias. A numerical computing library’s matrix-multiply call runs in optimized C code (BLAS / LAPACK). The same operation written as an interpreted for loop over rows would be orders of magnitude slower. Matrix notation isn’t just compact — it unlocks vectorized hardware acceleration.
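For illustration only, here is that affine transform written as the slow, loop-style Python that libraries avoid — it computes the same Wx + b, just without the optimized dispatch:

```python
def affine(W, x, b):
    """Compute y = Wx + b: each output is a row of W dotted with x, plus a bias."""
    return [sum(w * xi for w, xi in zip(row, x)) + bi
            for row, bi in zip(W, b)]

W = [[1, 2],
     [3, 4]]
x = [1, 1]
b = [0, 1]
print(affine(W, x, b))  # [3, 8]
```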

Common mistakes

These are the traps learners hit most often on this topic. Knowing them in advance is half the fix.

  • Eliminating the same variable inconsistently

    To reduce 3 equations to 2, you must eliminate the same variable from both new pairings. Eliminating x from one pair and y from the other leaves an unsolvable mix.

  • Dropping a row when you scale it

    When you multiply a row by 5, every term in that row gets multiplied — coefficients AND the right-hand side. Same trap as 2-variable systems but easier to forget with more terms.

  • Multiplying matrices in the wrong order

    AB ≠ BA in general. Order matters. The dimensions must align: if A is m × n and B is n × p, then AB is m × p.
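A small illustrative helper (not a library function) makes the dimension rule concrete:

```python
def product_shape(shape_a, shape_b):
    """Return the shape of A·B, or raise if the inner dimensions differ."""
    (m, n), (n2, p) = shape_a, shape_b
    if n != n2:
        raise ValueError(f"cannot multiply {m}x{n} by {n2}x{p}")
    return (m, p)

print(product_shape((2, 3), (3, 1)))  # (2, 1)
# product_shape((2, 3), (2, 3)) would raise: inner dimensions 3 and 2 differ.
```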

Practice problems

Try each on paper first. Click Show solution only after you've made a real attempt.

  1. Problem 1
    Solve: x + y + z = 6, x − y + z = 2, x + y − z = 0.
    Show solution

    Subtracting pairs gives 2y = 4 and 2z = 6, so y = 2 and z = 3; then x = 6 − 2 − 3 = 1.

    Answer: (1, 2, 3).

  2. Problem 2
    What does the identity matrix look like?
    Show solution

    1s on the diagonal, 0s everywhere else. For the 3 × 3 case:

    | 1  0  0 |
    | 0  1  0 |
    | 0  0  1 |
  3. Problem 3
    Multiply: [1  2] · [3; 4] (a row vector times a column vector).
    Show solution

    1·3 + 2·4 = 3 + 8 = 11.

  4. Problem 4
    Why are matrix operations preferred over explicit loops in numerical libraries?
    Show solution

    Matrix operations dispatch to highly optimized linear-algebra backends that use SIMD and cache-friendly layouts. The speedup over interpreted loops is 100x to 10000x. Always vectorize.

  5. Problem 5
    What does Gaussian elimination produce?
    Show solution

    Upper-triangular form (zeros below the diagonal). Back-substitution then reads off variables starting from the bottom row.

  6. Problem 6
    Find the dimensions of A = [[1, 2, 3], [4, 5, 6]].
    Show solution

    A is 2 by 3 (2 rows, 3 columns).

  7. Problem 7
    What entry sits in row 2, column 3 of A = [[1, 2, 3], [4, 5, 6]]?
    Show solution

    Row 2 = [4, 5, 6]. Column 3 of that row = 6.

Practice quiz

  1. Question 1
    How many equations are needed to uniquely solve a 3-variable system (in general)?
  2. Question 2
    What does row reduction (Gaussian elimination) produce?
  3. Question 3
    For the matrix [[1, 2], [3, 4]], what’s the entry in row 2 column 1?
  4. Question 4
    Identity matrix I3 has:
  5. Question 5
    Which row operation is allowed?
  6. Question 6
    If A · I = A for every A, what’s I?
  7. Question 7
    Multiply: [[1, 2]] · [[3], [4]]. Give the scalar result.
  8. Question 8
    Solve x + y + z = 6, x - y + z = 2, x + y - z = 0. Give x as a number.
  9. Question 9
    ML connection: A neural-net layer’s affine transform is:
  10. Reflection 10
    Why is matrix form preferred over scalar systems for solving large problems?

Week 6 recap

You extended elimination to 3 variables, met your first matrices, learned the three legal row operations, and previewed Gaussian elimination — the algorithm every linear-algebra library uses internally. Three trap families fell: the inconsistent-elimination trap (eliminating different variables from different pairs), the scale-and-drop trap (forgetting to multiply the right-hand side when scaling a row), and the index-order trap (writing column before row when identifying entries). Each outcome maps forward: row reduction is the literal algorithm in numerical solvers; the identity matrix becomes the target of Gauss-Jordan reduction; row-by-column multiplication is the building block of every neural-network forward pass. Matrix notation compresses 1000 equations into one expression — that compression is the source of computational speed.

Coming next: Week 7 — Polynomial Operations

Next week leaves linear systems for polynomial arithmetic. You will multiply polynomials with FOIL and the distributive property, drill the special-product patterns (perfect squares, difference of squares, sum and difference of cubes), and identify a polynomial’s degree and leading coefficient. These mechanical operations are the foundation for factoring quadratics, characteristic polynomials in linear algebra, and polynomial-time complexity arguments later.
