Week 4 · Intermediate Algebra

4. Absolute Value Equations & Inequalities

120 min

Before you start

  • Translate freely between inequality, interval, and graph notation
  • Apply the flip rule for inequalities when multiplying by a negative
  • Solve compound inequalities and combine using AND or OR
  • Define absolute value as distance from zero on the number line

By the end you'll be able to

  • Solve absolute-value equations by splitting into the positive and negative cases
  • Recognize when an absolute-value equation has no solution because distance cannot be negative
  • Convert |expr| <= k into a single bounded interval (intersection)
  • Convert |expr| >= k into the union of two unbounded rays
  • Connect absolute-value structure to L1 regularization and Mean Absolute Error

Week 4 video coming soon

Read the lesson body below in the meantime.

Absolute Value Inequalities

Absolute value measures distance from zero. An absolute value inequality bounds how far a quantity can be from zero. Two cases — |expr| ≤ k and |expr| ≥ k — produce structurally different solutions.

Definition (piecewise)

For any real number x:

  |x| = x   if x ≥ 0
  |x| = -x  if x < 0

The bars strip the sign — they never produce a negative value. This piecewise form is the reason every absolute-value equation or inequality splits into two cases: one where the inside is non-negative (and the bars do nothing) and one where the inside is negative (and the bars flip the sign).
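The piecewise definition translates directly into code. A minimal Python sketch (the function name piecewise_abs is illustrative), checked against the built-in abs():

```python
def piecewise_abs(x: float) -> float:
    """Absolute value built directly from the piecewise definition:
    return x when the inside is non-negative, flip the sign otherwise."""
    return x if x >= 0 else -x

# The piecewise form agrees with Python's built-in abs() everywhere.
for value in (-3.5, -1, 0, 2, 7.25):
    assert piecewise_abs(value) == abs(value)
```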

The two cases

Case 1: |expression| ≤ k

This is an intersection (AND). The expression must be within distance k of zero, in either direction. Rewrite as a compound inequality:

-k ≤ expression ≤ k

Solve all three sections together to isolate the variable.

Case 2: |expression| ≥ k

This is a union (OR). The expression is at least k away from zero, in one direction or the other. Rewrite as two separate inequalities joined by OR:

expression ≤ -k   OR   expression ≥ k

The solution is two disjoint rays.
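Both equivalences can be confirmed numerically. A small Python sketch, assuming an illustrative bound k = 4 and a grid of sample points:

```python
# Confirm both structural patterns numerically on a grid of sample points.
k = 4
grid = [i / 2 for i in range(-20, 21)]  # x from -10.0 to 10.0 in steps of 0.5

# Case 1: |x| <= k holds exactly when -k <= x <= k (intersection, AND).
assert all((abs(x) <= k) == (-k <= x <= k) for x in grid)

# Case 2: |x| >= k holds exactly when x <= -k or x >= k (union, OR).
assert all((abs(x) >= k) == (x <= -k or x >= k) for x in grid)
```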

Worked example

Solve and graph: |2x - 5| ≤ 7.

Step 1 — recognize the structure. This is the ≤ case (intersection). Rewrite as a compound inequality:

-7 ≤ 2x - 5 ≤ 7

Step 2 — add 5 to all three sections to isolate the variable term:

-2 ≤ 2x ≤ 12

Step 3 — divide all three sections by 2:

-1 ≤ x ≤ 6

In interval notation: [-1, 6].
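A quick brute-force check in plain Python confirms the answer for an inequality of this form, using |2x - 5| ≤ 7 (the same inequality as quiz question 4) as the example:

```python
# Brute-force check of |2x - 5| <= 7 over a fine grid of candidates:
# the x values that satisfy it should fill exactly the interval [-1, 6].
solutions = [x / 10 for x in range(-30, 101) if abs(2 * (x / 10) - 5) <= 7]
print(min(solutions), max(solutions))  # -1.0 6.0
```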

Common mistakes

  1. Dropping the absolute value bars and only solving the positive case. For |2x - 5| ≤ 7 this gives you only x ≤ 6 — you’ve lost the lower bound entirely.
  2. Forgetting to flip the inequality when multiplying or dividing by a negative number. This rule is non-negotiable; if you divide both sides by -2, the inequality direction reverses.
  3. Confusing the ≤ and ≥ structures. They produce different solution shapes — one bounded interval vs. two unbounded rays.

Machine learning relevance

Absolute value mechanics are the direct mathematical foundation for two cornerstone ML techniques:

  • L1 regularization (Lasso): Adds λΣ|w_i| — the sum of the absolute values of the weights, scaled by λ — as a penalty term in the cost function, encouraging weights to shrink to exactly zero. This produces sparse models — most features end up with zero weight, leaving you with a small, interpretable subset.
  • Mean Absolute Error (MAE): (1/n)Σ|y_i - ŷ_i|. Unlike MSE, MAE doesn’t square errors, so it’s more robust to outliers. The “absolute value” intuition you build here shapes how you reason about loss functions later.

Understanding the bounds of absolute values is also required for defining margin boundaries in Support Vector Machines and confidence intervals in statistical inference.
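To make the outlier-robustness claim concrete, here is a minimal Python sketch (with made-up illustrative error values) comparing MAE and MSE on the same residuals:

```python
def mae(errors):
    """Mean Absolute Error: the average of |error| values."""
    return sum(abs(e) for e in errors) / len(errors)

def mse(errors):
    """Mean Squared Error: the average of squared error values."""
    return sum(e ** 2 for e in errors) / len(errors)

clean = [1, -1, 2, -2]
with_outlier = clean + [20]  # add one large error

# MAE grows modestly; MSE is dominated by the squared outlier.
print(mae(clean), mae(with_outlier))  # 1.5 5.2
print(mse(clean), mse(with_outlier))  # 2.5 82.0
```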

Common mistakes

These are the traps learners hit most often on this topic. Knowing them in advance is half the fix.

  • Dropping the absolute-value bars and solving only one case

    An equation like |x| = 5 has TWO solutions: x = 5 AND x = -5. Solving only the positive case loses half the answer.

  • Setting an absolute value equal to a negative

    An equation like |x| = -3 has no solution — distance can’t be negative. Stop and write ‘no solution’ rather than forcing through.

  • Confusing ≤ with ≥ in inequality structure

    |x| ≤ k is one bounded interval, [-k, k]. |x| ≥ k is two disjoint rays, (-∞, -k] ∪ [k, ∞). Memorize the shapes.

Practice problems

Try each on paper first. Click Show solution only after you've made a real attempt.

  1. Problem 1
    Solve: |x - 3| = 5.
    Show solution
    • Case 1: x - 3 = 5, so x = 8.
    • Case 2: x - 3 = -5, so x = -2.

    Answer: x = 8 or x = -2.

  2. Problem 2
    Solve and graph: |x + 1| ≤ 4.
    Show solution
    1. Rewrite: -4 ≤ x + 1 ≤ 4.
    2. Subtract 1: -5 ≤ x ≤ 3.

    Interval: [-5, 3].

  3. Problem 3
    Solve: |x + 2| > 3.
    Show solution
    • x + 2 < -3, so x < -5.
    • x + 2 > 3, so x > 1.

    Interval: (-∞, -5) ∪ (1, ∞).

  4. Problem 4
    Solve: |x| = -4.
    Show solution
    1. An absolute value can never equal a negative number: |x| ≥ 0 for every real x.

    Distance is never negative. No solution.
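The no-solution diagnosis can be confirmed by brute force; a short Python sketch using |x| = -4 (the same equation as quiz question 3) as the example:

```python
# Scan a fine grid of candidates: abs(x) is never negative,
# so |x| = -4 has no real solution.
candidates = [i / 100 for i in range(-1000, 1001)]  # -10.00 to 10.00
assert all(abs(x) >= 0 for x in candidates)
assert not any(abs(x) == -4 for x in candidates)
```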

  5. Problem 5
    Solve: |2x + 4| ≤ 6.
    Show solution
    1. Rewrite: -6 ≤ 2x + 4 ≤ 6.
    2. Subtract 4: -10 ≤ 2x ≤ 2.
    3. Divide by 2: -5 ≤ x ≤ 1.

    Interval: [-5, 1].

  6. Problem 6
    Solve: |x - 4| ≥ 2.
    Show solution
    • x - 4 ≤ -2, so x ≤ 2.
    • x - 4 ≥ 2, so x ≥ 6.

    Interval: (-∞, 2] ∪ [6, ∞).

  7. Problem 7
    Why does L1 regularization in ML use absolute values?
    Show solution

    L1 adds λΣ|w_i| to the cost. The corner at zero gives |w| a constant slope of ±1 on either side, so the penalty’s pull never fades, and gradient descent can push weights to exactly zero, producing sparse models. L2 (using λΣw_i²) lacks the corner — its slope 2w shrinks as w shrinks — so weights only approach zero asymptotically.
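The slope comparison behind this answer can be sketched in a few lines of Python (l1_slope and l2_slope are illustrative helper names):

```python
def l1_slope(w: float) -> float:
    """Slope of |w|: a constant +/-1 away from zero (subgradient 0 at the corner)."""
    return 1.0 if w > 0 else -1.0 if w < 0 else 0.0

def l2_slope(w: float) -> float:
    """Slope of w**2: 2w, which fades as w approaches zero."""
    return 2 * w

for w in (1.0, 0.1, 0.01):
    print(w, l1_slope(w), l2_slope(w))
# The L1 pull stays at 1.0 while the L2 pull decays: 2.0, then 0.2, then 0.02
```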

Practice quiz

  1. Question 1
    Solve: |x| = 7
  2. Question 2
    Solve: |x - 3| = 5
  3. Question 3
    Solve: |x| = -4
  4. Question 4
    Solve: |2x - 5| ≤ 7. Give the interval.
  5. Question 5
    Solve: |x + 2| > 3. Use interval notation.
  6. Question 6
    |3x| = 12 implies:
  7. Question 7
    Solve: |x - 1| + 2 = 6
  8. Question 8
    Which ML loss is built on absolute value?
  9. Question 9
    Which regularizer corresponds to summing absolute values of weights?
  10. Reflection 10
    Why does L1 regularization produce sparse models, while L2 doesn’t?

Week 4 recap

You translated absolute-value equations into the positive and negative cases, decoded the structural pattern that ≤ produces a single bounded interval and ≥ produces a union of two unbounded rays, and connected absolute-value mechanics to L1 regularization and Mean Absolute Error in modeling. The takeaway is that |x| means distance from zero, and distance is never negative — that single fact diagnoses no-solution cases instantly. Three trap families fell this week: the single-case trap (writing only the positive split), the force-it-through trap (assigning a value to |x| = -k instead of declaring no solution), and the structural-pattern confusion (mixing up the interval shape for ≤ versus ≥). The cumulative review reinforced PEMDAS, multi-step isolation, the inequality flip rule, distribution with negatives, and identity-vs-contradiction diagnosis from weeks one through three.

Coming next: Week 5 — Systems of Linear Equations (2 Variables)

Next week you broaden from one variable to two: pairs of linear equations forming a system. You will learn substitution and elimination, the algorithmic precursors to matrix inversion that powers closed-form linear regression and computer graphics. Solutions become points in the plane rather than numbers on a line, and you will classify systems by whether the lines meet at one point, run parallel, or coincide.
