Week 16 · Intermediate Algebra

16. Logarithms & Properties

120 min

Before you start

  • Evaluate exponential expressions for integer and rational exponents
  • Apply rational-exponent and inverse-function notation
  • Recognize the natural log $\ln$ as $\log_e$
  • Convert simple exponential equations using base-matching

By the end you'll be able to

  • Convert between logarithmic form and exponential form in both directions
  • Apply the product rule, quotient rule, and power rule of logarithms to expand or condense expressions
  • Solve exponential equations by taking the log of both sides
  • Solve logarithmic equations by converting to exponential form
  • Identify domain restrictions for log expressions (argument must be positive)

Logarithms & Properties

A logarithm is the inverse of exponentiation: $\log_b(x) = y$ means $b^y = x$. Logs answer the question: “what power do I raise the base to in order to get this number?”

Because $b^x > 0$ for every real $x$ (any positive base raised to any power stays positive), logarithms are defined only for positive arguments. $\log_b(0)$ and the log of any negative number have no real value; they are outside the domain.
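A quick numeric check of both facts, using Python's standard-library `math` module (the specific values are just illustrative):

```python
import math

# log_b(x) = y means b**y = x
print(math.log(8, 2))   # ≈ 3.0, because 2**3 == 8

# The argument must be positive: 0 and negatives raise an error.
try:
    math.log(-5)
except ValueError as err:
    print("no real value:", err)   # math domain error
```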

Two special bases

  • Common log is base 10: $\log_{10}$, often written simply $\log$.
  • Natural log is base $e \approx 2.718$ (i.e., $\ln = \log_e$).

Most ML uses natural log, because $\frac{d}{dx}\ln(x) = \frac{1}{x}$: the cleanest possible derivative.
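That claim is easy to sanity-check with a forward-difference approximation (the point and step size here are arbitrary):

```python
import math

x, h = 2.0, 1e-6
numeric = (math.log(x + h) - math.log(x)) / h   # slope of ln at x
print(numeric, 1 / x)                           # both ≈ 0.5
```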

Inverse properties

$b^{\log_b(x)} = x$ and $\log_b(b^x) = x$: each operation undoes the other. Internalize these. They show up everywhere.
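A round-trip check of both identities (base and value arbitrary):

```python
import math

b, x = 3.0, 7.0
print(b ** math.log(x, b))    # ≈ 7.0, since b^(log_b x) = x
print(math.log(b ** x, b))    # ≈ 7.0, since log_b(b^x) = x
```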

The three core rules

Rule      Identity
Product   $\log_b(xy) = \log_b(x) + \log_b(y)$
Quotient  $\log_b(x/y) = \log_b(x) - \log_b(y)$
Power     $\log_b(x^k) = k\,\log_b(x)$

Logs convert multiplication into addition, division into subtraction, and exponentiation into multiplication. This is why logs are useful for any problem where multiplying many numbers would cause precision issues.
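All three rules are one line each to verify numerically (values arbitrary; base 10 for readability):

```python
import math

x, y, k = 6.0, 2.0, 3.0
print(math.log10(x * y),  math.log10(x) + math.log10(y))   # product:  both ≈ 1.0792
print(math.log10(x / y),  math.log10(x) - math.log10(y))   # quotient: both ≈ 0.4771
print(math.log10(x ** k), k * math.log10(x))               # power:    both ≈ 2.3345
```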

Solving exponential equations

Take the log of both sides (any consistent base; usually $\ln$). For $b^x = c$:

$\ln(b^x) = \ln(c) \Rightarrow x\ln(b) = \ln(c) \Rightarrow x = \ln(c)/\ln(b)$

The middle step is the power rule pulling $x$ out as a coefficient.
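The same steps in code, for the arbitrary example $2^x = 10$:

```python
import math

b, c = 2.0, 10.0
x = math.log(c) / math.log(b)   # x = ln(c) / ln(b)
print(x)                        # ≈ 3.3219
print(b ** x)                   # ≈ 10.0 -- recovers c
```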

Change of base

$\log_b(x) = \frac{\log_c(x)}{\log_c(b)}$ for any valid base $c$. Useful when your calculator only does $\log$ (base 10) and $\ln$ (base $e$).
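Python's two-argument `math.log(x, b)` applies change of base for you; the identity is easy to see by hand (numbers arbitrary):

```python
import math

x, b = 100.0, 2.0
print(math.log(x, b))                   # log_2(100) directly
print(math.log(x) / math.log(b))        # via ln
print(math.log10(x) / math.log10(b))    # via log10
# all three print ≈ 6.6439
```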

ML connection — log-likelihood

Most ML algorithms maximize a likelihood: the probability the model assigns to the observed data. For a dataset of $n$ independent samples, the likelihood is a product of small numbers:

$L = p_1 \cdot p_2 \cdots p_n = \prod_{i=1}^{n} p_i$

If each $p_i$ is small and $n$ is large, this product underflows to zero in floating-point arithmetic. So ML always works with the log-likelihood instead:

$\log(L) = \log(p_1) + \log(p_2) + \cdots + \log(p_n) = \sum_{i=1}^{n} \log(p_i)$

A sum of log-probabilities is numerically stable even for very large $n$.
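A minimal demonstration of the underflow (the probabilities and sample count are made up):

```python
import math

probs = [1e-4] * 1000   # 1000 samples, each assigned probability 1e-4

likelihood = 1.0
for p in probs:
    likelihood *= p
print(likelihood)        # 0.0 -- the true value, 1e-4000, underflows

log_likelihood = sum(math.log(p) for p in probs)
print(log_likelihood)    # ≈ -9210.34, perfectly representable
```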

ML connection — cross-entropy

The cross-entropy loss for classification,

$\mathrm{CE} = -\sum_i y_i \log(\hat{y}_i)$

comes directly from the log-likelihood of the model being correct on the training data. The log here uses the same product/quotient rules you just drilled: when you simplify $\log$ of a softmax probability for cross-entropy, the algebra cancels beautifully, giving the famously simple gradient $\hat{y} - y$ with respect to the logits.
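A sketch of that cancellation, checked numerically against a finite difference (3 classes; all numbers arbitrary):

```python
import math

def softmax(z):
    m = max(z)                                # subtract max for stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def cross_entropy(z, y):
    p = softmax(z)
    return -sum(yi * math.log(pi) for yi, pi in zip(y, p))

z = [2.0, -1.0, 0.5]   # logits
y = [1.0, 0.0, 0.0]    # one-hot true label

# Analytic gradient from the log rules: dCE/dz_i = softmax(z)_i - y_i
analytic = [pi - yi for pi, yi in zip(softmax(z), y)]

# Finite-difference check of each component
h = 1e-6
numeric = [
    (cross_entropy([v + h if j == i else v for j, v in enumerate(z)], y)
     - cross_entropy(z, y)) / h
    for i in range(len(z))
]

print(analytic)   # ≈ [-0.2144, 0.0391, 0.1753]
print(numeric)    # matches to several decimal places
```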

You finished math-1

The 16-week journey from −(−6) to log-likelihood is over. Every concept you touched is load-bearing for the math you’ll meet in trig, calculus, linear algebra, and statistics. Take a break. The next stop on the math track (math-3, Trigonometry) builds on the function and graph intuition you developed here.

Common mistakes

These are the traps learners hit most often on this topic. Knowing them in advance is half the fix.

  • Treating $\log$ like a variable

    $\log(x + y) \ne \log(x) + \log(y)$. Logs only split products, not sums. The product rule is $\log(xy) = \log(x) + \log(y)$; note multiplication on the inside. A numeric counterexample follows this list.

  • Applying the power rule to bases instead of exponents

    $\log(x^k) = k\log(x)$; the exponent comes out as a coefficient. You can't pull a base out the same way: rewriting $\log_b(x)$ in terms of another base is a different (change-of-base) story.

  • Forgetting the domain

    $\log_b(x)$ is defined only for $x > 0$. After solving an equation, discard any candidate that makes the log argument zero or negative.
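As promised above, a numeric counterexample for the log-of-sum trap (values arbitrary):

```python
import math

x, y = 4.0, 5.0
print(math.log(x + y))             # ln(9)  ≈ 2.1972
print(math.log(x) + math.log(y))   # ln(20) ≈ 2.9957 -- not equal
print(math.log(x * y))             # ln(20) ≈ 2.9957 -- the product rule
```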

Practice problems

Try each on paper first. Click Show solution only after you've made a real attempt.

  1. Problem 1
    Convert to exponential form: .
    Show solution

    .

  2. Problem 2
    Evaluate: .
    Show solution

    .

  3. Problem 3
    Use the product rule: expand .
    Show solution

    .

  4. Problem 4
    Solve for : .
    Show solution

    .

  5. Problem 5
    Solve: .
    Show solution

    Candidate violates the log domain. .

  6. Problem 6
    Expand: .
    Show solution

    .

  7. Problem 7
    Solve: .
    Show solution

    .

Practice quiz

  1. Question 1
    $\log_2(8) = ?$
  2. Question 2
    $\ln(e) = ?$
  3. Question 3
    Product rule: $\log(xy) = ?$
  4. Question 4
    Power rule: $\log(x^k) = ?$
  5. Question 5
    Solve: $\log_2(x) = 5$
  6. Question 6
    Expand: $\log(x^2 y)$ using log rules
  7. Question 7
    $\log_a(1) = ?$
  8. Question 8
    Solve: $5e^{0.2t} + 10 = 85$
  9. Question 9
    Why use log-likelihood instead of likelihood in ML?
  10. Reflection 10
    How is cross-entropy related to logarithms?

Week 16 recap

You converted between logarithmic and exponential forms, applied the product, quotient, and power rules to expand and condense expressions, solved exponential equations using natural log, and connected log-likelihood and cross-entropy to the log rules you just drilled. Three trap families fell: the log-of-sum trap (writing $\log(x + y)$ as $\log(x) + \log(y)$), the power-rule misapplication (pulling a base out instead of an exponent), and the domain-violation trap (accepting candidates that make the log argument non-positive). Logs are how ML talks about multiplicative quantities. Each outcome maps directly forward: log-likelihood is the standard objective for maximum-likelihood estimation; cross-entropy is one log identity applied to softmax probabilities; numerical stability through log-domain arithmetic is the defense against underflow that every classification trainer needs.
