Big O Notation

Big O describes how an algorithm’s runtime or memory usage grows as the input size grows. It strips away constants and lower-order terms to focus on the dominant factor.

Why It Matters

Without Big O, you can’t compare algorithms or predict whether code will be fast enough for production data. An O(n^2) solution might work for 100 items but choke on 100,000.

Common Complexities

Big O       Name           Example
O(1)        Constant       Hash table lookup
O(log n)    Logarithmic    Binary search
O(n)        Linear         Scanning an array
O(n log n)  Linearithmic   Merge sort
O(n^2)      Quadratic      Nested loops (bubble sort)
O(2^n)      Exponential    Brute-force subsets
O(n!)       Factorial      Brute-force permutations
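
As a sketch of the logarithmic row: binary search halves the remaining range on every comparison, so it needs roughly log2(n) steps. (The function name and signature here are illustrative, not from any particular library.)

```python
def binary_search(sorted_nums: list[int], target: int) -> int:
    """Return the index of target in a sorted list, or -1 if absent.

    O(log n): each iteration halves the search interval."""
    lo, hi = 0, len(sorted_nums) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_nums[mid] == target:
            return mid
        if sorted_nums[mid] < target:
            lo = mid + 1      # target is in the upper half
        else:
            hi = mid - 1      # target is in the lower half
    return -1
```

Doubling the list length adds only one extra comparison, which is why binary search on a million items takes about 20 steps.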

Key Rules

  1. Drop constants — O(2n) is just O(n)
  2. Drop lower terms — O(n^2 + n) is O(n^2)
  3. Worst case by default — unless stated otherwise, Big O means the worst case
  4. Multiply nested loops — a loop inside a loop over the same input is O(n^2)
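
Rule 4 can be checked directly by counting iterations: a loop of n inside a loop of n executes its body n * n times. This toy counter exists only to make the arithmetic visible.

```python
def count_steps(n: int) -> int:
    """Count how many times the inner body of two nested loops runs."""
    steps = 0
    for i in range(n):        # outer loop: n iterations
        for j in range(n):    # inner loop: n iterations per outer pass
            steps += 1
    return steps              # n * n total, hence O(n^2)
```

For n = 10 this returns 100; multiply the input by 10 and the work multiplies by 100, which is the signature of quadratic growth.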

Code Example

# O(n) — linear scan
def find_max(nums: list[int]) -> int:
    best = nums[0]       # assumes nums is non-empty
    for x in nums:       # runs n times
        if x > best:
            best = x
    return best
 
# O(n^2) — nested loops
def has_duplicate_pair(nums: list[int]) -> bool:
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):  # inner loop depends on outer
            if nums[i] == nums[j]:
                return True
    return False
 
# O(n) using a hash set — much better
def has_duplicate(nums: list[int]) -> bool:
    seen = set()
    for x in nums:
        if x in seen:
            return True
        seen.add(x)
    return False

Space Complexity

Big O also applies to memory. has_duplicate above uses O(n) extra space for the set. The nested-loop version uses O(1) extra space. Often there’s a time-space tradeoff.
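
A minimal sketch of that space distinction, using list reversal (an example of ours, not from the text above): reversing in place swaps elements within the input and needs only O(1) extra memory, while building a reversed copy allocates O(n).

```python
def reverse_in_place(nums: list[int]) -> list[int]:
    """O(1) extra space: swap pairs from both ends, mutating the input."""
    i, j = 0, len(nums) - 1
    while i < j:
        nums[i], nums[j] = nums[j], nums[i]
        i += 1
        j -= 1
    return nums

def reversed_copy(nums: list[int]) -> list[int]:
    """O(n) extra space: allocate a whole new list."""
    return nums[::-1]
```

Both run in O(n) time; the difference is purely in memory use, and whether mutating the caller's list is acceptable.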