Complexity Analysis

How to measure and compare algorithm efficiency using Big O notation.

Why Complexity Matters

Imagine you have two algorithms that solve the same problem. How do you decide which one is better? You could run them and measure the time—but that depends on your computer's speed, the input size, and many other factors.

Complexity analysis gives us a way to compare algorithms independent of hardware. It describes how an algorithm's resource usage grows as the input size increases.

Key insight: The difference between O(n) and O(n²) can mean the difference between 1 second and 11 days for processing 1 million items!

Big O Notation

Big O notation describes the upper bound of an algorithm's growth rate. It focuses on the dominant term and ignores constants.

📐 What Big O Tells Us

  • O(1) — Constant: Same time regardless of input size
  • O(log n) — Logarithmic: Halves the problem each step
  • O(n) — Linear: Time grows proportionally with input
  • O(n log n) — Linearithmic: Efficient sorting algorithms
  • O(n²) — Quadratic: Nested loops over the input
  • O(2ⁿ) — Exponential: Doubles with each additional element
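
To make these growth rates concrete, here is a small sketch (the helper names are ours, not part of any standard library) that counts the basic operations performed by a linear pass and by a nested quadratic pass over the same input size:

```javascript
// Count operations performed by a single loop - O(n)
function linearOps(n) {
    let ops = 0;
    for (let i = 0; i < n; i++) ops++;
    return ops;
}

// Count operations performed by two nested loops - O(n²)
function quadraticOps(n) {
    let ops = 0;
    for (let i = 0; i < n; i++) {
        for (let j = 0; j < n; j++) ops++;
    }
    return ops;
}

// Doubling n doubles the linear count but quadruples the quadratic count
console.log(linearOps(1000), quadraticOps(1000));  // 1000 1000000
console.log(linearOps(2000), quadraticOps(2000));  // 2000 4000000
```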

Time Complexity

Time complexity measures how the running time of an algorithm changes with input size.

O(1)

Constant Time

The algorithm takes the same time regardless of input size.

// Accessing array element by index
function getFirst(arr) {
    return arr[0];  // Always one operation
}

// Hash table lookup
const value = hashMap.get("key");  // O(1) average

O(log n)

Logarithmic Time

The algorithm halves the problem size with each step. Very efficient!

// Binary search - find target in sorted array
function binarySearch(arr, target) {
    let left = 0, right = arr.length - 1;

    while (left <= right) {
        const mid = Math.floor((left + right) / 2);
        if (arr[mid] === target) return mid;
        if (arr[mid] < target) left = mid + 1;
        else right = mid - 1;
    }
    return -1;
}

O(n)

Linear Time

Time grows proportionally with input size. Must look at each element once.

// Find maximum value in array
function findMax(arr) {
    let max = arr[0];
    for (let i = 1; i < arr.length; i++) {
        if (arr[i] > max) max = arr[i];
    }
    return max;
}

O(n²)

Quadratic Time

Nested loops over the input. Gets slow very quickly for large inputs!

// Bubble sort - compare every pair
function bubbleSort(arr) {
    for (let i = 0; i < arr.length; i++) {
        for (let j = 0; j < arr.length - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]];
            }
        }
    }
    return arr;
}
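
The list above also mentions O(n log n), but the examples stop at O(n²). As a sketch, a classic O(n log n) algorithm is merge sort: the array is halved recursively (log n levels of splitting), and each level does O(n) work merging sorted halves:

```javascript
// Merge sort - O(n log n): log n levels of splitting, O(n) merge work per level
function mergeSort(arr) {
    if (arr.length <= 1) return arr;
    const mid = Math.floor(arr.length / 2);
    const left = mergeSort(arr.slice(0, mid));
    const right = mergeSort(arr.slice(mid));

    // Merge the two sorted halves in linear time
    const merged = [];
    let i = 0, j = 0;
    while (i < left.length && j < right.length) {
        merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
    }
    return merged.concat(left.slice(i), right.slice(j));
}
```

Note that unlike the bubble sort above, this version returns a new array rather than sorting in place, which costs O(n) extra space.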

Complexity Comparison

How long does each complexity take for different input sizes? (assuming 1 operation = 1 microsecond)

| Complexity | n = 10 | n = 100 | n = 1,000 | n = 10,000 |
|------------|--------|---------|-----------|------------|
| O(1) | 1 μs | 1 μs | 1 μs | 1 μs |
| O(log n) | 3 μs | 7 μs | 10 μs | 13 μs |
| O(n) | 10 μs | 100 μs | 1 ms | 10 ms |
| O(n log n) | 33 μs | 664 μs | 10 ms | 133 ms |
| O(n²) | 100 μs | 10 ms | 1 sec | 1.7 min |
| O(2ⁿ) | 1 ms | ~4 × 10¹⁶ years | (astronomical) | (astronomical) |
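
The rows above follow directly from the 1-operation-per-microsecond assumption. A quick sketch to reproduce the estimates yourself (the object and function names are ours):

```javascript
// Estimated running time in microseconds, assuming 1 operation = 1 μs
const estimates = {
    "O(1)":       n => 1,
    "O(log n)":   n => Math.log2(n),
    "O(n)":       n => n,
    "O(n log n)": n => n * Math.log2(n),
    "O(n²)":      n => n * n,
};

for (const [name, f] of Object.entries(estimates)) {
    console.log(name, Math.round(f(10000)), "μs");  // e.g. O(n²) gives 100,000,000 μs ≈ 1.7 min
}
```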

Space Complexity

Space complexity measures how much additional memory an algorithm needs as input size grows.

O(1) Space

Uses a fixed amount of extra memory (e.g., a few variables)

sum = a + b

O(n) Space

Memory grows with input size (e.g., copying an array)

copy = [...arr]

💡 Trade-off: Sometimes you can trade space for time. For example, caching results (memoization) uses more memory but avoids recalculating values.
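
As a sketch of that trade-off, memoizing a recursive Fibonacci function turns exponential recomputation into linear time, at the cost of O(n) extra memory for the cache (the function and cache names are ours):

```javascript
// Naive recursion recomputes the same subproblems - exponential time
function fibSlow(n) {
    return n <= 1 ? n : fibSlow(n - 1) + fibSlow(n - 2);
}

// Memoized version - O(n) time, but O(n) extra memory for the cache
function fibFast(n, cache = new Map()) {
    if (n <= 1) return n;
    if (cache.has(n)) return cache.get(n);
    const result = fibFast(n - 1, cache) + fibFast(n - 2, cache);
    cache.set(n, result);
    return result;
}
```

fibSlow(50) would take hours; fibFast(50) finishes instantly because each subproblem is computed once.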

Quick Analysis Tips

1️⃣
Count the loops

One loop = O(n), nested loops = O(n²), triple nested = O(n³)

2️⃣
Drop constants

O(2n) = O(n), O(n + 100) = O(n)

3️⃣
Keep only dominant term

O(n² + n) = O(n²), O(n + log n) = O(n)

4️⃣
Halving = logarithmic

If the problem size halves each step → O(log n)
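
Applying these rules to a concrete function: the sketch below (a made-up example, not from the text) does one O(n) pass followed by an O(n²) pair check, so the total is O(n + n²); dropping the non-dominant term leaves O(n²) overall:

```javascript
// One linear pass + one quadratic pass: O(n + n²) = O(n²)
function hasDuplicateAboveAverage(arr) {
    // O(n): compute the average in a single pass
    const avg = arr.reduce((sum, x) => sum + x, 0) / arr.length;

    // O(n²): check every pair for a duplicate value above the average
    for (let i = 0; i < arr.length; i++) {
        for (let j = i + 1; j < arr.length; j++) {
            if (arr[i] === arr[j] && arr[i] > avg) return true;
        }
    }
    return false;
}
```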

When Complexity Analysis Helps Most

This page is most useful when you are comparing two candidate solutions, reviewing an algorithm during an interview, or trying to predict what will happen when input size grows from hundreds of items to millions. Complexity analysis is also a fast way to explain to teammates why a seemingly simple nested loop can become the dominant production bottleneck.

In practice, you usually combine this with real profiling. Big O gives the scaling behavior; measurement tells you whether the current bottleneck is CPU, memory, disk, or network.

Common Mistakes

  • Focusing only on worst-case notation and ignoring constants, memory pressure, cache effects, or I/O costs when evaluating real systems.
  • Treating an O(n) algorithm as automatically fine without checking how many full passes, network calls, or allocations happen inside each iteration.
  • Assuming O(1) always means instant. Hash lookups, for example, can still be dominated by poor locality, collisions, or surrounding work.

Content adapted from Hello 算法 - Complexity Analysis by Krahets, licensed under CC BY-NC-SA 4.0.