What is Big-O?
Big-O is a way to describe how fast a program grows as the input gets bigger. Think: “If I give it more data, how much slower does it get?” We don’t count exact milliseconds; we count how the work scales.
Why do we use it?
- To compare solutions without running them.
- To predict scalability (will this still work when data is 100× bigger?).
- To communicate clearly in interviews: “This runs in O(n) time.”
How to read it (tiny glossary)
- n = size of the input (items in an array, nodes in a graph, etc.).
- O(1) — constant: takes roughly the same time no matter the input.
- O(n) — linear: time grows directly with n (double input → ~double time).
- O(log n) — logarithmic: grows slowly (binary search halves the work each step).
- O(n log n) — “sort time”: sort + a linear pass.
- O(n^2) — quadratic: nested loops over n items (gets slow fast).
- Bigger ones (like O(2^n)) explode quickly — usually from exhaustive search.
How to estimate Big-O (quick rules)
- One loop over data → usually O(n).
- Nested loops over the same data → usually O(n^2).
- Binary search / halving → O(log n).
- Sorting → O(n log n) (then add any extra linear work).
- HashMap/Set lookups → O(1) on average → one pass is O(n).
- Graph/tree traversal visiting each item once → O(n) (or O(V+E) for graphs).
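The first three rules can be seen side by side in a minimal Python sketch (function names here are illustrative, not from any library):

```python
def contains(nums, target):
    # One loop over the data → O(n) time.
    for x in nums:
        if x == target:
            return True
    return False

def has_duplicate_quadratic(nums):
    # Nested loops over the same data → O(n^2) time.
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] == nums[j]:
                return True
    return False

def has_duplicate_linear(nums):
    # One pass with a set: O(1) average lookups → O(n) time, O(n) space.
    seen = set()
    for x in nums:
        if x in seen:
            return True
        seen.add(x)
    return False
```

Both duplicate checks give the same answer; only their scaling differs, which is exactly what Big-O captures.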
Space complexity (memory)
Same idea, but for extra memory you use:
- Using a few variables → O(1) space.
- Using a map/set that may store many items → up to O(n) space.
- Recursion depth (call stack) also counts.
Concrete examples
Arrays & Two Pointers
- Reverse array, check palindrome, two-sum on sorted: one pass → O(n) time, O(1) space.
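A sketch of the palindrome check with two pointers, assuming the input is a plain string:

```python
def is_palindrome(s):
    # Two pointers move toward each other: one pass → O(n) time, O(1) space.
    left, right = 0, len(s) - 1
    while left < right:
        if s[left] != s[right]:
            return False
        left += 1
        right -= 1
    return True
```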
Sliding Window
- Longest substring w/o repeat; min window substring: move left/right pointers across the string → O(n) time (each char enters/leaves the window once), plus a small map → O(Σ) space.
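For example, longest substring without repeats can be sketched like this (one possible implementation, not the only one):

```python
def longest_unique_substring(s):
    # Each character enters and leaves the window at most once → O(n) time.
    # The map holds at most one entry per distinct character → O(Σ) space.
    last_seen = {}
    left = best = 0
    for right, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1  # jump past the previous occurrence
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best
```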
Hash Map / Set
- Two-sum (unsorted): store seen numbers → O(n) time, O(n) space.
- Group anagrams: if you sort each word as a key → per word O(L log L) → total O(N·L log L). If you use character counts as the key → per word O(L) → total O(N·L) (faster).
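The two-sum pattern in a minimal sketch, returning indices (here an empty list signals “no pair”, an arbitrary convention):

```python
def two_sum(nums, target):
    # One pass storing seen numbers → O(n) time, O(n) space.
    seen = {}  # value → index
    for i, x in enumerate(nums):
        if target - x in seen:
            return [seen[target - x], i]
        seen[x] = i
    return []
```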
Sorting + Intervals
- Merge intervals: sort by start O(n log n), then one sweep O(n) → O(n log n) total.
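A sketch of merge intervals, assuming intervals are `[start, end]` lists:

```python
def merge_intervals(intervals):
    # Sort by start: O(n log n); one linear sweep merges overlaps: O(n).
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)  # overlap → extend last
        else:
            merged.append([start, end])
    return merged
```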
Binary Search
- Lower bound in sorted array: halves the range each step → O(log n).
- “Search the answer” (e.g., ship capacity) = O(log R) checks × linear feasibility check → O(n log R).
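Lower bound (first index with value ≥ target) sketched by hand; Python’s `bisect.bisect_left` does the same thing:

```python
def lower_bound(nums, target):
    # Halve the search range each step → O(log n).
    lo, hi = 0, len(nums)
    while lo < hi:
        mid = (lo + hi) // 2
        if nums[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo  # first index with nums[index] >= target (len(nums) if none)
```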
Prefix Sums
- Build prefix array: O(n). Each range sum query: O(1).
- Count subarrays sum = k: one pass with a map → O(n) time, O(n) space.
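The “count subarrays sum = k” trick in a minimal sketch: keep a running prefix sum and count how often `prefix - k` has appeared before.

```python
def count_subarrays_with_sum(nums, k):
    # One pass with a map of prefix-sum frequencies → O(n) time, O(n) space.
    count = 0
    prefix = 0
    seen = {0: 1}  # the empty prefix has sum 0
    for x in nums:
        prefix += x
        count += seen.get(prefix - k, 0)
        seen[prefix] = seen.get(prefix, 0) + 1
    return count
```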
Stack (Monotonic)
- Valid parentheses: push/pop once per char → O(n) time, O(n) space.
- Next greater element: each index pushed/popped ≤ 1 → O(n) time.
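Next greater element as a sketch; note the inner `while` does not make it O(n^2), because each index is pushed and popped at most once:

```python
def next_greater(nums):
    # Monotonic (decreasing) stack of indices → O(n) time total.
    result = [-1] * len(nums)
    stack = []  # indices whose next greater element is still unknown
    for i, x in enumerate(nums):
        while stack and nums[stack[-1]] < x:
            result[stack.pop()] = x
        stack.append(i)
    return result
```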
Deque (Monotonic Queue)
- Sliding window max: each index in/out ≤ 1 → O(n) time, O(k) space.
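Sliding window maximum sketched with `collections.deque`:

```python
from collections import deque

def sliding_window_max(nums, k):
    # Monotonic (decreasing) deque of indices: each index enters and leaves
    # at most once → O(n) time; the deque holds at most k indices → O(k) space.
    dq = deque()
    result = []
    for i, x in enumerate(nums):
        if dq and dq[0] <= i - k:
            dq.popleft()              # index slid out of the window
        while dq and nums[dq[-1]] <= x:
            dq.pop()                  # smaller values can never be the max
        dq.append(i)
        if i >= k - 1:
            result.append(nums[dq[0]])
    return result
```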
Graphs & Grids
- BFS/DFS over all nodes/edges once → O(V+E) (linear in graph size).
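A BFS sketch, assuming the graph is given as an adjacency list (`{node: [neighbors]}`):

```python
from collections import deque

def bfs_order(graph, start):
    # Each node is enqueued once and each edge examined once → O(V+E).
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nb in graph.get(node, []):
            if nb not in visited:
                visited.add(nb)
                queue.append(nb)
    return order
```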
Trees
- Traversals (inorder, level order): touch each node once → O(n) time; space is O(height) for DFS-style traversals (the call stack) or O(width) for level order (the queue).
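An inorder traversal sketch; for simplicity, nodes here are assumed to be `(left, value, right)` tuples (or `None` for an empty subtree) rather than a class:

```python
def inorder(node, out=None):
    # Each node is visited exactly once → O(n) time.
    # The recursion stack is as deep as the tree's height → O(height) space.
    if out is None:
        out = []
    if node is not None:
        left, value, right = node
        inorder(left, out)
        out.append(value)
        inorder(right, out)
    return out
```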
Backtracking
- Subsets/permutations/combination sum: the output itself can be huge → exponential (e.g., O(2^n)). That’s expected.
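Generating all subsets makes the exponential bound concrete: each element is either in or out, so there are 2^n subsets, and no algorithm that outputs them all can beat O(2^n).

```python
def subsets(nums):
    # Backtracking: include or exclude each element → 2^n subsets total.
    result = []

    def backtrack(i, current):
        if i == len(nums):
            result.append(current[:])
            return
        backtrack(i + 1, current)    # exclude nums[i]
        current.append(nums[i])      # include nums[i]
        backtrack(i + 1, current)
        current.pop()

    backtrack(0, [])
    return result
```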