You understand the problem. You've identified the category. You've chosen appropriate data structures. Now comes the critical question: which algorithmic technique should you apply?
This decision often determines whether you solve the problem at all. Apply dynamic programming to a greedy problem, and you'll write unnecessary complexity. Apply greedy to a DP problem, and you'll get wrong answers. Apply brute force to a problem requiring clever insight, and you'll time out.
This page maps problem characteristics to algorithmic techniques—giving you a systematic framework for selecting the right approach before writing a single line of code.
You will master the art of selecting algorithmic techniques based on problem signals—understanding when to apply greedy, dynamic programming, divide and conquer, backtracking, binary search, graph algorithms, and more. You'll develop the intuition to recognize which technique applies before attempting implementation.
Before diving into specific mappings, let's survey the major algorithmic techniques and understand their fundamental nature. Each technique embodies a different philosophy about how to approach problems.
| Technique | Core Philosophy | When It Excels | Typical Complexity |
|---|---|---|---|
| Brute Force | Try everything systematically | Small inputs, verification, baseline | Exponential or worse |
| Greedy | Local optimal leads to global optimal | Optimization with independent choices | O(n log n) or O(n) |
| Dynamic Programming | Build from overlapping subproblems | Optimization with dependent choices | Polynomial (varied) |
| Divide & Conquer | Split, solve subproblems, merge | Independent subproblems | O(n log n) typically |
| Backtracking | Build incrementally, prune invalid | Constraint satisfaction, enumeration | Exponential (pruned) |
| Binary Search | Eliminate half the search space | Monotonic predicates | O(log n) |
| Two Pointers | Traverse from multiple positions | Sorted/ordered data analysis | O(n) |
| Sliding Window | Maintain window over sequence | Contiguous subarray/substring | O(n) |
| Graph Algorithms | Model as nodes and edges | Connectivity, paths, networks | Varies by algorithm |
| Bit Manipulation | Operate on binary representation | Subsets, XOR properties, flags | O(n) or O(2^n) |
Real solutions frequently combine techniques. Binary search on the answer uses binary search to drive optimization. BFS on implicit graphs combines graph traversal with dynamic state generation. DP with bitmasks combines dynamic programming with bit manipulation for subset problems. Don't think of techniques as mutually exclusive.
Greedy algorithms make the locally optimal choice at each step, hoping these choices lead to a globally optimal solution. When the greedy approach works, it typically yields the simplest and most efficient solution. The challenge is recognizing when it works.
```typescript
// Problem: Select maximum number of non-overlapping activities
// Greedy strategy: Always pick the activity that ends earliest

interface Activity {
  start: number;
  end: number;
}

function maxActivities(activities: Activity[]): Activity[] {
  // Sort by end time - key greedy insight
  const sorted = [...activities].sort((a, b) => a.end - b.end);

  const selected: Activity[] = [];
  let lastEnd = -Infinity;

  for (const activity of sorted) {
    if (activity.start >= lastEnd) {
      selected.push(activity);
      lastEnd = activity.end;
    }
  }

  return selected;
}

// WHY DOES GREEDY WORK HERE?
//
// Exchange argument proof sketch:
// 1. Let OPT be an optimal solution
// 2. Let G be our greedy solution
// 3. Consider first activity where G differs from OPT
// 4. Greedy picked activity ending earlier than OPT's choice
// 5. Swapping OPT's choice for greedy's choice can't reduce solution size
//    (because greedy's ends earlier, leaving more room for future activities)
// 6. Therefore, greedy is at least as good as optimal
//
// This exchange argument is the gold standard for proving greedy correctness
```

Many problems look greedy but aren't. Consider the coin change problem with coins [1, 3, 4] and target 6: greedy gives 4+1+1 (3 coins), but optimal is 3+3 (2 coins). Always verify greedy works—ideally with a proof, or at minimum with careful counterexample checking.
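To see that failure concretely, here's a short sketch comparing the two approaches on exactly that counterexample. The helper names `greedyCoinCount` and `dpCoinCount` are ours, for illustration only:

```typescript
// Greedy: repeatedly take the largest coin that still fits.
function greedyCoinCount(coins: number[], target: number): number {
  const sorted = [...coins].sort((a, b) => b - a);
  let remaining = target;
  let count = 0;
  for (const coin of sorted) {
    while (remaining >= coin) {
      remaining -= coin;
      count++;
    }
  }
  return remaining === 0 ? count : -1; // -1 if greedy gets stuck
}

// DP: dp[x] = fewest coins summing to exactly x.
function dpCoinCount(coins: number[], target: number): number {
  const dp = new Array(target + 1).fill(Infinity);
  dp[0] = 0;
  for (let x = 1; x <= target; x++) {
    for (const coin of coins) {
      if (coin <= x && dp[x - coin] + 1 < dp[x]) {
        dp[x] = dp[x - coin] + 1;
      }
    }
  }
  return dp[target] === Infinity ? -1 : dp[target];
}

// With coins [1, 3, 4] and target 6:
// greedyCoinCount returns 3 (4+1+1), dpCoinCount returns 2 (3+3)
```

Greedy happens to be correct for canonical coin systems like [1, 5, 10, 25], which is exactly why it feels safe—always test against an adversarial coin set before trusting it.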
Dynamic programming applies when a problem has optimal substructure (optimal solutions contain optimal sub-solutions) AND overlapping subproblems (the same sub-solutions are needed multiple times). DP transforms exponential brute force into polynomial time by storing and reusing subproblem solutions.
The DP Design Process:
```text
1. DEFINE THE STATE
   - What information do we need to describe a subproblem?
   - State must capture enough to solve subproblem independently
   - Common patterns: dp[i], dp[i][j], dp[i][j][k], dp[mask]

2. DEFINE THE RECURRENCE
   - How does dp[current_state] relate to dp[smaller_states]?
   - This is the heart of DP - the formula that expresses optimal substructure

3. IDENTIFY BASE CASES
   - What are the smallest subproblems with known answers?
   - Usually: empty sequences, single elements, trivial configurations

4. DETERMINE COMPUTATION ORDER
   - In what order should states be computed?
   - Must compute smaller states before states that depend on them
   - For bottom-up: typically left-to-right, or smaller indices first

5. EXTRACT THE ANSWER
   - Which state(s) contain the final answer?
   - Sometimes: dp[n], or max/min over dp table, or dp[n][target]

6. OPTIMIZE SPACE (if needed)
   - If only recent rows/values needed, reduce space from O(n²) to O(n)
```

| Pattern | State Shape | Example Problems | Recurrence Flavor |
|---|---|---|---|
| Linear DP | dp[i] | House robber, Climb stairs, Fibonacci | dp[i] = f(dp[i-1], dp[i-2], ...) |
| Two-sequence DP | dp[i][j] | LCS, Edit distance, Interleaving strings | dp[i][j] = f(dp[i-1][j], dp[i][j-1], dp[i-1][j-1]) |
| Interval DP | dp[i][j] | Matrix chain, Burst balloons, Palindrome partition | dp[i][j] = f(dp[i][k], dp[k][j]) for k in [i,j] |
| Knapsack DP | dp[i][w] | 0/1 Knapsack, Subset sum, Coin change | dp[i][w] = max(dp[i-1][w], dp[i-1][w-wi] + vi) |
| Grid DP | dp[i][j] | Unique paths, Minimum path sum, Dungeon game | dp[i][j] = f(dp[i-1][j], dp[i][j-1]) |
| Bitmask DP | dp[mask] | TSP, Assignment, Subset with constraints | dp[mask] = f(dp[mask ^ bit]) |
| Digit DP | dp[pos][tight][...] | Count numbers with property | Process digit by digit with constraints |
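As a minimal illustration of the linear-DP row above, here is House Robber (maximize the sum of non-adjacent elements) worked through the design process, including the space optimization down to two rolling values. This is a sketch of the standard formulation, not the only one:

```typescript
// STATE: dp[i] = best total using houses 0..i
// RECURRENCE: dp[i] = max(dp[i-1], dp[i-2] + nums[i])
// BASE CASES: empty prefix = 0
// ORDER: left to right
// ANSWER: dp[n-1]
// SPACE: only the last two values are needed -> two rolling variables
function rob(nums: number[]): number {
  let skip = 0; // best total if the previous house was NOT robbed (dp[i-2])
  let take = 0; // best total up to the previous house (dp[i-1])
  for (const num of nums) {
    const newTake = Math.max(take, skip + num);
    skip = take;
    take = newTake;
  }
  return take;
}
```

For `[2, 7, 9, 3, 1]` this yields 12 (rob houses with values 2, 9, and 1), in O(n) time and O(1) space.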
```typescript
// Problem: Minimum operations to transform word1 into word2
// Operations: insert, delete, replace (each costs 1)

function minDistance(word1: string, word2: string): number {
  const m = word1.length;
  const n = word2.length;

  // STATE: dp[i][j] = min operations to transform word1[0..i-1] to word2[0..j-1]
  const dp: number[][] = Array(m + 1).fill(null)
    .map(() => Array(n + 1).fill(0));

  // BASE CASES: transforming to/from empty string
  for (let i = 0; i <= m; i++) dp[i][0] = i; // delete all
  for (let j = 0; j <= n; j++) dp[0][j] = j; // insert all

  // RECURRENCE
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      if (word1[i-1] === word2[j-1]) {
        // Characters match: no operation needed
        dp[i][j] = dp[i-1][j-1];
      } else {
        dp[i][j] = 1 + Math.min(
          dp[i-1][j],   // delete from word1
          dp[i][j-1],   // insert into word1
          dp[i-1][j-1]  // replace in word1
        );
      }
    }
  }

  // ANSWER: transform entire word1 to entire word2
  return dp[m][n];
}

// SPACE OPTIMIZATION: Only need previous row
// Can reduce from O(m*n) to O(min(m,n)) space
```

Divide and conquer splits a problem into independent subproblems, solves them recursively, and combines the results. Unlike DP, subproblems don't overlap—each subproblem is solved exactly once. This leads to efficient algorithms with elegant recursive structure.
The Master Theorem for D&C Complexity:
For recurrences of the form T(n) = aT(n/b) + O(n^d), compare d with log_b(a):

- If d > log_b(a): T(n) = O(n^d) — the combine step dominates
- If d = log_b(a): T(n) = O(n^d log n) — work is balanced across levels
- If d < log_b(a): T(n) = O(n^(log_b(a))) — the recursion dominates

Most D&C algorithms have a = 2, b = 2, and an O(n) merge step (so d = 1 = log_2(2)), giving O(n log n).
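One way to internalize the case analysis is a tiny classifier. This is a sketch only—the exact floating-point equality check is fragile when log_b(a) is irrational, so treat it as illustrative rather than robust:

```typescript
// Classify T(n) = a*T(n/b) + O(n^d) by the three Master Theorem cases.
function masterTheorem(a: number, b: number, d: number): string {
  const critical = Math.log(a) / Math.log(b); // log_b(a)
  if (d > critical) return `O(n^${d})`;           // combine step dominates
  if (d === critical) return `O(n^${d} log n)`;   // balanced (exact match only)
  return `O(n^${critical.toFixed(3)})`;           // recursion dominates
}
```

For merge sort (a=2, b=2, d=1) this reports O(n^1 log n); for Strassen's multiply (a=7, b=2, d=2) it reports O(n^2.807), matching the table below.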
| Algorithm | Split | Conquer | Combine | Complexity |
|---|---|---|---|---|
| Merge Sort | Two halves | Sort each half | Merge sorted halves | O(n log n) |
| Quick Sort | Partition around pivot | Sort each part | Already in place | O(n log n) avg |
| Binary Search | Eliminate half | Search appropriate half | N/A | O(log n) |
| Closest Pair | Split by x-coordinate | Find closest in each half | Check strip between | O(n log n) |
| Strassen's Multiply | Split matrices into quadrants | 7 recursive multiplies | Combine with adds | O(n^2.807) |
| Karatsuba Multiply | Split number into halves | 3 recursive multiplies | Combine with shifts/adds | O(n^1.585) |
| Count Inversions | Two halves | Count in each half | Count cross inversions | O(n log n) |
```typescript
// Problem: Count pairs (i,j) where i < j but arr[i] > arr[j]
// Insight: During merge sort, we naturally find these pairs

function countInversions(arr: number[]): number {
  const temp = new Array(arr.length);
  return mergeSort(arr, temp, 0, arr.length - 1);
}

function mergeSort(arr: number[], temp: number[], left: number, right: number): number {
  let inversions = 0;
  if (left < right) {
    const mid = Math.floor((left + right) / 2);

    // DIVIDE: count inversions in each half
    inversions += mergeSort(arr, temp, left, mid);
    inversions += mergeSort(arr, temp, mid + 1, right);

    // COMBINE: count split inversions during merge
    inversions += merge(arr, temp, left, mid, right);
  }
  return inversions;
}

function merge(arr: number[], temp: number[], left: number, mid: number, right: number): number {
  let i = left;    // Left subarray pointer
  let j = mid + 1; // Right subarray pointer
  let k = left;    // Merged array pointer
  let inversions = 0;

  while (i <= mid && j <= right) {
    if (arr[i] <= arr[j]) {
      temp[k++] = arr[i++];
    } else {
      // arr[i] > arr[j]: all remaining elements in left half
      // form inversions with arr[j]
      inversions += (mid - i + 1);
      temp[k++] = arr[j++];
    }
  }

  // Copy remaining elements
  while (i <= mid) temp[k++] = arr[i++];
  while (j <= right) temp[k++] = arr[j++];

  // Copy back to original array
  for (let i = left; i <= right; i++) arr[i] = temp[i];

  return inversions;
}
```

Backtracking builds solutions incrementally, abandoning a candidate ("backtracking") as soon as it determines the candidate cannot lead to a valid solution. It's essentially smart brute force—exploring the solution space systematically while pruning invalid paths.
```text
function backtrack(currentState, choices):
    if isComplete(currentState):
        processSolution(currentState)   // Found valid solution
        return

    for choice in choices:
        if isValid(currentState, choice):        // Pruning step
            makeChoice(currentState, choice)
            backtrack(currentState, remainingChoices)
            undoChoice(currentState, choice)     // Backtrack
```

Key insight: The pruning in isValid() is what makes backtracking efficient. Without pruning, it's just brute force enumeration.

| Problem | State | Choices | Pruning Condition |
|---|---|---|---|
| N-Queens | Partial board placement | Column for next row | No queen attacks exist |
| Sudoku | Partial grid | 1-9 for next empty cell | No row/col/box conflict |
| Permutations | Partial permutation | Unused elements | Element not already used |
| Subsets | Current subset | Include/exclude next element | None (or sum <= target) |
| Combination Sum | Current combination, remaining sum | Numbers >= last used | Remaining sum >= 0 |
| Word Search | Path in grid, matched prefix | Adjacent unvisited cells | Cell matches next character |
| Palindrome Partition | Current partition, remaining string | All prefix cuts | Prefix is palindrome |
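The permutations row above maps directly onto the pseudocode template: the state is the partial permutation, the choices are the unused elements, and the pruning step skips anything already used. One possible sketch:

```typescript
// Generate all permutations of items via backtracking.
function permute<T>(items: T[]): T[][] {
  const results: T[][] = [];
  const current: T[] = [];
  const used = new Array(items.length).fill(false);

  function backtrack(): void {
    if (current.length === items.length) {
      results.push([...current]); // complete permutation found
      return;
    }
    for (let i = 0; i < items.length; i++) {
      if (used[i]) continue;      // PRUNING: element already used
      used[i] = true;             // make choice
      current.push(items[i]);
      backtrack();
      current.pop();              // undo choice (backtrack)
      used[i] = false;
    }
  }

  backtrack();
  return results;
}
```

For n items this explores n! complete permutations; the `used` check prevents revisiting elements rather than shrinking the asymptotic count.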
```typescript
// Problem: Place N queens on NxN board such that no two attack each other

function solveNQueens(n: number): string[][] {
  const solutions: string[][] = [];
  const board: string[][] = Array(n).fill(null)
    .map(() => Array(n).fill('.'));

  // Track which columns and diagonals are under attack
  const cols = new Set<number>();
  const diag1 = new Set<number>(); // row - col
  const diag2 = new Set<number>(); // row + col

  function backtrack(row: number): void {
    // Base case: all queens placed
    if (row === n) {
      solutions.push(board.map(r => r.join('')));
      return;
    }

    // Try each column in current row
    for (let col = 0; col < n; col++) {
      // PRUNING: check if position is safe
      if (cols.has(col) || diag1.has(row - col) || diag2.has(row + col)) {
        continue;
      }

      // Make choice
      board[row][col] = 'Q';
      cols.add(col);
      diag1.add(row - col);
      diag2.add(row + col);

      // Recurse to next row
      backtrack(row + 1);

      // Undo choice (backtrack)
      board[row][col] = '.';
      cols.delete(col);
      diag1.delete(row - col);
      diag2.delete(row + col);
    }
  }

  backtrack(0);
  return solutions;
}

// Time complexity: O(N!) due to pruning
// Without pruning would be O(N^N)
```

Binary search is far more powerful than simple sorted-array lookup. The core insight: any monotonic function over an ordered domain can be binary searched. This enables binary search on answers, on capacities, on time values, and many other creative applications.
Binary search on answer pattern:
When asked to minimize or maximize something, consider: can I check if value X is achievable? If this check has monotonic behavior (once achievable, always achievable for larger/smaller values), use binary search.
```typescript
// Problem: Koko has n piles of bananas. Guards return in h hours.
// What's the minimum eating speed (bananas/hour) to finish in time?
// Each hour, Koko can eat at most k bananas from one pile.

function minEatingSpeed(piles: number[], h: number): number {
  // INSIGHT: If speed k works, any speed > k also works
  // This is the monotonic property enabling binary search!

  const canFinish = (speed: number): boolean => {
    let hoursNeeded = 0;
    for (const pile of piles) {
      hoursNeeded += Math.ceil(pile / speed);
    }
    return hoursNeeded <= h;
  };

  // Binary search on answer (eating speed)
  // Range: [1, max pile size]
  let lo = 1;
  let hi = Math.max(...piles);

  while (lo < hi) {
    const mid = lo + Math.floor((hi - lo) / 2);
    if (canFinish(mid)) {
      hi = mid;     // mid works, try smaller
    } else {
      lo = mid + 1; // mid doesn't work, need larger
    }
  }

  return lo;
}

// Key insight: Don't search through bananas.
// Search through possible speeds, using feasibility check.
```

| Variation | What You're Searching | Feasibility Check | Example |
|---|---|---|---|
| Classic | Position of element | Compare with target | Find element in sorted array |
| Lower bound | First position ≥ target | Compare with target | First occurrence of value |
| Upper bound | First position > target | Compare with target | Last occurrence of value |
| On answer | Minimum/maximum satisfying value | Custom feasibility function | Min days to ship, Koko's bananas |
| On rotated array | Actual position after rotation | Compare with endpoints | Search in rotated array |
| On 2D matrix | Value in sorted matrix | Row and column reasoning | Search 2D matrix |
| Peak finding | Local maximum | Compare with neighbors | Find peak element |
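As a concrete sketch of the lower-bound row, here is one common half-open `[lo, hi)` formulation. The function name `lowerBound` is ours; TypeScript's standard library has no built-in equivalent:

```typescript
// lowerBound: first index i with arr[i] >= target, for arr sorted ascending.
// Returns arr.length if every element is smaller than target.
function lowerBound(arr: number[], target: number): number {
  let lo = 0;
  let hi = arr.length; // search space is the half-open range [lo, hi)
  while (lo < hi) {
    const mid = lo + Math.floor((hi - lo) / 2);
    if (arr[mid] >= target) {
      hi = mid;     // mid might be the answer; keep it in range
    } else {
      lo = mid + 1; // mid is too small; discard the left half
    }
  }
  return lo;
}
```

The upper-bound variant is identical with `>` in place of `>=`; together they bracket all occurrences of a value, giving first/last occurrence in O(log n).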
Two pointers and sliding window are fundamental techniques for processing sequences efficiently. They transform O(n²) brute force into O(n) by cleverly maintaining state as pointers move.
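Before the full sliding-window example, here is the simplest two-pointer pattern: pair-with-sum on a sorted array. It runs in O(n) because each iteration permanently discards one element from consideration (a sketch of the standard technique):

```typescript
// Two pointers on a sorted array: does any pair sum to target?
// Brute force checks all O(n^2) pairs; this makes one O(n) pass.
function hasPairWithSum(sorted: number[], target: number): boolean {
  let left = 0;
  let right = sorted.length - 1;
  while (left < right) {
    const sum = sorted[left] + sorted[right];
    if (sum === target) return true;
    if (sum < target) left++; // need a bigger sum: advance left pointer
    else right--;             // need a smaller sum: retreat right pointer
  }
  return false;
}
```

The correctness argument mirrors the monotonicity reasoning from binary search: moving `left` rightward only increases the sum, moving `right` leftward only decreases it, so no valid pair is ever skipped.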
```typescript
// Problem: Find minimum window in s that contains all characters of t

function minWindow(s: string, t: string): string {
  if (t.length > s.length) return "";

  // Count characters we need
  const need = new Map<string, number>();
  for (const c of t) {
    need.set(c, (need.get(c) || 0) + 1);
  }

  const window = new Map<string, number>();
  let have = 0;               // Characters with satisfied count
  const required = need.size; // Distinct characters needed

  let left = 0;
  let minLen = Infinity;
  let minStart = 0;

  for (let right = 0; right < s.length; right++) {
    // Expand window
    const c = s[right];
    window.set(c, (window.get(c) || 0) + 1);

    // Check if this character's requirement is now satisfied
    if (need.has(c) && window.get(c) === need.get(c)) {
      have++;
    }

    // Contract window while valid
    while (have === required) {
      // Update answer
      if (right - left + 1 < minLen) {
        minLen = right - left + 1;
        minStart = left;
      }

      // Remove from left
      const leftChar = s[left];
      window.set(leftChar, window.get(leftChar)! - 1);
      if (need.has(leftChar) && window.get(leftChar)! < need.get(leftChar)!) {
        have--;
      }
      left++;
    }
  }

  return minLen === Infinity ? "" : s.substring(minStart, minStart + minLen);
}
```

Graph problems require matching the question to the right specialized algorithm. The choice depends on what you're computing (paths, components, ordering) and the graph's properties (weighted, directed, DAG).
| Problem Type | Graph Type | Algorithm | Complexity |
|---|---|---|---|
| Shortest path (unweighted) | Any | BFS | O(V + E) |
| Shortest path (non-negative weights) | Weighted | Dijkstra | O((V+E) log V) |
| Shortest path (negative weights) | Weighted | Bellman-Ford | O(VE) |
| All-pairs shortest path | Any | Floyd-Warshall | O(V³) |
| Minimum spanning tree | Weighted undirected | Prim or Kruskal | O(E log V) |
| Cycle detection | Directed | DFS with colors | O(V + E) |
| Cycle detection | Undirected | DFS or Union-Find | O(V + E) or O(Eα(V)) |
| Topological sort | DAG | Kahn's BFS or DFS | O(V + E) |
| Strongly connected components | Directed | Kosaraju or Tarjan | O(V + E) |
| Articulation points/bridges | Undirected | Tarjan's algorithm | O(V + E) |
| Bipartite check | Any | BFS/DFS 2-coloring | O(V + E) |
| Maximum flow | Flow network | Ford-Fulkerson / Edmonds-Karp | O(VE²) |
Many problems are graph problems in disguise. States are nodes, transitions are edges. 'Minimum button presses' = shortest path. 'Can we reach configuration X?' = reachability. 'Order of tasks with dependencies' = topological sort. Once you see the graph structure, standard algorithms apply.
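To make the implicit-graph idea concrete, here is a hedged sketch of a made-up mini-problem: turn `start` into `target` using only the operations +1 and ×2 (with an arbitrary `limit` cap of our own, to keep the state space finite). The states are integers, the edges are the two operations, and "minimum operations" becomes BFS shortest path:

```typescript
// BFS over an implicit graph of integer states.
// Returns the minimum number of +1 / *2 operations, or -1 if unreachable.
function minOps(start: number, target: number, limit = 10000): number {
  if (start === target) return 0;
  const visited = new Set<number>([start]);
  let frontier = [start];
  let steps = 0;

  while (frontier.length > 0) {
    steps++;
    const next: number[] = [];
    for (const state of frontier) {
      for (const neighbor of [state + 1, state * 2]) { // implicit edges
        if (neighbor === target) return steps;
        if (neighbor > 0 && neighbor <= limit && !visited.has(neighbor)) {
          visited.add(neighbor);
          next.push(neighbor);
        }
      }
    }
    frontier = next; // advance one BFS level
  }
  return -1; // target unreachable within the cap
}
```

No adjacency list is ever built—neighbors are generated on demand. This is the same shape as word-ladder, lock-combination, and puzzle-state problems.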
```typescript
// Decision tree for shortest path problems

function chooseShortestPathAlgorithm(graph: GraphProperties): string {
  // Step 1: How many source-destination pairs?
  if (graph.allPairs) {
    return "Floyd-Warshall O(V³)";
  }

  // Step 2: Single source - check edge weights
  if (!graph.hasWeights) {
    return "BFS O(V+E)"; // All edges weight 1
  }

  if (graph.hasNegativeWeights) {
    if (graph.needNegativeCycleDetection) {
      return "Bellman-Ford O(VE)";
    }
    // For DAG with negative weights: topological sort + relaxation
    if (graph.isDAG) {
      return "Topological sort + relaxation O(V+E)";
    }
    return "Bellman-Ford O(VE)";
  }

  // Positive weights only
  if (graph.isDense) {
    return "Dijkstra with array O(V²)";
  }
  return "Dijkstra with heap O((V+E) log V)";
}

interface GraphProperties {
  allPairs: boolean;
  hasWeights: boolean;
  hasNegativeWeights: boolean;
  needNegativeCycleDetection: boolean;
  isDAG: boolean;
  isDense: boolean; // E ≈ V²
}
```

Advanced problems often require combining multiple techniques. Recognizing these combination patterns dramatically expands your problem-solving toolkit.
```typescript
// Problem: Split array into m subarrays to minimize maximum sum
// This is "Painter's Partition" / "Book Allocation" pattern

function splitArray(nums: number[], m: number): number {
  // INSIGHT: If we can achieve max sum X with m parts, we can achieve
  // any max sum > X too. Monotonic property -> Binary Search!

  // Binary search on the answer (maximum subarray sum)
  let lo = Math.max(...nums);            // At minimum, largest element
  let hi = nums.reduce((a, b) => a + b); // At maximum, entire array

  // Greedy feasibility check: can we split with max sum <= limit?
  const canSplit = (maxSum: number): boolean => {
    let parts = 1;
    let currentSum = 0;
    for (const num of nums) {
      if (currentSum + num > maxSum) {
        // Start new part
        parts++;
        currentSum = num;
        if (parts > m) return false;
      } else {
        currentSum += num;
      }
    }
    return true;
  };

  // Binary search for minimum achievable max sum
  while (lo < hi) {
    const mid = lo + Math.floor((hi - lo) / 2);
    if (canSplit(mid)) {
      hi = mid;     // Try smaller max sum
    } else {
      lo = mid + 1; // Need larger max sum
    }
  }

  return lo;
}

// The combination: Binary Search (outer) + Greedy (inner check)
// Binary search provides O(log(sum)) iterations
// Each greedy check is O(n)
// Total: O(n log(sum))
```

We've developed a comprehensive framework for selecting algorithmic techniques based on problem characteristics. This skill, combined with data structure selection from the previous page, forms the core of systematic problem-solving.
What's next:
The final page of this module focuses on building pattern recognition muscle—developing the intuition through practice that makes these mappings automatic rather than deliberate.
You now have a systematic framework for mapping problem types to algorithmic techniques. The key is practice: as you solve more problems, these mappings become intuitive pattern matches rather than deliberate analysis.