A salesman needs to visit n cities exactly once each, then return home. Which route minimizes total travel distance?
This simple question—the Traveling Salesman Problem (TSP)—has captivated mathematicians, computer scientists, and operations researchers for the better part of a century. It is trivially easy to state: just find the shortest tour. Yet despite that simplicity, TSP has resisted every attempt at an efficient exact solution.
TSP occupies a unique place in computer science. It was among the first problems proven NP-hard. It has driven the development of combinatorial optimization, approximation algorithms, and metaheuristics. It serves as a benchmark for new algorithmic techniques. And it appears, often in disguise, in applications far beyond route planning—from circuit board manufacturing to DNA sequencing to telescope scheduling.
Understanding TSP means understanding the fundamental limits of efficient computation.
By the end of this page, you will understand: (1) The formal definition of TSP and its variants, (2) Why TSP is NP-hard and what this means practically, (3) Exact algorithms including dynamic programming, (4) Approximation algorithms with provable guarantees, (5) Heuristics used for large instances, and (6) How to recognize TSP-like problems in real applications.
The Traveling Salesman Problem (Decision Version):
Given a set of n cities, distances between each pair of cities, and a budget B, is there a tour that visits each city exactly once, returns to the starting city, and has total length ≤ B?
The Traveling Salesman Problem (Optimization Version):
Given a set of n cities and distances between each pair, find the tour that visits each city exactly once, returns to start, and minimizes total travel distance.
Graph-Theoretic Formulation:
Given a complete weighted graph G = (V, E) with |V| = n and edge weights w(u, v), find a Hamiltonian cycle (a cycle that visits every vertex exactly once) of minimum total weight.
Notice: TSP asks for a minimum-weight Hamiltonian cycle. Finding whether a Hamiltonian cycle exists is already NP-complete (as we saw). Requiring minimum weight doesn't make it easier!
```
Consider 4 cities with the following distance matrix:

          A    B    C    D
      ┌─────────────────────┐
    A │   0   10   15   20  │
    B │  10    0   35   25  │
    C │  15   35    0   30  │
    D │  20   25   30    0  │
      └─────────────────────┘

This is a SYMMETRIC TSP: distance(u,v) = distance(v,u)

Possible tours (starting and ending at A):

  Tour 1: A → B → C → D → A = 10 + 35 + 30 + 20 = 95
  Tour 2: A → B → D → C → A = 10 + 25 + 30 + 15 = 80  ✓ Optimal!
  Tour 3: A → C → B → D → A = 15 + 35 + 25 + 20 = 95
  Tour 4: A → C → D → B → A = 15 + 30 + 25 + 10 = 80  ✓ Same as Tour 2 (reversed)
  Tour 5: A → D → B → C → A = 20 + 25 + 35 + 15 = 95
  Tour 6: A → D → C → B → A = 20 + 30 + 35 + 10 = 95

For n = 4 cities, there are (n-1)!/2 = 3!/2 = 3 distinct tours
(symmetric TSP - each tour equals its reverse)

For n = 10: 181,440 tours
For n = 20: ~60 quadrillion tours
For n = 50: roughly 3 × 10⁶² tours
```

Important Variants:
1. Symmetric vs. Asymmetric: In symmetric TSP, distance(u, v) = distance(v, u) for every pair of cities. In asymmetric TSP (ATSP), distances may differ by direction (one-way streets, direction-dependent airfares).
2. Metric vs. Non-Metric: In metric TSP, distances satisfy the triangle inequality: distance(u, w) ≤ distance(u, v) + distance(v, w) for all cities u, v, w. Non-metric instances place no such restriction.
3. Special Cases: Euclidean TSP (cities are points in the plane with straight-line distances) remains NP-hard but admits a polynomial-time approximation scheme; highly structured cases, such as points in convex position, can be solved in polynomial time.
The distinction between metric and non-metric TSP is crucial for approximation. Metric TSP admits a 1.5-approximation algorithm (Christofides). Non-metric TSP cannot be approximated within ANY constant factor unless P = NP. Always identify which variant your problem represents!
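To make that check concrete, here is a minimal sketch of a triangle-inequality test. The helper name `isMetric` is illustrative (not from any library); it simply examines all triples in O(n³) time, which is fine for a few hundred cities.

```typescript
/**
 * Check whether a distance matrix satisfies the triangle inequality:
 * d(i, k) <= d(i, j) + d(j, k) for every triple (i, j, k).
 * Illustrative sketch - O(n³) time.
 */
function isMetric(distances: number[][]): boolean {
  const n = distances.length;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      for (let k = 0; k < n; k++) {
        if (distances[i][k] > distances[i][j] + distances[j][k]) {
          return false; // Found a violating triple
        }
      }
    }
  }
  return true;
}
```

Interestingly, the 4-city example above is not metric: distance(B, C) = 35 exceeds distance(B, A) + distance(A, C) = 10 + 15 = 25.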
The fundamental difficulty of TSP lies in the explosive growth of possible tours.
Counting Tours:
For n cities, a tour is a cyclic permutation. Starting city is irrelevant (we can always relabel). So we have (n-1)! orderings of the remaining cities.
For symmetric TSP, each tour equals its reverse, giving (n-1)!/2 distinct tours.
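As a quick sketch of how fast these counts explode, the snippet below computes (n−1)!/2 exactly with BigInt and estimates brute-force time at 10⁹ tour evaluations per second, the same assumptions used in the table that follows. The function names `distinctTours` and `bruteForceSeconds` are illustrative.

```typescript
// Number of distinct tours for symmetric TSP: (n - 1)! / 2
function distinctTours(n: number): bigint {
  let fact = 1n;
  for (let i = 2n; i <= BigInt(n - 1); i++) fact *= i;
  return fact / 2n; // (n - 1)! is even for n >= 3
}

// Rough brute-force time assuming 10^9 tour evaluations per second
function bruteForceSeconds(n: number): number {
  return Number(distinctTours(n)) / 1e9;
}

for (const n of [5, 10, 15, 20, 25, 30]) {
  console.log(`n = ${n}: ${distinctTours(n)} tours, ~${bruteForceSeconds(n).toExponential(2)} s`);
}
```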
Growth Rate:
| Cities (n) | Distinct Tours (Symmetric) | Brute Force Time @ 10⁹ tours/sec |
|---|---|---|
| 5 | 12 | < 1 microsecond |
| 10 | 181,440 | < 1 millisecond |
| 15 | 43,589,145,600 | 44 seconds |
| 20 | 6.08 × 10¹⁶ | ~1.9 years |
| 25 | 3.1 × 10²³ | ~9.8 million years |
| 30 | 4.4 × 10³⁰ | 1.4 × 10¹⁴ years |
| 50 | 3.0 × 10⁶² | ~10³⁶ × age of the universe |
The Impossibility of Brute Force:
At n = 20, brute force already takes years at a billion tours per second; at n = 25 it takes millions of years, and at n = 30 the age of the universe is insufficient. This isn't a matter of faster hardware—the exponential growth overwhelms any conceivable speedup.
Yet TSP instances with thousands of cities have been solved optimally using clever algorithms. The gap between brute force and clever algorithms is vast.
Why Can't We Just Check Promising Tours?
The challenge is that 'promising' tours are not easily identifiable. A tour that looks good locally (each edge is short) might be globally poor (overall structure is wrong). Every local decision has complex global consequences. This is the essence of NP-hardness.
```typescript
/**
 * Brute force TSP - generates all permutations
 *
 * Time Complexity: O(n! × n)
 * Space Complexity: O(n)
 *
 * Only feasible for n ≤ 12 or so
 */
function tspBruteForce(distances: number[][]): {
  tour: number[];
  cost: number;
} {
  const n = distances.length;

  // Generate all permutations of cities 1 to n-1
  // (city 0 is fixed as start/end)
  const cities = Array.from({ length: n - 1 }, (_, i) => i + 1);

  let bestTour: number[] = [];
  let bestCost = Infinity;

  // Heap's algorithm for generating permutations
  function permute(arr: number[], size: number): void {
    if (size === 1) {
      // Evaluate this tour: 0 → arr[0] → arr[1] → ... → 0
      let cost = distances[0][arr[0]]; // Start to first city
      for (let i = 0; i < arr.length - 1; i++) {
        cost += distances[arr[i]][arr[i + 1]];
      }
      cost += distances[arr[arr.length - 1]][0]; // Last city to start

      if (cost < bestCost) {
        bestCost = cost;
        bestTour = [0, ...arr, 0];
      }
      return;
    }

    for (let i = 0; i < size; i++) {
      permute(arr, size - 1);
      // Swap based on parity of size
      if (size % 2 === 0) {
        [arr[i], arr[size - 1]] = [arr[size - 1], arr[i]];
      } else {
        [arr[0], arr[size - 1]] = [arr[size - 1], arr[0]];
      }
    }
  }

  permute(cities, cities.length);

  return { tour: bestTour, cost: bestCost };
}

// Example usage
const distances = [
  [0, 10, 15, 20],
  [10, 0, 35, 25],
  [15, 35, 0, 30],
  [20, 25, 30, 0]
];

const result = tspBruteForce(distances);
console.log("Best tour:", result.tour); // [0, 1, 3, 2, 0] or equivalent
console.log("Cost:", result.cost);      // 80

// Time this for educational purposes
function timeIt(n: number) {
  // Create random distance matrix
  const d: number[][] = Array.from({ length: n }, () =>
    Array.from({ length: n }, () => Math.floor(Math.random() * 100))
  );
  // Make symmetric
  for (let i = 0; i < n; i++) {
    d[i][i] = 0;
    for (let j = i + 1; j < n; j++) {
      d[j][i] = d[i][j];
    }
  }

  const start = performance.now();
  tspBruteForce(d);
  const end = performance.now();
  console.log(`n = ${n}: ${(end - start).toFixed(2)} ms`);
}

// Try: timeIt(8), timeIt(10), timeIt(12)
// Watch the exponential growth!
```

While brute force takes O(n!) time, dynamic programming with bitmask representation achieves O(n² × 2ⁿ)—still exponential, but dramatically faster for medium-sized instances.
The Held-Karp Algorithm (1962):
Key Insight: Instead of considering all n! orderings, consider all 2ⁿ subsets of cities and the minimum cost to visit exactly that subset ending at each city.
State Definition:
dp[S][j] = minimum cost to start at city 0, visit exactly the cities in subset S (which must contain city 0 and city j), and end at city j.
Recurrence:
dp[S][j] = min over all i ∈ S, i ≠ j of: dp[S \ {j}][i] + distance(i, j)
Base Case: dp[{0}][0] = 0 (start at city 0, cost 0)
Answer: min over all j ≠ 0 of dp[{0,1,...,n-1}][j] + distance(j, 0)
```typescript
/**
 * Held-Karp Algorithm for TSP
 *
 * Time Complexity: O(n² × 2ⁿ)
 * Space Complexity: O(n × 2ⁿ)
 *
 * Feasible for n ≤ 20-25 depending on memory
 */
function tspHeldKarp(distances: number[][]): {
  tour: number[];
  cost: number;
} {
  const n = distances.length;
  const INF = Infinity;

  // dp[mask][j] = min cost to visit cities in mask, ending at j
  // mask is a bitmask where bit i is set if city i is visited
  const dp: number[][] = Array.from(
    { length: 1 << n },
    () => Array(n).fill(INF)
  );

  // parent[mask][j] = previous city before j in optimal path
  const parent: number[][] = Array.from(
    { length: 1 << n },
    () => Array(n).fill(-1)
  );

  // Base case: start at city 0
  dp[1][0] = 0; // mask = 1 means only city 0 is visited

  // Process subsets in order of size
  for (let mask = 1; mask < (1 << n); mask++) {
    // mask must include city 0 (our start)
    if ((mask & 1) === 0) continue;

    for (let last = 0; last < n; last++) {
      // 'last' must be in the current set
      if ((mask & (1 << last)) === 0) continue;
      // Skip if this is just the starting point
      if (mask === 1 && last === 0) continue;

      // prevMask = mask without 'last'
      const prevMask = mask ^ (1 << last);

      // Try all possible previous cities
      for (let prev = 0; prev < n; prev++) {
        if ((prevMask & (1 << prev)) === 0) continue;

        const newCost = dp[prevMask][prev] + distances[prev][last];
        if (newCost < dp[mask][last]) {
          dp[mask][last] = newCost;
          parent[mask][last] = prev;
        }
      }
    }
  }

  // Find the best final city before returning to 0
  const fullMask = (1 << n) - 1; // All cities visited
  let bestCost = INF;
  let lastCity = -1;

  for (let j = 1; j < n; j++) {
    const totalCost = dp[fullMask][j] + distances[j][0];
    if (totalCost < bestCost) {
      bestCost = totalCost;
      lastCity = j;
    }
  }

  // Reconstruct the tour
  const tour: number[] = [0]; // End at 0
  let mask = fullMask;
  let current = lastCity;

  while (current !== 0) {
    tour.push(current);
    const prev = parent[mask][current];
    mask ^= (1 << current);
    current = prev;
  }
  tour.push(0); // Start at 0
  tour.reverse();

  return { tour, cost: bestCost };
}

// Example usage
const dist = [
  [0, 10, 15, 20],
  [10, 0, 35, 25],
  [15, 35, 0, 30],
  [20, 25, 30, 0]
];

const result = tspHeldKarp(dist);
console.log("Optimal tour:", result.tour); // [0, 1, 3, 2, 0]
console.log("Optimal cost:", result.cost); // 80

// Performance comparison
function compareTimes(n: number): void {
  // Generate random symmetric distance matrix
  const d: number[][] = Array.from({ length: n }, () =>
    Array.from({ length: n }, () => Math.floor(Math.random() * 100))
  );
  for (let i = 0; i < n; i++) {
    d[i][i] = 0;
    for (let j = i + 1; j < n; j++) {
      d[j][i] = d[i][j];
    }
  }

  console.log(`\nTesting n = ${n}:`);

  // Held-Karp
  const startHK = performance.now();
  const resultHK = tspHeldKarp(d);
  const endHK = performance.now();
  console.log(`  Held-Karp: ${(endHK - startHK).toFixed(2)} ms, cost = ${resultHK.cost}`);

  // Brute force (only for small n)
  if (n <= 10) {
    const startBF = performance.now();
    const resultBF = tspBruteForce(d);
    const endBF = performance.now();
    console.log(`  Brute Force: ${(endBF - startBF).toFixed(2)} ms, cost = ${resultBF.cost}`);
  }
}

// Try: compareTimes(10), compareTimes(15), compareTimes(18)
```

Held-Karp improves from O(n!) to O(n² × 2ⁿ). For n = 20: the factorial is about 2.4 × 10¹⁸, but 20² × 2²⁰ ≈ 420 million. That's a factor of roughly 5.7 billion improvement! This makes n = 20-25 cities solvable exactly, though larger instances still require heuristics.
Since exact algorithms are impractical for large instances, we turn to approximation algorithms—polynomial-time algorithms with provable bounds on solution quality.
Definition: An algorithm is an α-approximation for minimization problem P if it runs in polynomial time and, on every instance, returns a solution of cost at most α × OPT, where OPT is the cost of an optimal solution.
For Metric TSP (triangle inequality holds), we have remarkable approximation results.
Algorithm 1: Nearest Neighbor (Heuristic, No Guarantee)
Greedy approach: always go to the nearest unvisited city. Fast but can be arbitrarily bad (no approximation guarantee).
Algorithm 2: Double-Tree Algorithm (2-approximation)
Guarantee: Solution ≤ 2 × OPT. Build a minimum spanning tree (MST), double every edge so that every vertex has even degree, find an Eulerian circuit of the doubled tree, and shortcut past repeated vertices to obtain a tour. The MST weighs at most OPT (deleting any edge of the optimal tour leaves a spanning tree), so the doubled tree weighs at most 2 × OPT, and under the triangle inequality shortcutting never increases the cost.
```typescript
/**
 * Nearest Neighbor Heuristic
 *
 * Simple greedy approach with no worst-case guarantee.
 * Often produces decent solutions quickly.
 *
 * Time Complexity: O(n²)
 */
function nearestNeighborTSP(distances: number[][]): {
  tour: number[];
  cost: number;
} {
  const n = distances.length;
  const visited: boolean[] = new Array(n).fill(false);
  const tour: number[] = [0];
  visited[0] = true;
  let totalCost = 0;

  for (let count = 1; count < n; count++) {
    const current = tour[tour.length - 1];
    let nearestCity = -1;
    let nearestDist = Infinity;

    for (let j = 0; j < n; j++) {
      if (!visited[j] && distances[current][j] < nearestDist) {
        nearestDist = distances[current][j];
        nearestCity = j;
      }
    }

    tour.push(nearestCity);
    visited[nearestCity] = true;
    totalCost += nearestDist;
  }

  // Return to start
  totalCost += distances[tour[tour.length - 1]][0];
  tour.push(0);

  return { tour, cost: totalCost };
}

/**
 * Double-Tree Algorithm (2-approximation for Metric TSP)
 *
 * Uses MST to build approximate tour.
 * Guarantee: Solution ≤ 2 × OPT (if triangle inequality holds)
 *
 * Time Complexity: O(n² log n) for MST + O(n) for tour
 */
function doubleTreeTSP(distances: number[][]): {
  tour: number[];
  cost: number;
} {
  const n = distances.length;

  // Step 1: Build MST using Prim's algorithm
  const inMST: boolean[] = new Array(n).fill(false);
  const key: number[] = new Array(n).fill(Infinity);
  const parent: number[] = new Array(n).fill(-1);
  key[0] = 0;

  for (let count = 0; count < n; count++) {
    // Find minimum key vertex not in MST
    let u = -1;
    let minKey = Infinity;
    for (let v = 0; v < n; v++) {
      if (!inMST[v] && key[v] < minKey) {
        minKey = key[v];
        u = v;
      }
    }

    inMST[u] = true;

    // Update keys of adjacent vertices
    for (let v = 0; v < n; v++) {
      if (!inMST[v] && distances[u][v] < key[v]) {
        key[v] = distances[u][v];
        parent[v] = u;
      }
    }
  }

  // Build adjacency list from MST (doubled edges)
  const adj: number[][] = Array.from({ length: n }, () => []);
  for (let v = 1; v < n; v++) {
    adj[v].push(parent[v]);
    adj[parent[v]].push(v);
    // Double the edges (for Eulerian graph)
    adj[v].push(parent[v]);
    adj[parent[v]].push(v);
  }

  // Step 2: Find Eulerian tour via DFS and shortcut
  const visited: boolean[] = new Array(n).fill(false);
  const tour: number[] = [];

  function dfs(u: number): void {
    if (!visited[u]) {
      visited[u] = true;
      tour.push(u);
    }
    while (adj[u].length > 0) {
      const v = adj[u].pop()!;
      // Remove reverse edge
      const idx = adj[v].indexOf(u);
      if (idx !== -1) adj[v].splice(idx, 1);
      dfs(v);
    }
  }

  dfs(0);
  tour.push(0); // Complete the cycle

  // Calculate actual tour cost
  let totalCost = 0;
  for (let i = 0; i < tour.length - 1; i++) {
    totalCost += distances[tour[i]][tour[i + 1]];
  }

  return { tour, cost: totalCost };
}

// Christofides algorithm sketch (1.5-approximation)
// Full implementation requires minimum-weight perfect matching
// which is complex - here's the conceptual outline:
/*
function christofidesTSP(distances: number[][]): { tour: number[], cost: number } {
  // Step 1: Build MST
  const mst = computeMST(distances);

  // Step 2: Find vertices with odd degree in MST
  const oddVertices = findOddDegreeVertices(mst);

  // Step 3: Find minimum weight perfect matching on odd vertices
  // This is the complex part - requires Blossom algorithm O(n³)
  const matching = minimumWeightPerfectMatching(distances, oddVertices);

  // Step 4: Combine MST and matching to get Eulerian multigraph
  const combined = combineGraphs(mst, matching);

  // Step 5: Find Eulerian tour
  const eulerTour = findEulerianTour(combined);

  // Step 6: Shortcut to Hamiltonian cycle
  return shortcutToTour(eulerTour, distances);
}
*/

// Example comparison
const cities = [
  [0, 10, 15, 20, 25],
  [10, 0, 35, 25, 30],
  [15, 35, 0, 30, 20],
  [20, 25, 30, 0, 15],
  [25, 30, 20, 15, 0]
];

console.log("Nearest Neighbor:", nearestNeighborTSP(cities));
console.log("Double Tree:", doubleTreeTSP(cities));
```

Christofides Algorithm (1.5-approximation):
For nearly five decades the best known polynomial-time approximation for metric TSP, Christofides' algorithm works as follows: build an MST; find a minimum-weight perfect matching on the MST's odd-degree vertices; combine the two into an Eulerian multigraph; find an Euler tour; and shortcut it to a Hamiltonian cycle.
Why 1.5? The MST costs at most OPT. The matching costs at most OPT/2: shortcutting the optimal tour down to just the odd-degree vertices (an even-sized set) splits it into two disjoint perfect matchings, and the cheaper of the two costs at most half of OPT. So MST + matching ≤ 1.5 × OPT, and shortcutting the Euler tour never increases the cost under the triangle inequality.
Hardness of Improvement:
Despite decades of research, Christofides' 1.5-approximation stood unbeaten for general metric TSP from 1976 until 2020, when Karlin, Klein, and Oveis Gharan gave a randomized algorithm with ratio 1.5 − ε for a minuscule constant ε (roughly 10⁻³⁶). For all practical purposes, 1.5 remains the barrier.
For TSP without the triangle inequality, no polynomial-time algorithm can achieve ANY constant approximation ratio (unless P = NP). The reduction from Hamiltonian Cycle assigns weight 1 to the graph's edges and an enormous weight to non-edges, so any constant-factor approximation could be used to decide whether a Hamiltonian cycle exists. This is why the metric vs. non-metric classification is critical!
For instances with thousands of cities, even polynomial-time approximation algorithms may be too slow, or we may want better-than-guaranteed solutions. Metaheuristics fill this gap.
Local Search: 2-opt
Start with any tour. Repeatedly improve it by removing two edges and reconnecting in the other possible way. Continue until no 2-opt improvement exists.
How 2-opt Works:
Given edges (a,b) and (c,d) in the tour a→b→...→c→d→...→a: remove both edges, reconnect as (a,c) and (b,d), and reverse the tour segment from b to c.
Accept the change if it reduces total cost.
```typescript
/**
 * 2-opt Local Search for TSP
 *
 * Iteratively improves a tour by swapping edge pairs.
 * Often produces near-optimal solutions.
 *
 * Time Complexity: O(n²) per iteration, total varies
 * Typically converges quickly on random instances.
 */
function twoOptImprove(tour: number[], distances: number[][]): {
  tour: number[];
  cost: number;
} {
  const n = tour.length - 1; // tour includes return to start
  let improved = true;

  // Calculate tour cost
  const tourCost = (t: number[]) => {
    let cost = 0;
    for (let i = 0; i < t.length - 1; i++) {
      cost += distances[t[i]][t[i + 1]];
    }
    return cost;
  };

  while (improved) {
    improved = false;

    for (let i = 0; i < n - 1; i++) {
      for (let j = i + 2; j < n; j++) {
        if (j === n - 1 && i === 0) continue; // Skip if adjacent

        // Current edges: (tour[i], tour[i+1]) and (tour[j], tour[j+1])
        // Proposed: (tour[i], tour[j]) and (tour[i+1], tour[j+1])
        const currentCost =
          distances[tour[i]][tour[i + 1]] + distances[tour[j]][tour[j + 1]];
        const newCost =
          distances[tour[i]][tour[j]] + distances[tour[i + 1]][tour[j + 1]];

        if (newCost < currentCost) {
          // Reverse the segment between i+1 and j
          const newTour = [
            ...tour.slice(0, i + 1),
            ...tour.slice(i + 1, j + 1).reverse(),
            ...tour.slice(j + 1)
          ];
          tour = newTour;
          improved = true;
        }
      }
    }
  }

  return { tour, cost: tourCost(tour) };
}

/**
 * Full 2-opt TSP solver
 * Start with nearest neighbor, then apply 2-opt
 */
function twoOptTSP(distances: number[][]): {
  tour: number[];
  cost: number;
} {
  // Get initial tour using nearest neighbor
  const initial = nearestNeighborTSP(distances);
  console.log("Initial (NN) cost:", initial.cost);

  // Improve with 2-opt
  const improved = twoOptImprove(initial.tour, distances);
  console.log("After 2-opt cost:", improved.cost);

  return improved;
}

/**
 * Or-opt: move a sequence of 1, 2, or 3 consecutive cities
 * to another position in the tour.
 */
function orOptImprove(tour: number[], distances: number[][]): {
  tour: number[];
  cost: number;
} {
  const n = tour.length - 1;
  let improved = true;

  const tourCost = (t: number[]) => {
    let cost = 0;
    for (let i = 0; i < t.length - 1; i++) {
      cost += distances[t[i]][t[i + 1]];
    }
    return cost;
  };

  while (improved) {
    improved = false;

    // Try moving segments of length 1, 2, 3
    for (let segLen = 1; segLen <= 3; segLen++) {
      for (let i = 1; i < n - segLen; i++) {
        // Segment: tour[i] to tour[i + segLen - 1]
        for (let j = 0; j < n; j++) {
          if (j >= i - 1 && j <= i + segLen) continue;

          // Try inserting segment after position j
          const segment = tour.slice(i, i + segLen);
          const before = tour.slice(1, i);
          const after = tour.slice(i + segLen, n);

          // Reconstruct based on j's position
          let newMiddle: number[];
          if (j < i) {
            newMiddle = [
              ...before.slice(0, j),
              ...segment,
              ...before.slice(j),
              ...after
            ];
          } else {
            const adjustedJ = j - segLen;
            newMiddle = [
              ...before,
              ...after.slice(0, adjustedJ - before.length + 1),
              ...segment,
              ...after.slice(adjustedJ - before.length + 1)
            ];
          }

          const newTour = [0, ...newMiddle, 0];
          const newCost = tourCost(newTour);

          if (newCost < tourCost(tour)) {
            tour = newTour;
            improved = true;
          }
        }
      }
    }
  }

  return { tour, cost: tourCost(tour) };
}

// Example: Large random instance
function solveRandomTSP(n: number) {
  // Generate random cities in 2D
  const cities = Array.from({ length: n }, () => ({
    x: Math.random() * 1000,
    y: Math.random() * 1000
  }));

  // Euclidean distances
  const dist: number[][] = Array.from({ length: n }, () => Array(n).fill(0));
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      const dx = cities[i].x - cities[j].x;
      const dy = cities[i].y - cities[j].y;
      dist[i][j] = Math.sqrt(dx * dx + dy * dy);
    }
  }

  console.log(`\nSolving TSP for ${n} random cities:`);
  const start = performance.now();
  const result = twoOptTSP(dist);
  const end = performance.now();

  console.log(`Time: ${(end - start).toFixed(2)} ms`);
  console.log(`Final cost: ${result.cost.toFixed(2)}`);
}

// solveRandomTSP(100);  // Fast
// solveRandomTSP(500);  // Still reasonable
// solveRandomTSP(1000); // May take a few seconds
```

Other Important Metaheuristics:
Simulated Annealing: Accept worse moves probabilistically (probability decreases over time). Escapes local optima.
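Here is a minimal simulated-annealing sketch for TSP. The function name `simulatedAnnealingTSP` and the temperature/cooling parameters are illustrative choices, not tuned values; the neighborhood move is a random segment reversal (a 2-opt move), and worse tours are accepted with probability exp(−Δ/T).

```typescript
/**
 * Simulated annealing for TSP - minimal sketch (assumes n >= 3).
 * Neighborhood move: reverse a random segment of the tour.
 */
function simulatedAnnealingTSP(
  distances: number[][],
  iterations = 100_000,
  startTemp = 1000,
  cooling = 0.9995
): { tour: number[]; cost: number } {
  const n = distances.length;

  // Cost of a tour stored as a permutation of 0..n-1 (closing edge included)
  const tourCost = (t: number[]) => {
    let c = 0;
    for (let i = 0; i < n; i++) c += distances[t[i]][t[(i + 1) % n]];
    return c;
  };

  let tour = Array.from({ length: n }, (_, i) => i); // start from 0,1,...,n-1
  let cost = tourCost(tour);
  let best = { tour: [...tour], cost };
  let temp = startTemp;

  for (let it = 0; it < iterations; it++) {
    // Pick a random segment [i, j] with 1 <= i < j <= n-1 and reverse it
    const i = 1 + Math.floor(Math.random() * (n - 2));
    const j = i + 1 + Math.floor(Math.random() * (n - 1 - i));
    const candidate = [
      ...tour.slice(0, i),
      ...tour.slice(i, j + 1).reverse(),
      ...tour.slice(j + 1)
    ];
    const candidateCost = tourCost(candidate);
    const delta = candidateCost - cost;

    // Always accept improvements; accept worse tours with probability exp(-delta/T)
    if (delta < 0 || Math.random() < Math.exp(-delta / temp)) {
      tour = candidate;
      cost = candidateCost;
      if (cost < best.cost) best = { tour: [...tour], cost };
    }
    temp *= cooling; // Geometric cooling schedule
  }

  return { tour: [...best.tour, best.tour[0]], cost: best.cost };
}
```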
Genetic Algorithms: Maintain a population of tours. Combine (crossover) good tours and mutate. Evolve over generations.
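As one concrete example of recombining tours, below is an order crossover (OX) sketch, a common crossover operator for permutation encodings. The name `orderCrossover` is illustrative; a full genetic algorithm would also need selection and mutation.

```typescript
/**
 * Order crossover (OX) for permutation-encoded tours.
 * Copies a random slice of parent1, then fills the remaining positions
 * with the missing cities in the order they appear in parent2.
 */
function orderCrossover(parent1: number[], parent2: number[]): number[] {
  const n = parent1.length;
  const i = Math.floor(Math.random() * n);
  const j = i + Math.floor(Math.random() * (n - i));

  const child: number[] = new Array(n).fill(-1);
  const used = new Set<number>();

  // Copy the slice [i, j] from parent1
  for (let k = i; k <= j; k++) {
    child[k] = parent1[k];
    used.add(parent1[k]);
  }

  // Fill the remaining slots from parent2, preserving its order
  let fill = 0;
  for (const city of parent2) {
    if (used.has(city)) continue;
    while (child[fill] !== -1) fill++;
    child[fill] = city;
  }
  return child;
}

// Example: orderCrossover([0,1,2,3,4,5], [3,5,0,4,1,2]) always yields a valid permutation
```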
Ant Colony Optimization: Simulated ants build tours probabilistically based on pheromone trails. Good edges accumulate more pheromone.
Lin-Kernighan (LKH): Sophisticated variable-depth search. Considers k-opt moves of varying k. State-of-the-art for practical TSP solving.
State of the Art:
Modern TSP solvers are astonishingly capable. Concorde (exact) has optimally solved an instance with 85,900 cities, and LKH (heuristic) handles instances with millions of cities, typically finding solutions within a small fraction of a percent of optimal.
TSP appears far beyond the obvious route planning context. Recognizing TSP structure in problems is a valuable skill.
Manufacturing: Printed Circuit Board (PCB) Drilling
A drill must visit thousands of hole locations on a PCB. Minimizing travel distance reduces production time. This is directly TSP with Euclidean distances.
Logistics: Vehicle Routing Problem (VRP)
Multiple vehicles serve multiple customers. Each vehicle solves a TSP-like subproblem. Efficient routing saves fuel and time.
Genomics: DNA Sequencing
Given DNA fragments, find their correct order. Overlapping fragments define a distance metric. Finding the assembly is related to shortest superstring, which reduces to TSP variants.
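To see the TSP structure, one illustrative way to turn fragment overlaps into "distances" is sketched below (the helpers `overlapLength` and `assemblyCost` are hypothetical names): the cost of placing fragment b immediately after fragment a is the number of new characters b contributes, i.e. b.length minus the longest suffix of a that is a prefix of b. Note that the resulting instance is asymmetric and generally non-metric.

```typescript
/** Longest suffix of a that is also a prefix of b. */
function overlapLength(a: string, b: string): number {
  const max = Math.min(a.length, b.length);
  for (let len = max; len > 0; len--) {
    if (a.endsWith(b.slice(0, len))) return len;
  }
  return 0;
}

/** "Distance" from a to b: new characters b contributes when appended after a. */
function assemblyCost(a: string, b: string): number {
  return b.length - overlapLength(a, b);
}

// Fragments of the string "ATGCGTAC"
const fragments = ["ATGCG", "GCGTA", "GTAC"];
// assemblyCost("ATGCG", "GCGTA") === 2  (overlap "GCG")
// assemblyCost("GCGTA", "GTAC")  === 1  (overlap "GTA")
// A minimum-cost path through all fragments corresponds to a short assembly.
```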
| Domain | Problem | TSP Formulation |
|---|---|---|
| Manufacturing | Robotic arm path planning | Minimize arm movement between operations |
| Astronomy | Telescope scheduling | Minimize slew time between observations |
| Materials | 3D printing path | Minimize print head travel (non-extruding) |
| Warehouse | Order picking | Minimize picker travel in warehouse |
| Healthcare | MRI imaging | Minimize coil movement between slices |
| Networks | Packet routing | Minimize latency in multi-hop paths |
| Entertainment | Rock band tour routing | Minimize travel between concert venues |
```typescript
/**
 * Warehouse Pick Path Optimization
 *
 * A picker must collect items from various locations.
 * Minimize total walking distance.
 */

interface Location {
  aisle: number;
  position: number; // Position along the aisle
}

interface PickOrder {
  itemId: string;
  location: Location;
}

/**
 * Calculate distance between two warehouse locations
 * Simplified model: aisles are parallel, travel at ends
 */
function warehouseDistance(a: Location, b: Location): number {
  if (a.aisle === b.aisle) {
    // Same aisle: direct distance
    return Math.abs(a.position - b.position);
  } else {
    // Different aisles: go to end, cross, go to position
    const aisleWidth = 3;   // meters between aisles
    const aisleLength = 50; // meters per aisle

    // Strategy: go to nearest end of aisle
    const crossingCost = Math.abs(b.aisle - a.aisle) * aisleWidth;

    // Either go to front (position 0) or back (position 50)
    const viaFront = a.position + crossingCost + b.position;
    const viaBack =
      (aisleLength - a.position) + crossingCost + (aisleLength - b.position);

    return Math.min(viaFront, viaBack);
  }
}

/**
 * Optimize pick path using TSP
 */
function optimizePickPath(
  orders: PickOrder[],
  startLocation: Location
): {
  sequence: PickOrder[];
  totalDistance: number;
} {
  const n = orders.length + 1; // +1 for start location

  // Build distance matrix
  const locations: Location[] = [startLocation, ...orders.map(o => o.location)];
  const distances: number[][] = Array.from(
    { length: n },
    (_, i) => Array.from(
      { length: n },
      (_, j) => warehouseDistance(locations[i], locations[j])
    )
  );

  // Solve TSP (for small orders, exact; for large, heuristic)
  let result: { tour: number[], cost: number };
  if (n <= 12) {
    result = tspHeldKarp(distances);
  } else {
    // Use nearest neighbor + 2-opt for larger orders
    const nn = nearestNeighborTSP(distances);
    result = twoOptImprove(nn.tour, distances);
  }

  // Remove start/end from tour, map back to orders
  const sequence = result.tour
    .slice(1, -1) // Remove start and end (both are location 0)
    .map(i => orders[i - 1]);

  return { sequence, totalDistance: result.cost };
}

// Example usage
const pickList: PickOrder[] = [
  { itemId: "SKU001", location: { aisle: 1, position: 10 } },
  { itemId: "SKU002", location: { aisle: 3, position: 45 } },
  { itemId: "SKU003", location: { aisle: 1, position: 40 } },
  { itemId: "SKU004", location: { aisle: 5, position: 20 } },
  { itemId: "SKU005", location: { aisle: 2, position: 5 } },
];

const start = { aisle: 0, position: 0 }; // Dispatch area

const optimized = optimizePickPath(pickList, start);
console.log("Optimal pick sequence:");
optimized.sequence.forEach((item, i) => {
  console.log(`  ${i + 1}. ${item.itemId} at aisle ${item.location.aisle}, pos ${item.location.position}`);
});
console.log(`Total walking distance: ${optimized.totalDistance.toFixed(1)} meters`);
```

The Traveling Salesman Problem embodies the fundamental challenge of combinatorial optimization. Its simple statement belies its profound difficulty.
What We've Learned:
You now understand the Traveling Salesman Problem—its definition, complexity, exact algorithms, approximation guarantees, practical heuristics, and real-world applications. You can recognize TSP structures in diverse problems and choose appropriate solution strategies. Next, we'll synthesize these lessons to learn how to recognize intractable problems in general.