Imagine you're solving a puzzle where forward checking tells you: "Variable Y can only be {2} and variable Z can also only be {2}." It sounds fine—each variable has a valid value. But there's a hidden constraint: Y ≠ Z. Both cannot be 2 simultaneously. Forward checking misses this because it only looks at constraints between assigned and unassigned variables, not between pairs of unassigned variables.
Arc consistency addresses this fundamental limitation. It doesn't just propagate from the current assignment—it examines ALL pairs of constrained variables and eliminates values that have no viable partner in the related variable's domain. This transitive propagation continues until no more values can be removed: a fixed point.
The Power of Arc Consistency:
Arc consistency (AC) can solve many constraint satisfaction problems entirely through propagation, with no search at all! For other problems, it dramatically reduces domain sizes before search begins and after each assignment, pruning far more of the search space than forward checking.
This page explores arc consistency in depth: its formal definition, the classic AC-3 algorithm, why it works, when it's worth the cost, and its relationship to stronger consistency notions.
By the end of this page, you'll understand what arc consistency means formally, how the AC-3 algorithm achieves it efficiently, how to combine arc consistency with search (MAC), when arc consistency is and isn't sufficient, and how it relates to stronger consistency concepts like path consistency and k-consistency.
Arc consistency is a property of constraint satisfaction problems that ensures every value in every domain is "supported" by the constraints. Formally:
Definition (Arc Consistency):
A CSP is arc consistent if for every binary constraint C(X, Y) and for every value x in the domain of X, there exists at least one value y in the domain of Y such that C(x, y) is satisfied.
Equivalently: a value x ∈ D(X) is arc consistent with respect to constraint C(X, Y) if there exists a support y ∈ D(Y) such that (x, y) satisfies C.
Key Insight:
If a value x has no support in some related domain, then x cannot possibly appear in any solution. We can safely remove it—and this removal might trigger further removals in a cascade.
```text
ARC CONSISTENCY DEFINITION:
═══════════════════════════════════════════════════════════

For a binary CSP with:
  - Variables: X₁, X₂, ..., Xₙ
  - Domains: D₁, D₂, ..., Dₙ
  - Binary Constraints: C(Xᵢ, Xⱼ) for various pairs

The CSP is ARC CONSISTENT if and only if:

  ∀ constraint C(X, Y):
    ∀ value x ∈ D(X): ∃ value y ∈ D(Y): C(x, y) is satisfied
    AND
    ∀ value y ∈ D(Y): ∃ value x ∈ D(X): C(x, y) is satisfied

In words:
"For every constraint, and for every value in either variable's
domain, there is at least one compatible value in the other
variable's domain."

TERMINOLOGY:
  - "Arc": A directed edge (X → Y) representing the need for
    each value in X to have support in Y
  - "Support": A value in Y that is compatible with a value in X
  - "Revise": Check an arc and remove unsupported values
  - "Propagate": Revise arcs until no changes occur (fixed point)
```

Why "Arc" Consistency?
In the constraint graph, each binary constraint is an edge. We consider directed arcs: (X, Y) represents checking that every value in X has support in Y, while (Y, X) represents the reverse. A constraint creates two arcs that must both be consistent.
Node Consistency vs Arc Consistency:
Node consistency is the weaker property that every value in a variable's domain satisfies that variable's unary constraints; it is usually enforced in a single preprocessing pass. Arc consistency subsumes node consistency when we model unary constraints as binary constraints with a dummy variable.
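As a quick illustration, here is a minimal sketch of enforcing node consistency directly, assuming a simple UnaryConstraint shape (illustrative only, not part of the CSP interface used elsewhere on this page) and the same Map-of-Set domain representation as the code below:

```typescript
// Hypothetical unary constraint shape for illustration.
interface UnaryConstraint<T> {
  variable: string;
  isSatisfied(value: T): boolean;
}

/** Remove every value that violates a unary constraint on its variable. */
function enforceNodeConsistency<T>(
  domains: Map<string, Set<T>>,
  unaryConstraints: UnaryConstraint<T>[]
): boolean {
  for (const c of unaryConstraints) {
    const domain = domains.get(c.variable);
    if (!domain) continue;
    for (const value of [...domain]) {
      if (!c.isSatisfied(value)) domain.delete(value); // prune violating value
    }
    if (domain.size === 0) return false; // wipeout: no solution
  }
  return true;
}
```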
A CSP being arc consistent does NOT guarantee a solution exists. Arc consistency can reduce domains to non-empty but still unsolvable configurations. For example, three mutually unequal variables that each keep domain {R, G} are arc consistent, yet the triangle they form cannot be 2-colored. Arc consistency is a filter that removes obviously bad values, but subtle conflicts among 3+ variables may remain. This is why we combine AC with search.
AC-3 (Arc Consistency Algorithm #3) is the most widely used algorithm for enforcing arc consistency. It's simple to understand, implement, and reasonably efficient for most practical purposes.
Algorithm Overview:
```typescript
interface Arc {
  x: string;  // Variable to revise
  y: string;  // Variable providing support
}

/**
 * AC-3: Achieve arc consistency for a binary CSP
 * Returns true if the CSP is arc consistent (possibly with reduced domains)
 * Returns false if a domain becomes empty (no solution exists)
 */
function ac3<T>(
  csp: CSP<T>,
  domains: Map<string, Set<T>>
): boolean {
  // Initialize queue with all arcs
  const queue: Arc[] = [];
  for (const constraint of csp.constraints) {
    const [x, y] = constraint.variables;
    queue.push({ x, y });
    queue.push({ x: y, y: x });  // Both directions!
  }

  while (queue.length > 0) {
    const arc = queue.shift()!;

    if (revise(arc.x, arc.y, domains, csp)) {
      // Domain of arc.x was reduced
      if (domains.get(arc.x)!.size === 0) {
        return false;  // Domain wipeout - no solution!
      }

      // Add all arcs (Z, arc.x) where Z ≠ arc.y
      for (const constraint of csp.constraints) {
        if (!constraint.variables.includes(arc.x)) continue;
        for (const z of constraint.variables) {
          if (z !== arc.x && z !== arc.y) {
            // Z might have lost support now that arc.x's domain shrank
            queue.push({ x: z, y: arc.x });
          }
        }
      }
    }
  }

  return true;  // Arc consistency achieved!
}

/**
 * Revise: Remove values from D(x) that have no support in D(y)
 * Returns true if any values were removed
 */
function revise<T>(
  x: string,
  y: string,
  domains: Map<string, Set<T>>,
  csp: CSP<T>
): boolean {
  let revised = false;
  const xDomain = domains.get(x)!;
  const yDomain = domains.get(y)!;

  // Find the constraint between x and y
  const constraint = csp.constraints.find(c =>
    c.variables.includes(x) && c.variables.includes(y)
  );
  if (!constraint) return false;  // No constraint between these variables

  // Check each value in x's domain
  const toRemove: T[] = [];
  for (const xVal of xDomain) {
    // Look for a supporting value in y's domain
    let hasSupport = false;
    for (const yVal of yDomain) {
      const testAssignment = new Map([[x, xVal], [y, yVal]]);
      if (constraint.isSatisfied(testAssignment)) {
        hasSupport = true;
        break;  // Found support, no need to check more
      }
    }

    if (!hasSupport) {
      // No support found - this value must be removed
      toRemove.push(xVal);
      revised = true;
    }
  }

  // Remove unsupported values
  for (const val of toRemove) {
    xDomain.delete(val);
  }

  return revised;
}
```

Complexity Analysis:
There are O(c) arcs. Each arc (X, Y) re-enters the queue only when a value is deleted from D(Y), which can happen at most d times, and each call to revise costs O(d²) in the worst case. Total: O(c × d³).
This is polynomial in the size of the problem, making AC-3 tractable even for large CSPs.
Let's trace AC-3 on a simple example to see how domain reductions cascade through the constraint network.
```text
PROBLEM: Graph Coloring with 3 Variables and 2 Colors
═══════════════════════════════════════════════════════════

Variables: A, B, C
Domains (initial): A = {R, G}, B = {R, G}, C = {R, G}
Constraints: A ≠ B, B ≠ C, A ≠ C (triangle graph)

Initial Queue: [(A,B), (B,A), (B,C), (C,B), (A,C), (C,A)]

TRACE:
═══════════════════════════════════════════════════════════

Step 1: Process (A, B) - "Does each value in A have support in B?"
  A = R: Support in B? B = G works (R ≠ G ✓). Has support.
  A = G: Support in B? B = R works (G ≠ R ✓). Has support.
  No change to A's domain.

Step 2: Process (B, A) - "Does each value in B have support in A?"
  B = R: Support in A? A = G works. Has support.
  B = G: Support in A? A = R works. Has support.
  No change to B's domain.

Step 3: Process (B, C) - Similar analysis. No change.
Step 4: Process (C, B) - Similar analysis. No change.

Step 5: Process (A, C) - "Does each value in A have support in C?"
  A = R: Support in C? C = G works. Has support.
  A = G: Support in C? C = R works. Has support.
  No change.

Step 6: Process (C, A) - Similar. No change.

Queue is empty! CSP is arc consistent.

RESULT: A = {R, G}, B = {R, G}, C = {R, G}
Arc consistency achieved, but domains not reduced.
(Note: this CSP actually has NO solution - a triangle cannot be
2-colored - yet arc consistency alone does not detect that.)

═══════════════════════════════════════════════════════════
NEW EXAMPLE: Now let's constrain further
═══════════════════════════════════════════════════════════

Add unary constraint: A = R (A is pre-colored Red)
This reduces: A = {R}

Run AC-3 again with A = {R}:

Step 1: Process (A, B) - Does R in A have support in B?
  A = R: Check B = R? No (R ≠ R is false). Check B = G? Yes (R ≠ G ✓).
  A = R has support. No change.

Step 2: Process (B, A) - Does each value in B have support in A?
  B = R: Check A = R? No (R ≠ R false). No other A values!
         B = R has NO SUPPORT! Remove R from B.
  B = G: Check A = R? Yes (G ≠ R ✓). Has support.
  B changed from {R, G} to {G}!
  Add to queue: (C, B)   [arcs (Z, B) for Z ≠ A]

Step 3: Process (A, C) - Does R in A have support in C?
  A = R: Check C = R? No. Check C = G? Yes. Has support.
  No change.

Step 4: Process (C, A) - Does each value in C have support in A = {R}?
  C = R: Check A = R? No (R ≠ R false). No support! Remove R from C.
  C = G: Check A = R? Yes (G ≠ R ✓). Has support.
  C changed from {R, G} to {G}!
  Add to queue: (B, C)   [arcs (Z, C) for Z ≠ A]

Step 5: Process (C, B) - Does G in C have support in B = {G}?
  C = G: Check B = G? No (G ≠ G false). NO SUPPORT!
  Must remove G from C, but C = {G}!
  C becomes {} - EMPTY DOMAIN!

═══════════════════════════════════════════════════════════
FAILURE DETECTED: Domain wipeout for C
No solution exists: a triangle cannot be 2-colored.
AC-3 detected this WITHOUT any search!
═══════════════════════════════════════════════════════════
```

Notice how the initial constraint A = R triggered a cascade: B lost R, C lost R, then C lost G because its only remaining value conflicted with B's only remaining value. This cascade, invisible to forward checking alone, revealed the unsolvability through pure propagation.
While running AC-3 once as preprocessing helps, the real power comes from maintaining arc consistency (MAC) throughout the search process. After each assignment, we run arc consistency again to propagate the implications.
MAC Algorithm:
MAC combines the propagation power of AC-3 with systematic search.
```typescript
/**
 * MAC: Maintaining Arc Consistency solver
 * Combines backtracking search with AC-3 propagation after each assignment
 */
function macSearch<T>(csp: CSP<T>): Map<string, T> | null {
  // Initialize domains
  const domains = new Map<string, Set<T>>();
  for (const v of csp.variables) {
    domains.set(v, new Set(csp.domains.get(v) || []));
  }

  // Initial AC-3 to reduce domains
  if (!ac3(csp, domains)) {
    return null;  // No solution exists
  }

  return macBacktrack(new Map(), domains, csp);
}

function macBacktrack<T>(
  assignment: Map<string, T>,
  domains: Map<string, Set<T>>,
  csp: CSP<T>
): Map<string, T> | null {
  // Check if all variables are assigned
  if (assignment.size === csp.variables.length) {
    return new Map(assignment);
  }

  // All unassigned domains are singletons? Extract the solution
  const unassigned = csp.variables.filter(v => !assignment.has(v));
  const allSingletons = unassigned.every(v => domains.get(v)!.size === 1);
  if (allSingletons) {
    const solution = new Map(assignment);
    for (const v of unassigned) {
      solution.set(v, [...domains.get(v)!][0]);
    }
    return solution;
  }

  // Select variable with MRV
  const variable = selectMRV(assignment, domains, csp);
  const currentDomain = [...domains.get(variable)!];

  for (const value of currentDomain) {
    // Save domains for backtracking
    const savedDomains = cloneDomains(domains);

    // Make assignment
    assignment.set(variable, value);
    domains.set(variable, new Set([value]));  // Reduce to singleton

    // Run AC-3 starting from arcs involving 'variable'
    const acSuccess = ac3Incremental(variable, domains, csp);

    if (acSuccess) {
      const result = macBacktrack(assignment, domains, csp);
      if (result !== null) {
        return result;
      }
    }

    // Restore domains on backtrack
    for (const [v, d] of savedDomains) {
      domains.set(v, d);
    }
    assignment.delete(variable);
  }

  return null;
}

/**
 * Incremental AC-3: Start from arcs affected by a new assignment
 */
function ac3Incremental<T>(
  assignedVar: string,
  domains: Map<string, Set<T>>,
  csp: CSP<T>
): boolean {
  const queue: Arc[] = [];

  // Only add arcs pointing TO the assigned variable
  for (const constraint of csp.constraints) {
    if (!constraint.variables.includes(assignedVar)) continue;
    for (const other of constraint.variables) {
      if (other !== assignedVar) {
        queue.push({ x: other, y: assignedVar });
      }
    }
  }

  // Run normal AC-3 from this initial queue
  while (queue.length > 0) {
    const arc = queue.shift()!;
    if (revise(arc.x, arc.y, domains, csp)) {
      if (domains.get(arc.x)!.size === 0) {
        return false;
      }
      for (const constraint of csp.constraints) {
        if (!constraint.variables.includes(arc.x)) continue;
        for (const z of constraint.variables) {
          if (z !== arc.x && z !== arc.y) {
            queue.push({ x: z, y: arc.x });
          }
        }
      }
    }
  }

  return true;
}

function cloneDomains<T>(domains: Map<string, Set<T>>): Map<string, Set<T>> {
  const clone = new Map<string, Set<T>>();
  for (const [v, d] of domains) {
    clone.set(v, new Set(d));
  }
  return clone;
}
```

MAC vs Forward Checking:
MAC is strictly more powerful than forward checking: every value forward checking would prune, MAC prunes too, and MAC keeps propagating between unassigned variables until a fixed point, so it can expose failures that forward checking misses.
When MAC Wins: MAC pays off most on densely constrained, hard instances, where a single assignment triggers long propagation chains and dead ends are detected many levels earlier than forward checking would find them.
The Cost:
MAC does more work per node (O(cd²) vs O(cd) for FC). Whether this extra work pays off depends on how much additional pruning it achieves. Empirically, MAC outperforms FC on most benchmark problems, sometimes dramatically.
| Aspect | Forward Checking | MAC (with AC-3) |
|---|---|---|
| Propagation Scope | One hop from assignment | Transitive closure (fixed point) |
| Time per Node | O(c × d) | O(c × d²) worst case |
| Pruning Power | Medium | High |
| Implementation | Simpler | More complex |
| Best For | Sparse constraints, easy problems | Dense constraints, hard problems |
| Detects FC-missed failures | No | Yes |
While AC-3's O(cd³) complexity is polynomial, the inner loop (checking all pairs of values for support) can be expensive. More sophisticated algorithms precompute support information to avoid redundant work.
AC-4: Optimal Worst-Case Complexity
AC-4 achieves O(cd²) complexity, which is optimal for arc consistency, by maintaining explicit support information. For each constraint C(X, Y) and each value x in D(X), AC-4 tracks a counter of how many values in D(Y) support x; for each value y in D(Y), it also keeps a list of the values in D(X) that y supports.
When a value y is removed from D(Y), we decrement the counter for each x it supported. When a counter hits zero, x loses all support and is removed.
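To make this concrete, here is a hedged sketch of AC-4's bookkeeping for a single directed arc (X, Y), using the same Map-of-Set domain representation as above; the Ac4Structures shape and function names are illustrative, not a standard API:

```typescript
// Hypothetical bookkeeping for one directed arc (X, Y) under AC-4.
interface Ac4Structures<T> {
  counter: Map<T, number>;    // counter.get(x) = number of supports x has in D(Y)
  supportedBy: Map<T, T[]>;   // supportedBy.get(y) = values x in D(X) that y supports
}

function buildAc4Structures<T>(
  xDomain: Set<T>,
  yDomain: Set<T>,
  satisfies: (x: T, y: T) => boolean
): Ac4Structures<T> {
  const counter = new Map<T, number>();
  const supportedBy = new Map<T, T[]>();
  for (const y of yDomain) supportedBy.set(y, []);
  for (const x of xDomain) {
    let count = 0;
    for (const y of yDomain) {
      if (satisfies(x, y)) {
        count++;
        supportedBy.get(y)!.push(x);
      }
    }
    counter.set(x, count);  // x is unsupported if count === 0
  }
  return { counter, supportedBy };
}

// When y is deleted from D(Y): decrement the counter of every x it supported;
// any x whose counter reaches zero has lost all support and is removed in turn.
function onValueRemoved<T>(
  y: T,
  structures: Ac4Structures<T>,
  xDomain: Set<T>
): T[] {
  const newlyUnsupported: T[] = [];
  for (const x of structures.supportedBy.get(y) ?? []) {
    const c = (structures.counter.get(x) ?? 0) - 1;
    structures.counter.set(x, c);
    if (c === 0 && xDomain.delete(x)) newlyUnsupported.push(x);
  }
  return newlyUnsupported;
}
```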
Trade-off: AC-4 has better worst-case complexity but higher overhead for setup and maintenance. AC-3 is often faster in practice for small to medium problems.
AC-2001 / AC-3.1:
These variants optimize AC-3 by remembering the last support found for each (value, constraint) pair. When re-checking, they start from the last known support rather than searching from scratch. This achieves O(cd²) time while being simpler than AC-4.
```typescript
// AC-2001: Remember last support to avoid redundant searching
interface SupportCache<T> {
  // For each (variable, value, constraint), store the last known support
  lastSupport: Map<string, T>;  // Key: "var:value:constraintId"
}

function reviseWithLastSupport<T>(
  x: string,
  y: string,
  domains: Map<string, Set<T>>,
  csp: CSP<T>,
  cache: SupportCache<T>
): boolean {
  let revised = false;
  const xDomain = domains.get(x)!;
  const yDomain = domains.get(y)!;
  const constraint = findConstraint(x, y, csp);
  if (!constraint) return false;

  const toRemove: T[] = [];

  for (const xVal of xDomain) {
    const cacheKey = `${x}:${xVal}:${constraint.id}`;

    // Start from last known support if it still exists
    const lastKnown = cache.lastSupport.get(cacheKey);
    if (lastKnown !== undefined && yDomain.has(lastKnown)) {
      // Last support still valid - skip expensive search
      continue;
    }

    // Last support gone - search for new support
    let newSupport: T | undefined = undefined;
    for (const yVal of yDomain) {
      const testAssignment = new Map([[x, xVal], [y, yVal]]);
      if (constraint.isSatisfied(testAssignment)) {
        newSupport = yVal;
        break;  // Found new support
      }
    }

    if (newSupport !== undefined) {
      // Cache the new support
      cache.lastSupport.set(cacheKey, newSupport);
    } else {
      // No support found
      toRemove.push(xVal);
      revised = true;
    }
  }

  for (const val of toRemove) {
    xDomain.delete(val);
  }

  return revised;
}

// Key insight: If the last support is still in the domain,
// we don't need to search at all! This often reduces
// the work from O(d) to O(1) per value check.
```

| Algorithm | Worst-Case Time | Space | Practical Performance |
|---|---|---|---|
| AC-3 | O(c × d³) | O(c × d) | Good for small/medium problems |
| AC-4 | O(c × d²) | O(c × d²) | Better for large problems, more setup |
| AC-2001/AC-3.1 | O(c × d²) | O(c × d) | Often best practical choice |
| AC-6 | O(c × d²) | O(c × d) | Lazier AC-4: keeps a single support per value |
| AC-7 | O(c × d²) | O(c × d) | Exploits bidirectionality: if y supports x, then x supports y |
Real-world CSPs often have constraints involving more than two variables—sum constraints, all-different constraints, table constraints. Generalized Arc Consistency (GAC) extends arc consistency to these higher-arity constraints.
Definition (GAC):
A value x in D(X) is GAC with respect to constraint C(X, Y₁, Y₂, ..., Yₖ) if there exist values y₁ ∈ D(Y₁), y₂ ∈ D(Y₂), ..., yₖ ∈ D(Yₖ) such that C(x, y₁, y₂, ..., yₖ) is satisfied.
In words: every value must have a complete supporting tuple from the other variables' domains.
The Challenge:
For an n-ary constraint with domain size d, finding support naively requires checking up to d^(n-1) tuples, which is exponential in the constraint's arity. With d = 10 and a 5-ary constraint, for example, that is already 10,000 candidate tuples per value.
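To see where that cost comes from, here is a hedged sketch of a naive GAC support check for a single value; the NaryConstraint shape is assumed for illustration and is not the binary CSP interface used elsewhere on this page.

```typescript
// Hypothetical n-ary constraint shape for illustration only.
interface NaryConstraint<T> {
  variables: string[];
  isSatisfied(assignment: Map<string, T>): boolean;
}

/**
 * Naive GAC check for one value of one variable: enumerate the cartesian
 * product of the other variables' domains until a supporting tuple is found.
 * Worst case: d^(n-1) tuples per value - the cost specialized propagators avoid.
 */
function hasGacSupport<T>(
  variable: string,
  value: T,
  constraint: NaryConstraint<T>,
  domains: Map<string, Set<T>>
): boolean {
  const others = constraint.variables.filter(v => v !== variable);

  const search = (index: number, partial: Map<string, T>): boolean => {
    if (index === others.length) {
      return constraint.isSatisfied(partial);  // complete tuple: test it
    }
    const nextVar = others[index];
    for (const candidate of domains.get(nextVar)!) {
      partial.set(nextVar, candidate);
      if (search(index + 1, partial)) return true;  // found a supporting tuple
      partial.delete(nextVar);
    }
    return false;
  };

  return search(0, new Map<string, T>([[variable, value]]));
}
```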
The Solution: Specialized Propagators
For common global constraints, specialized propagation algorithms achieve GAC (or a strong approximation of it) efficiently: AllDifferent via bipartite matching (Régin's algorithm), sum and other arithmetic constraints via bounds reasoning, and table constraints via compact support structures.
These specialized propagators are why modern constraint solvers are so effective—they exploit constraint structure rather than treating all constraints generically.
```typescript
/**
 * AllDifferent propagation sketch.
 *
 * Sound rule (always valid): if a variable's domain is a singleton {v},
 * remove v from every other variable's domain.
 *
 * "Hidden single" rule: if a value v appears in only one variable's domain,
 * fix that variable to v. NOTE: this is only sound when every value must be
 * used (a permutation constraint, e.g. a Sudoku row); for general
 * AllDifferent it can over-prune, so it is guarded by a flag here.
 */
interface AllDifferentPropagator<T> {
  variables: string[];

  // Propagate: remove values that violate the all-different constraint
  propagate(domains: Map<string, Set<T>>): boolean;  // false if wipeout
}

function createAllDiffPropagator<T>(
  variables: string[],
  isPermutation: boolean = false  // enable the hidden-single rule only for permutations
): AllDifferentPropagator<T> {
  return {
    variables,

    propagate(domains: Map<string, Set<T>>): boolean {
      let changed = true;
      while (changed) {
        changed = false;

        // Sound rule: a singleton domain excludes its value everywhere else
        for (const v of variables) {
          const domain = domains.get(v)!;
          if (domain.size === 1) {
            const val = [...domain][0];
            for (const other of variables) {
              if (other !== v && domains.get(other)!.delete(val)) {
                changed = true;
                if (domains.get(other)!.size === 0) {
                  return false;  // Domain wipeout
                }
              }
            }
          }
        }

        if (!isPermutation) continue;

        // Hidden-single rule (permutation case only): rebuild the
        // value-to-variables map from the CURRENT domains on each pass
        const valueToVars = new Map<T, Set<string>>();
        for (const v of variables) {
          for (const val of domains.get(v)!) {
            if (!valueToVars.has(val)) valueToVars.set(val, new Set());
            valueToVars.get(val)!.add(v);
          }
        }

        for (const [val, vars] of valueToVars) {
          if (vars.size === 1) {
            const onlyVar = [...vars][0];
            const domain = domains.get(onlyVar)!;
            if (domain.size > 1) {
              // This variable must take this value
              domain.clear();
              domain.add(val);
              changed = true;
            }
          }
        }
      }

      return true;
    }
  };
}

// Note: Full GAC for AllDifferent (Régin's algorithm) uses bipartite matching
// (e.g. Hopcroft-Karp) plus Hall-set reasoning on the residual graph.
// The above catches only easy cases.
```

The power of modern constraint programming comes largely from specialized propagators for common constraint types. When you use AllDifferent instead of n(n-1)/2 binary ≠ constraints, you get much stronger propagation. Always prefer global constraints when available: they encode domain knowledge that generic arc consistency can't exploit.
Arc consistency considers pairs of variables. But what about triples? What about considering the entire problem at once? There's a hierarchy of consistency levels, each more powerful (and more expensive) than the last.
Path Consistency (PC):
A CSP is path consistent if, for every pair of values (x, y) that satisfies the constraint C(X, Y), and for every third variable Z, there exists a value z ∈ D(Z) such that (x, z) satisfies C(X, Z) and (y, z) satisfies C(Y, Z).
Path consistency can detect failures that arc consistency misses—cases where two values are pairwise consistent but cannot be extended to any third variable.
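As a small illustration, the following sketch checks whether one pair of values is path consistent through a third variable, modeling binary constraints as plain predicates (an assumed shape for this example only, not the CSP interface used above):

```typescript
// Binary constraints as plain predicates; a missing constraint counts as satisfied.
type BinaryCheck<T> = (a: T, b: T) => boolean;

// Is the pair (xVal, yVal) supported by some value z of a third variable Z?
function pairSupportedThroughZ<T>(
  xVal: T,
  yVal: T,
  zDomain: Set<T>,
  xz: BinaryCheck<T> | undefined,   // constraint between X and Z, if any
  yz: BinaryCheck<T> | undefined    // constraint between Y and Z, if any
): boolean {
  for (const zVal of zDomain) {
    const okXZ = xz ? xz(xVal, zVal) : true;
    const okYZ = yz ? yz(yVal, zVal) : true;
    if (okXZ && okYZ) return true;  // found a witness z
  }
  return false;  // (xVal, yVal) cannot be extended through Z
}

// Enforcing path consistency removes such unsupported PAIRS from the
// constraint C(X, Y) itself (tightening the constraint), which is why
// it costs O(n³ × d³) and is rarely worth it in practice.
```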
k-Consistency:
Generalizing: a CSP is k-consistent if every consistent assignment to any k-1 variables can be extended to a consistent value for any k-th variable. (Arc consistency is 2-consistency; path consistency is 3-consistency.)
Strong k-Consistency:
A CSP is strongly k-consistent if it's j-consistent for all j ≤ k. This is strictly stronger than just k-consistency.
| Level | Checks | Cost | Power |
|---|---|---|---|
| Node (1) | Single variables vs unary constraints | O(n × d) | Minimal |
| Arc (2) | Pairs vs binary constraints | O(c × d²) | Essential |
| Path (3) | Triples | O(n³ × d³) | Usually overkill |
| k-consistency | k-tuples | O(n^k × d^k) | Exponential for large k |
| Strong n | Complete problem | Exponential | Solves without search |
When to Use Higher Consistency:
The Practical Sweet Spot:
For most problems, the combination of arc consistency maintained during search (MAC) with good variable ordering (such as MRV) and value ordering provides the best performance. Going to higher consistency levels increases per-node cost more than it reduces search for typical problems.
Exception: Tree-Structured CSPs
If the constraint graph is a tree (no cycles), then arc consistency is sufficient to solve the problem in polynomial time! After running AC, we can construct a solution by assigning variables in a topological order—each variable will have at least one consistent value remaining.
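Here is a minimal sketch of that construction, assuming arc consistency has already been enforced and the tree is supplied as parent links with predicate constraints (assumed shapes, not the CSP interface used above):

```typescript
// Assumes ac3 (full arc consistency) has already run on a tree-structured CSP,
// so every remaining value has a support in each neighboring domain.
function solveTreeCsp<T>(
  orderedVars: string[],                 // topological order: parents before children
  parent: Map<string, string>,           // the root has no entry
  domains: Map<string, Set<T>>,          // already made arc consistent
  check: (p: string, pVal: T, c: string, cVal: T) => boolean  // parent-child constraint
): Map<string, T> | null {
  const assignment = new Map<string, T>();

  for (const v of orderedVars) {
    const p = parent.get(v);
    let chosen: T | undefined;

    for (const val of domains.get(v)!) {
      // Root: any remaining value works. Child: must be compatible with the parent's value.
      if (p === undefined || check(p, assignment.get(p)!, v, val)) {
        chosen = val;
        break;
      }
    }

    if (chosen === undefined) return null;  // cannot happen if AC holds on a tree
    assignment.set(v, chosen);
  }

  return assignment;
}
```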
There's a fundamental trade-off between propagation (consistency enforcement) and search. More propagation reduces the search space but costs more per node. Less propagation is cheaper per node but explores more nodes. The optimal balance depends on problem structure. Empirically, MAC (arc consistency at each node) is often the sweet spot for hard combinatorial problems.
Arc consistency represents a fundamental technique in constraint satisfaction—ensuring every value has support across all constraints. Let's consolidate the key insights from this page and the entire module:
Module 8 Complete: Constraint Satisfaction & Pruning
Across this module, we've built a comprehensive understanding of how to optimize backtracking through constraint-aware techniques: modeling problems as CSPs, pruning with constraint checks, propagating assignments with forward checking, and enforcing arc consistency before and during search.
These techniques transform backtracking from brute-force enumeration into intelligent, constraint-driven search. Problems that would take years to solve naively can often be solved in milliseconds with proper pruning and propagation.
You now understand the full spectrum of CSP techniques: from the theoretical framework through practical pruning strategies, forward checking, and arc consistency. These concepts form the foundation of modern constraint solving and appear throughout optimization, AI planning, scheduling, and beyond. You're equipped to recognize CSP-suitable problems and apply appropriate solving techniques.
What's Next:
The next module explores Common Backtracking Patterns—specific problem types like word search, combination sum, palindrome partitioning, and parentheses generation. You'll see how the CSP and pruning concepts from this module apply to concrete algorithm design challenges.