Imagine standing in a vast open field with scattered cell towers. At any location, your phone connects to the nearest tower—the one whose signal arrives first. Now imagine drawing invisible boundaries across the entire field, where each boundary marks the exact points equidistant from two adjacent towers. These boundaries naturally partition the entire plane into regions, each "owned" by exactly one tower.
This simple scenario describes one of the most elegant geometric structures in mathematics: the Voronoi diagram. Named after Russian mathematician Georgy Voronoi (1908), this tessellation appears everywhere—from the patterns on giraffe skin to the structure of foam bubbles, from cellular network coverage to crystalline grain boundaries.
For machine learning, Voronoi diagrams are far more than mathematical curiosity. They provide the exact geometric characterization of 1-NN decision boundaries. Understanding Voronoi tessellations unlocks deep intuition about how KNN partitions feature space, why decision boundaries have their particular shapes, and how the geometry fundamentally determines classification behavior.
By the end of this page, you will understand the formal definition of Voronoi diagrams, their fundamental properties and construction, the duality relationship with Delaunay triangulations, and why this geometry is essential for understanding nearest neighbor classification.
Let us establish the precise mathematical framework. A Voronoi diagram is a partition of space based on distance to a discrete set of points.
Definition (Voronoi Diagram): Given a set of n distinct points P = {p₁, p₂, ..., pₙ} in ℝᵈ (called sites or generators), the Voronoi cell (or Voronoi region) associated with site pᵢ is defined as:
$$V(p_i) = \{x \in \mathbb{R}^d : \|x - p_i\| \leq \|x - p_j\| \text{ for all } j \neq i\}$$
In words, V(pᵢ) consists of all points in space that are at least as close to pᵢ as to any other site. The collection of all Voronoi cells forms the Voronoi diagram of P, denoted Vor(P).
This definition immediately yields a fundamental insight: every point in space belongs to exactly one Voronoi cell (with ties at boundaries). The Voronoi diagram thus provides a complete, non-overlapping partition of the entire space.
The Voronoi diagram depends critically on the choice of distance metric. While we focus on Euclidean distance (L₂ norm), Voronoi diagrams can be defined for any metric. Manhattan distance (L₁) produces cells with 45° boundaries; Chebyshev distance (L∞) produces rectangular cells. The choice of metric fundamentally reshapes the entire tessellation.
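To make this concrete, here is a small sketch (the sites and query point are illustrative choices, not from the text) showing that the very same query can have a different nearest site, and thus lie in a different Voronoi cell, depending on the norm:

```python
import numpy as np

def nearest_site(q, sites, p):
    """Index of the site nearest to q under the L_p norm."""
    dists = np.linalg.norm(sites - q, ord=p, axis=1)
    return int(np.argmin(dists))

sites = np.array([[0.0, 0.0], [2.1, 2.0]])
q = np.array([2.5, -1.0])

# Under L1 the second site wins; under L2 and L-infinity the first does,
# so q belongs to different Voronoi cells depending on the metric.
print(nearest_site(q, sites, 1))       # Manhattan -> 1
print(nearest_site(q, sites, 2))       # Euclidean -> 0
print(nearest_site(q, sites, np.inf))  # Chebyshev -> 0
```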
Key Terminology:
Site (generator): one of the points pᵢ that defines the diagram.
Voronoi cell (region): the set V(pᵢ) of points at least as close to pᵢ as to any other site.
Voronoi edge: a boundary shared by exactly two cells, lying on the perpendicular bisector of their sites.
Voronoi vertex: a point where three or more cells meet.
The geometric structure arises from perpendicular bisectors. Consider just two sites p₁ and p₂. The set of points equidistant from both forms a hyperplane (in 2D, a line) that perpendicularly bisects the line segment connecting p₁ and p₂. This perpendicular bisector divides space into two half-spaces.
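A quick numerical check of this fact (the two sites are arbitrary example values): the bisector can be parametrized by the midpoint plus any multiple of the direction perpendicular to p₂ − p₁, and every point on it is equidistant from both sites.

```python
import numpy as np

# Two arbitrary example sites
p1 = np.array([1.0, 0.0])
p2 = np.array([3.0, 4.0])

midpoint = (p1 + p2) / 2
d = p2 - p1
perp = np.array([-d[1], d[0]])  # d rotated by 90 degrees

# Points on the perpendicular bisector: midpoint + t * perp
diffs = []
for t in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    x = midpoint + t * perp
    diffs.append(abs(np.linalg.norm(x - p1) - np.linalg.norm(x - p2)))
print(max(diffs))  # numerically zero: every such point is equidistant
```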
| Dimension | Cell Shape | Edge/Face | Vertex |
|---|---|---|---|
| 1D (line) | Intervals on the line | Points (bisectors) | N/A |
| 2D (plane) | Convex polygons | Line segments (perpendicular bisectors) | Points where ≥3 edges meet |
| 3D (space) | Convex polyhedra | Planar faces | Points where ≥4 faces meet |
| d-dimensional | Convex polytopes | (d-1)-dimensional faces | Points where ≥(d+1) faces meet |
Understanding how Voronoi diagrams are constructed reveals their fundamental geometric nature. We examine three perspectives: the half-plane intersection method, Fortune's sweep line algorithm, and the Delaunay triangulation approach.
Method 1: Half-Plane Intersection
The most intuitive construction method directly implements the definition. For each pair of sites pᵢ, pⱼ, let H_ij = {x ∈ ℝᵈ : ‖x − pᵢ‖ ≤ ‖x − pⱼ‖} denote the closed half-space of points at least as close to pᵢ as to pⱼ. Then the cell of pᵢ is the intersection of all such half-spaces:

$$V(p_i) = \bigcap_{j \neq i} H_{ij}$$
Since the intersection of half-planes is always convex, this immediately proves that every Voronoi cell is convex. This is a fundamental property with important implications for classification boundaries.
```python
import numpy as np
from scipy.spatial import HalfspaceIntersection

def construct_voronoi_cell_halfplane(sites: np.ndarray, site_idx: int,
                                     bounds: tuple = (-10, 10)) -> np.ndarray:
    """
    Construct a Voronoi cell using half-plane intersection.

    This method directly implements the definition:
        V(p_i) = intersection of all half-planes H_ij
        where H_ij = {x : ||x - p_i|| <= ||x - p_j||}

    Args:
        sites: (n, d) array of site coordinates
        site_idx: Index of site whose cell to construct
        bounds: Bounding box limits for finite representation

    Returns:
        vertices: Array of cell vertex coordinates, or None on failure
    """
    n_sites, dim = sites.shape
    p_i = sites[site_idx]
    halfspaces = []

    # For each other site, create a half-plane inequality
    for j in range(n_sites):
        if j == site_idx:
            continue
        p_j = sites[j]
        # The half-plane ||x - p_i|| <= ||x - p_j|| expands to:
        #   x.x - 2 p_i.x + p_i.p_i <= x.x - 2 p_j.x + p_j.p_j
        # Simplifying: 2(p_j - p_i).x <= ||p_j||^2 - ||p_i||^2
        # This is a.x <= b where a = 2(p_j - p_i), b = ||p_j||^2 - ||p_i||^2
        a = 2 * (p_j - p_i)                       # Normal vector
        b = np.dot(p_j, p_j) - np.dot(p_i, p_i)  # Offset
        # scipy.spatial.HalfspaceIntersection expects rows [a, -b],
        # representing a.x + (-b) <= 0, i.e. a.x <= b
        halfspaces.append(np.append(a, -b))

    # Add bounding box constraints so every cell is bounded
    for d in range(dim):
        # x_d >= bounds[0]  =>  -x_d + bounds[0] <= 0
        lower = np.zeros(dim + 1)
        lower[d] = -1
        lower[-1] = bounds[0]
        halfspaces.append(lower)
        # x_d <= bounds[1]  =>  x_d - bounds[1] <= 0
        upper = np.zeros(dim + 1)
        upper[d] = 1
        upper[-1] = -bounds[1]
        halfspaces.append(upper)

    halfspaces = np.array(halfspaces)

    # A strictly feasible interior point is required; the site itself works
    interior_point = p_i

    # Compute half-space intersection
    try:
        hs = HalfspaceIntersection(halfspaces, interior_point)
        vertices = hs.intersections
        # Order vertices counterclockwise for polygon plotting (2D case)
        if dim == 2:
            center = np.mean(vertices, axis=0)
            angles = np.arctan2(vertices[:, 1] - center[1],
                                vertices[:, 0] - center[0])
            vertices = vertices[np.argsort(angles)]
        return vertices
    except Exception as e:
        print(f"Half-space intersection failed: {e}")
        return None

# Demonstration
if __name__ == "__main__":
    # Simple example: 4 sites forming a square
    sites = np.array([
        [0.0, 0.0],
        [2.0, 0.0],
        [0.0, 2.0],
        [2.0, 2.0]
    ])
    print("Voronoi cell vertices for each site:")
    for i in range(len(sites)):
        vertices = construct_voronoi_cell_halfplane(sites, i, bounds=(-1, 3))
        print(f"Site {i} at {sites[i]}: {len(vertices)} vertices")
```

Method 2: Fortune's Sweep Line Algorithm
The naive half-plane intersection has O(n²) complexity per cell, giving O(n³) total. Fortune's algorithm (1987) achieves the optimal O(n log n) complexity using a sweep line approach.
The key insight is to process sites in sorted order (by one coordinate) while maintaining the beach line—a piecewise parabolic curve marking the boundary of the region whose nearest site is already known. As the sweep line advances, two kinds of events occur: site events, where the sweep line reaches a new site and a new arc is inserted into the beach line, and circle events, where an arc shrinks to a point and a Voronoi vertex is created.
The algorithm maintains: (1) the beach line as a balanced binary tree, (2) a priority queue of upcoming events. Each site generates O(1) events on average, giving O(n log n) total complexity.
In practice, you'll rarely implement Voronoi construction from scratch. Libraries like scipy.spatial.Voronoi, CGAL, or Qhull provide robust, efficient implementations. Understanding the construction process, however, is essential for comprehending edge cases, numerical issues, and why certain configurations produce unexpected results.
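For example, `scipy.spatial.Voronoi` (a thin wrapper around Qhull) exposes the full combinatorial structure directly; the sites below are an illustrative configuration, four corners of a square plus its center:

```python
import numpy as np
from scipy.spatial import Voronoi

# Four corners of a square plus its center
sites = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0], [1.0, 1.0]])
vor = Voronoi(sites)

# vor.vertices: coordinates of the Voronoi vertices
# vor.regions: each cell as a list of vertex indices (-1 marks an unbounded cell)
# vor.point_region: maps each site to its region index
# vor.ridge_points: pairs of sites whose cells share an edge
center_region = vor.regions[vor.point_region[4]]
print(len(center_region), -1 in center_region)  # 4 False: bounded, 4 vertices
```

The four corner sites have unbounded cells, while the center site's cell is the bounded square with vertices at the four edge midpoints of the configuration.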
Voronoi diagrams possess remarkable structural properties that directly impact their behavior in classification. Understanding these properties builds intuition for KNN decision surfaces.
Property 1: Convexity of Cells
Every Voronoi cell is a convex polytope (in 2D, a convex polygon). This follows directly from the half-plane intersection construction: the intersection of convex sets (half-planes) is always convex.
Implication: The decision region contributed by a single training point is always convex, so a class represented by one training point can only occupy a convex region. However, the union of cells belonging to multiple same-class points can form arbitrarily complex, non-convex shapes.
Property 2: Adjacency and Neighbors
Two Voronoi cells V(pᵢ) and V(pⱼ) are adjacent (share an edge) if and only if there exists a point equidistant from pᵢ and pⱼ that is closer to these two sites than to any other. This shared edge lies on the perpendicular bisector of pᵢpⱼ.
Implication: In KNN, class boundaries occur where Voronoi cells of different classes meet. The "smoothness" of boundaries depends on how cells of the same class cluster together.
Property 3: Euler's Formula Constraint
For a connected planar graph, Euler's formula gives V - E + F = 2 (vertices, edges, faces). Applied to a Voronoi diagram of n ≥ 3 sites in general position (no four sites cocircular), this yields at most 2n - 5 Voronoi vertices and at most 3n - 6 Voronoi edges.
This linear relationship means the Voronoi diagram's complexity is O(n), regardless of how the sites are distributed—a crucial property for algorithmic efficiency.
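These bounds are easy to sanity-check empirically with `scipy.spatial.Voronoi` on random sites (random points are in general position with probability 1):

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
n = 200
vor = Voronoi(rng.random((n, 2)))

n_vertices = len(vor.vertices)        # Voronoi vertices
n_edges = len(vor.ridge_vertices)     # Voronoi edges, incl. unbounded rays

print(n_vertices <= 2 * n - 5)  # True
print(n_edges <= 3 * n - 6)     # True
```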
Property 4: Degeneracy and General Position
Sites are in general position if no d+2 sites are cospheric (in 2D: no four points cocircular). Degeneracies cause Voronoi vertices where more than three cells meet (in 2D), zero-length edges, and non-unique Delaunay triangulations.
Practical implementations must handle degeneracies through symbolic perturbation or careful numerical methods.
| Dimension d | Vertices | Edges/Faces | Cells | Total Complexity |
|---|---|---|---|---|
| 2 | O(n) | O(n) | n | Θ(n) |
| 3 | O(n²) | O(n²) | n | Θ(n²) |
| 4 | O(n²) | O(n²) | n | Θ(n²) |
| d (general) | O(n^⌈d/2⌉) | O(n^⌈d/2⌉) | n | Θ(n^⌈d/2⌉) |
One of the most beautiful relationships in computational geometry is the duality between Voronoi diagrams and Delaunay triangulations. Understanding this duality provides an alternative perspective on nearest-neighbor structures.
Definition (Delaunay Triangulation): The Delaunay triangulation DT(P) of a point set P connects two sites pᵢ and pⱼ with an edge if and only if their Voronoi cells share an edge. Equivalently, pᵢpⱼ is a Delaunay edge if there exists a circle passing through pᵢ and pⱼ with no other sites in its interior.
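The empty-circle characterization can be verified numerically: for every Delaunay triangle, no other site lies strictly inside its circumcircle. A small sketch using `scipy.spatial.Delaunay` on random illustrative points (the `circumcircle` helper is ours, solving the two equidistance equations as a linear system):

```python
import numpy as np
from scipy.spatial import Delaunay

def circumcircle(a, b, c):
    """Circumcenter and radius of triangle abc (2D), via a linear solve."""
    # Solve 2(b-a).x = ||b||^2 - ||a||^2 and 2(c-a).x = ||c||^2 - ||a||^2
    A = 2 * np.array([b - a, c - a])
    rhs = np.array([b @ b - a @ a, c @ c - a @ a])
    center = np.linalg.solve(A, rhs)
    return center, np.linalg.norm(center - a)

rng = np.random.default_rng(1)
pts = rng.random((40, 2))
tri = Delaunay(pts)

# Empty-circle property: no site lies strictly inside any circumcircle
violations = 0
for simplex in tri.simplices:
    center, radius = circumcircle(*pts[simplex])
    others = np.delete(np.arange(len(pts)), simplex)
    if np.any(np.linalg.norm(pts[others] - center, axis=1) < radius - 1e-9):
        violations += 1
print(violations)  # 0
```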
The duality manifests precisely:
| Voronoi Diagram | Delaunay Triangulation |
|---|---|
| Voronoi vertex (degree k) | Delaunay face (k vertices) |
| Voronoi edge | Perpendicular Delaunay edge |
| Voronoi cell | Delaunay vertex (site) |
In 2D, the Delaunay triangulation is unique (for points in general position) and maximizes the minimum angle across all triangulations—it produces the "most equilateral" triangles possible.
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import Voronoi, Delaunay

def visualize_voronoi_delaunay_duality(sites: np.ndarray):
    """
    Visualize the duality between Voronoi diagram and Delaunay triangulation.

    Key observations:
    1. Voronoi edges are perpendicular to corresponding Delaunay edges
    2. Voronoi vertices are circumcenters of Delaunay triangles
    3. Each Voronoi cell corresponds to one site (Delaunay vertex)
    """
    fig, axes = plt.subplots(1, 3, figsize=(15, 5))

    # Compute both structures
    vor = Voronoi(sites)
    tri = Delaunay(sites)

    # --- Plot 1: Voronoi Diagram ---
    ax1 = axes[0]
    ax1.set_title("Voronoi Diagram", fontsize=12, fontweight='bold')

    # Plot Voronoi regions with fill
    for region in vor.regions:
        if not region or -1 in region:  # Skip unbounded regions
            continue
        polygon = vor.vertices[region]
        ax1.fill(polygon[:, 0], polygon[:, 1], alpha=0.3,
                 edgecolor='navy', linewidth=1.5)

    # Plot sites
    ax1.scatter(sites[:, 0], sites[:, 1], c='red', s=80, zorder=10,
                label='Sites (generators)')
    ax1.legend(loc='upper right')
    ax1.set_aspect('equal')
    ax1.set_xlim(-0.5, 3.5)
    ax1.set_ylim(-0.5, 3.5)

    # --- Plot 2: Delaunay Triangulation ---
    ax2 = axes[1]
    ax2.set_title("Delaunay Triangulation", fontsize=12, fontweight='bold')
    ax2.triplot(sites[:, 0], sites[:, 1], tri.simplices,
                color='navy', linewidth=1.5)
    ax2.scatter(sites[:, 0], sites[:, 1], c='red', s=80, zorder=10,
                label='Sites (vertices)')

    # Mark circumcenters (= Voronoi vertices)
    circumcenters = []
    for simplex in tri.simplices:
        pts = sites[simplex]
        cc = compute_circumcenter(pts)
        circumcenters.append(cc)
    circumcenters = np.array(circumcenters)
    ax2.scatter(circumcenters[:, 0], circumcenters[:, 1], c='blue', s=40,
                marker='s', zorder=10,
                label='Circumcenters (Voronoi vertices)')
    ax2.legend(loc='upper right')
    ax2.set_aspect('equal')
    ax2.set_xlim(-0.5, 3.5)
    ax2.set_ylim(-0.5, 3.5)

    # --- Plot 3: Overlay ---
    ax3 = axes[2]
    ax3.set_title("Duality: Overlay", fontsize=12, fontweight='bold')

    # Voronoi edges (blue)
    for p1, p2 in vor.ridge_vertices:
        if p1 >= 0 and p2 >= 0:
            ax3.plot(vor.vertices[[p1, p2], 0], vor.vertices[[p1, p2], 1],
                     'b-', linewidth=2, alpha=0.7)

    # Delaunay edges (orange, dashed)
    for simplex in tri.simplices:
        for i in range(3):
            p1, p2 = simplex[i], simplex[(i + 1) % 3]
            ax3.plot(sites[[p1, p2], 0], sites[[p1, p2], 1],
                     'orange', linewidth=1.5, linestyle='--', alpha=0.8)

    ax3.scatter(sites[:, 0], sites[:, 1], c='red', s=80, zorder=10)
    ax3.scatter(vor.vertices[:, 0], vor.vertices[:, 1], c='blue', s=40,
                marker='s', zorder=10)

    # Annotate perpendicularity
    ax3.text(0.5, -0.3, "Voronoi edges ⟂ Delaunay edges",
             fontsize=10, ha='center', style='italic')
    ax3.set_aspect('equal')
    ax3.set_xlim(-0.5, 3.5)
    ax3.set_ylim(-0.5, 3.5)

    plt.tight_layout()
    return fig

def compute_circumcenter(triangle: np.ndarray) -> np.ndarray:
    """Compute the circumcenter of a triangle (2D)."""
    ax, ay = triangle[0]
    bx, by = triangle[1]
    cx, cy = triangle[2]
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-10:  # Degenerate (collinear) triangle
        return np.array([np.nan, np.nan])
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return np.array([ux, uy])

# Example usage
if __name__ == "__main__":
    sites = np.array([
        [0.5, 0.5], [2.5, 0.5], [1.5, 1.5], [0.5, 2.5], [2.5, 2.5]
    ])
    fig = visualize_voronoi_delaunay_duality(sites)
    plt.savefig("voronoi_delaunay_duality.png", dpi=150)
    plt.show()
```

Why Delaunay Duality Matters for KNN
The Delaunay triangulation provides an efficient structure for nearest neighbor queries:
Neighbor lists: The Delaunay graph connects each site to its "natural neighbors"—those whose Voronoi cells touch. For KNN with small k, neighbors often come from this natural neighbor set.
Walking algorithms: To find the nearest neighbor of a query point q, one can "walk" the Delaunay graph, repeatedly moving to an adjacent site that is closer to q. A structural property of the Delaunay triangulation guarantees that whenever the current site is not the nearest, one of its Delaunay neighbors is closer to q, so the greedy walk cannot get stuck and terminates at the correct answer.
Interpolation: Delaunay-based interpolation (natural neighbor interpolation) provides smooth value estimates that respect the Voronoi structure—useful for KNN regression.
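The walking idea above can be sketched in a few lines using `scipy.spatial.Delaunay`'s CSR-style adjacency (`vertex_neighbor_vertices`); the points and queries are random illustrative data, and we check the walk against brute-force search:

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(2)
pts = rng.random((100, 2))
tri = Delaunay(pts)
indptr, indices = tri.vertex_neighbor_vertices  # CSR-style adjacency lists

def walk_nearest(q, start=0):
    """Greedy walk: move to a Delaunay neighbor closer to q until stuck."""
    current = start
    while True:
        neighbors = indices[indptr[current]:indptr[current + 1]]
        dists = np.linalg.norm(pts[neighbors] - q, axis=1)
        best = neighbors[np.argmin(dists)]
        if np.linalg.norm(pts[best] - q) < np.linalg.norm(pts[current] - q):
            current = best
        else:
            return int(current)  # no neighbor is closer: current is nearest

# The walk agrees with brute-force search for every query
mismatches = 0
for q in rng.random((50, 2)):
    brute = int(np.argmin(np.linalg.norm(pts - q, axis=1)))
    if walk_nearest(q) != brute:
        mismatches += 1
print(mismatches)  # 0
```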
The perpendicularity relationship also explains why decision boundaries in 1-NN are piecewise linear: they follow perpendicular bisectors, which are straight lines (in Euclidean space).
While 2D Voronoi and Delaunay structures are efficient (O(n) complexity), higher dimensions face exponential blowup: O(n^⌈d/2⌉). In 10 dimensions with 1000 points, the structures could contain on the order of 1000⁵ = 10¹⁵ elements. This is why exact Voronoi-based approaches are impractical for high-dimensional KNN—we must turn to space-partitioning structures such as KD-trees or approximate methods such as locality-sensitive hashing (LSH).
We now arrive at the central insight connecting Voronoi geometry to nearest neighbor classification: the 1-NN decision boundary is exactly the Voronoi diagram of the training points.
Theorem (1-NN Voronoi Equivalence): Let D = {(x₁, y₁), ..., (xₙ, yₙ)} be a training set with features xᵢ ∈ ℝᵈ and labels yᵢ. The 1-NN classifier assigns label ŷ(q) to query point q as:
$$\hat{y}(q) = y_{\arg\min_i \|q - x_i\|}$$
The set of points receiving label y = c is precisely:
$$R_c = \bigcup_{i: y_i = c} V(x_i)$$
where V(xᵢ) is the Voronoi cell of training point xᵢ.
Proof: By definition of Voronoi cells, any query point q ∈ V(xᵢ) satisfies ‖q - xᵢ‖ ≤ ‖q - xⱼ‖ for all j. Thus xᵢ is the nearest neighbor, and q receives label yᵢ. ∎
This seemingly simple observation has profound implications:
```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import Voronoi
from matplotlib.patches import Polygon
from matplotlib.collections import PatchCollection

def visualize_1nn_voronoi_equivalence():
    """
    Demonstrate that 1-NN decision regions exactly match Voronoi cells.

    We show:
    1. Training points colored by class
    2. Voronoi cells colored by the class of their generating point
    3. 1-NN predictions on a grid match the Voronoi partition
    """
    np.random.seed(42)

    # Generate training data: two classes with some overlap
    n_per_class = 15

    # Class 0: Centered around (1, 1)
    X0 = np.random.randn(n_per_class, 2) * 0.8 + np.array([1, 1])
    y0 = np.zeros(n_per_class, dtype=int)

    # Class 1: Centered around (3, 3)
    X1 = np.random.randn(n_per_class, 2) * 0.8 + np.array([3, 3])
    y1 = np.ones(n_per_class, dtype=int)

    X = np.vstack([X0, X1])
    y = np.hstack([y0, y1])

    # Compute Voronoi diagram
    vor = Voronoi(X)

    # Create figure
    fig, axes = plt.subplots(1, 3, figsize=(15, 5))
    colors = ['#3498db', '#e74c3c']  # Blue for class 0, red for class 1

    # --- Left: Training points only ---
    ax1 = axes[0]
    ax1.set_title("Training Data", fontsize=12, fontweight='bold')
    for cls in [0, 1]:
        mask = y == cls
        ax1.scatter(X[mask, 0], X[mask, 1], c=colors[cls], s=100,
                    edgecolor='white', linewidth=1.5,
                    label=f'Class {cls}', zorder=10)
    ax1.legend()
    ax1.set_xlim(-1, 5)
    ax1.set_ylim(-1, 5)
    ax1.set_aspect('equal')

    # --- Middle: Voronoi cells colored by class ---
    ax2 = axes[1]
    ax2.set_title("Voronoi Tessellation (= 1-NN Regions)",
                  fontsize=12, fontweight='bold')

    # Plot colored Voronoi cells
    patches = []
    patch_colors = []
    for point_idx, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if not region or -1 in region:  # Skip unbounded regions
            continue
        polygon_vertices = vor.vertices[region]
        patches.append(Polygon(polygon_vertices))
        patch_colors.append(colors[y[point_idx]])

    pc = PatchCollection(patches, alpha=0.4, edgecolor='gray', linewidth=1)
    pc.set_facecolor(patch_colors)
    ax2.add_collection(pc)

    # Overlay training points
    for cls in [0, 1]:
        mask = y == cls
        ax2.scatter(X[mask, 0], X[mask, 1], c=colors[cls], s=100,
                    edgecolor='white', linewidth=1.5, zorder=10)
    ax2.set_xlim(-1, 5)
    ax2.set_ylim(-1, 5)
    ax2.set_aspect('equal')

    # --- Right: 1-NN predictions on grid ---
    ax3 = axes[2]
    ax3.set_title("1-NN Predictions (Grid)", fontsize=12, fontweight='bold')

    # Create prediction grid
    xx, yy = np.meshgrid(np.linspace(-1, 5, 200), np.linspace(-1, 5, 200))
    grid_points = np.c_[xx.ravel(), yy.ravel()]

    # Compute 1-NN predictions
    predictions = []
    for q in grid_points:
        distances = np.linalg.norm(X - q, axis=1)
        nearest_idx = np.argmin(distances)
        predictions.append(y[nearest_idx])
    predictions = np.array(predictions).reshape(xx.shape)

    # Plot decision regions
    ax3.contourf(xx, yy, predictions, levels=[-0.5, 0.5, 1.5],
                 colors=colors, alpha=0.4)
    ax3.contour(xx, yy, predictions, levels=[0.5],
                colors='black', linewidths=2)

    # Overlay training points
    for cls in [0, 1]:
        mask = y == cls
        ax3.scatter(X[mask, 0], X[mask, 1], c=colors[cls], s=100,
                    edgecolor='white', linewidth=1.5, zorder=10)
    ax3.set_xlim(-1, 5)
    ax3.set_ylim(-1, 5)
    ax3.set_aspect('equal')

    plt.tight_layout()

    # Annotate the equivalence
    fig.text(0.5, 0.02,
             "Observation: Middle and Right plots show identical decision regions",
             ha='center', fontsize=11, style='italic')
    return fig

if __name__ == "__main__":
    fig = visualize_1nn_voronoi_equivalence()
    plt.savefig("1nn_voronoi_equivalence.png", dpi=150, bbox_inches='tight')
    plt.show()
```

The Voronoi diagram's structure is intimately tied to the choice of distance metric. While we've focused on Euclidean distance, understanding how other metrics reshape the tessellation provides insight into how KNN behaves with different distance functions.
Euclidean Distance (L₂ norm): $$d(x, y) = \sqrt{\sum_i (x_i - y_i)^2}$$
Manhattan Distance (L₁ norm): $$d(x, y) = \sum_i |x_i - y_i|$$
Minkowski Distance (Lₚ norm):
$$d_p(x, y) = \left(\sum_i |x_i - y_i|^p\right)^{1/p}$$
As p varies from 1 to ∞, the unit ball morphs continuously from the L₁ diamond, through the L₂ circle at p = 2, toward the L∞ square. For 1 < p < 2, the isodistance curves interpolate between diamonds and circles ("squircles"); for p > 2, they interpolate between circles and squares.
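A quick numerical illustration with the difference vector (3, 4) (arbitrary example values), showing the Lₚ distance decreasing monotonically toward the Chebyshev limit as p grows:

```python
import numpy as np

def minkowski(x, y, p):
    """Minkowski (L_p) distance between vectors x and y."""
    return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

x = np.array([0.0, 0.0])
y = np.array([3.0, 4.0])

print(minkowski(x, y, 1))   # 7.0  (Manhattan: 3 + 4)
print(minkowski(x, y, 2))   # 5.0  (Euclidean: sqrt(9 + 16))
print(minkowski(x, y, 64))  # ~4.0 (approaches Chebyshev: max(3, 4))
```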
Implications for KNN:
Boundary shape: The distance metric determines boundary curvature. L₂ gives smooth curves; L₁ and L∞ give piecewise linear at restricted angles.
Feature importance: Different metrics weight features differently. L∞ focuses on the most discrepant feature; L₂ balances all features.
Robustness: L₁ is more robust to outliers in individual features; a single extreme value dominates L∞.
Computational cost: L₁ and L∞ can be faster (no square root); L₂ supports more efficient data structures (KD-trees).
Choose your distance metric based on the problem semantics: L₁ for sparse data or when outliers exist; L₂ for "natural" continuous features; Mahalanobis when features are correlated; Cosine similarity for text or high-dimensional normalized data. The Voronoi structure—and thus KNN behavior—follows directly from this choice.
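As a minimal sketch of how the metric plugs into a 1-NN prediction (the training points, labels, and query are illustrative assumptions), `scipy.spatial.distance.cdist` accepts the metric by name:

```python
import numpy as np
from scipy.spatial.distance import cdist

X = np.array([[0.0, 0.0], [2.0, 2.1]])  # two training points
y = np.array([0, 1])                    # their labels
q = np.array([[-1.0, 2.5]])             # query as a (1, 2) row

def predict_1nn(q, X, y, metric):
    """Label of the training point nearest to q under the given metric."""
    return int(y[np.argmin(cdist(q, X, metric=metric))])

# The same query receives a different label under L1 vs L2:
print(predict_1nn(q, X, y, 'cityblock'))  # 1
print(predict_1nn(q, X, y, 'euclidean'))  # 0
```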
While our focus is on understanding KNN decision surfaces, Voronoi diagrams have remarkably broad applications. Understanding these applications reinforces the geometric intuition and reveals unexpected connections.
Spatial Optimization:
Facility Location: Where should we place hospitals, fire stations, or warehouses to minimize maximum distance to any customer? The Voronoi diagram provides the partition of who-serves-whom, and the 1-center or k-center problem minimizes the maximum cell radius.
Coverage Planning: Cellular network tower placement naturally creates Voronoi-like coverage areas. The diagram reveals coverage gaps and overlap regions.
Computational Geometry:
Nearest Neighbor Preprocessing: Constructing the Voronoi diagram allows O(log n) nearest neighbor queries via point location. For static datasets with many queries, this beats linear scan.
Convex Hull: The unbounded Voronoi cells exactly correspond to convex hull points. This provides an alternative convex hull algorithm.
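This correspondence is easy to verify with scipy on random illustrative points: the sites whose Voronoi region lists the sentinel index -1 (unbounded) are exactly the convex hull vertices.

```python
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

rng = np.random.default_rng(3)
pts = rng.random((30, 2))

vor = Voronoi(pts)
hull = ConvexHull(pts)

# Sites whose Voronoi cell is unbounded (vertex list contains -1)
unbounded = {i for i, r in enumerate(vor.point_region)
             if -1 in vor.regions[r]}
print(unbounded == set(map(int, hull.vertices)))  # True
```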
| Domain | Application | Voronoi Interpretation |
|---|---|---|
| Biology | Cell territory modeling | Each cell occupies its Voronoi region around its nucleus |
| Astronomy | Galaxy clustering | Void regions between galaxy clusters form Voronoi cells |
| Geography | Rainfall interpolation (Thiessen polygons) | Each weather station represents its Voronoi region |
| Computer Graphics | Mesh generation | Voronoi/Delaunay provide well-shaped triangulations |
| Materials Science | Grain boundaries in polycrystals | Crystal grains grow until meeting neighbors at Voronoi boundaries |
| Robotics | Motion planning | Voronoi skeleton provides maximum-clearance paths |
| Epidemiology | Disease spread modeling | John Snow's cholera map (1854) implicitly used Voronoi analysis |
Natural Neighbor Interpolation:
Voronoi diagrams enable a sophisticated interpolation technique called natural neighbor interpolation (Sibson, 1981). For a query point q:
$$f(q) = \sum_i w_i f(x_i), \quad w_i = \frac{|V(q) \cap V_i^{\text{old}}|}{|V(q)|}$$
This provides interpolation that is C¹ continuous everywhere except at the data sites themselves, and it naturally respects the data distribution: dense regions contribute precise local estimates, while sparse regions contribute broader ones. This directly connects to weighted KNN regression.
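The "stolen area" weights can be estimated without computing any polygons, via a Monte Carlo sketch (the sites, values, query, and sample size are illustrative assumptions): sample random points, keep those that would fall in q's new cell, and record which old cell each came from.

```python
import numpy as np

rng = np.random.default_rng(4)
sites = rng.random((12, 2))
values = rng.random(12)        # f(x_i) observed at each site
q = np.array([0.5, 0.5])

# Uniform samples over the bounding box [0, 1]^2
samples = rng.random((100_000, 2))

d_sites = np.linalg.norm(samples[:, None, :] - sites[None, :, :], axis=2)
old_owner = np.argmin(d_sites, axis=1)  # nearest site before inserting q
in_new_cell = np.linalg.norm(samples - q, axis=1) < d_sites.min(axis=1)

# Sibson weight of site i ~ fraction of q's new cell stolen from V(x_i)
counts = np.bincount(old_owner[in_new_cell], minlength=len(sites))
weights = counts / counts.sum()

f_q = weights @ values  # natural neighbor estimate: a convex combination
print(abs(weights.sum() - 1.0) < 1e-9, values.min() <= f_q <= values.max())
```

Because the weights are nonnegative and sum to one, the estimate is always a convex combination of the neighboring values, mirroring weighted KNN regression.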
Medial Axis and Skeleton:
The Voronoi diagram of boundary points of a shape approximates its medial axis—the set of points with multiple equidistant nearest boundary points. The medial axis provides a shape skeleton useful for shape matching, character recognition, and robotic path planning.
We have established the complete geometric foundation for understanding nearest neighbor classification. The Voronoi diagram is not merely a visualization tool—it is the precise mathematical structure underlying 1-NN decision boundaries.
What's Next:
With the foundational understanding of Voronoi tessellations in place, we turn to a critical question: what happens when K > 1? The next page explores how decision boundaries transform as we move from 1-NN to k-NN, revealing the smoothing effect of majority voting and the geometric implications of considering multiple neighbors.
You now understand Voronoi tessellations as the geometric foundation of nearest neighbor classification. These concepts—convex cells, piecewise linear boundaries, Delaunay duality—will resurface throughout machine learning whenever local structure and proximity-based decisions matter.