
Solution Approach

Solution Overview

There are three main approaches to solve this problem:

  1. Sorting Approach - straightforward, O(n log n) time
  2. Max Heap Approach - O(n log k) time, better when k is much smaller than n
  3. QuickSelect Approach - O(n) average time

Approach 1: Sorting Approach

Let's start by understanding the problem: we need to find the k closest points to the origin (0, 0).

Thinking Process: The most straightforward approach is to calculate the distance of each point from the origin, sort all points by their distances, and return the first k points.

The Euclidean distance from a point (x, y) to the origin (0, 0) is √(x² + y²). Since the square root is monotonically increasing, comparing the squared distances x² + y² produces the same ordering as comparing the true distances, so we can work with x² + y² and avoid floating-point calculations. For example, (1, 2) is closer than (3, 4) because 1² + 2² = 5 < 3² + 4² = 25.

Intuition: This approach directly follows the problem statement. We calculate distances, sort based on those distances, and then return the top k points.

Algorithm:

  1. Calculate the squared distance of each point from the origin: distance = x² + y².
  2. Sort all points by their squared distances in ascending order.
  3. Return the first k points from the sorted list.
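To make the steps concrete, here is a minimal Python sketch of this approach (the function name k_closest_sorted is our own choice, not part of the original problem statement):

from typing import List

def k_closest_sorted(points: List[List[int]], k: int) -> List[List[int]]:
    # Sort by squared distance from the origin; the square root is
    # unnecessary because x**2 + y**2 preserves the distance ordering.
    return sorted(points, key=lambda p: p[0] ** 2 + p[1] ** 2)[:k]

For example, k_closest_sorted([[1, 3], [-2, 2]], 1) returns [[-2, 2]], since (-2, 2) has squared distance 8 versus 10 for (1, 3).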

Time Complexity:

O(n log n)

Where n is the number of points. Calculating distances takes O(n) time, and sorting takes O(n log n) time. The overall time complexity is dominated by the sorting step, which is O(n log n).

Space Complexity:

O(n)

We need O(n) space to store the distances and the sorted list of points.

Approach 2: Max Heap Approach

We can optimize the previous approach by using a max heap of size k to find the k closest points.

Thinking Process: Instead of sorting all points, we can use a max heap of size k to keep track of the k closest points. We iterate through all points and maintain a max heap of size k, where points are ordered by their distances from the origin.

For each point, we calculate its distance from the origin and add it to the heap. If the heap size exceeds k, we remove the point with the largest distance. After processing all points, the heap contains the k closest points to the origin.

Intuition: A max heap of size k allows us to efficiently find the k smallest elements in a collection. By maintaining a heap of size k, we avoid sorting all points, which is more efficient when k is much smaller than the number of points.

Algorithm:

  1. Create a max heap of size k, where points are ordered by their squared distances from the origin.
  2. Iterate through all points:
     a. Calculate the squared distance of the point from the origin: distance = x² + y².
     b. Add the point to the heap with its squared distance as the key.
     c. If the heap size exceeds k, remove the point with the largest squared distance.
  3. Extract all points from the heap to get the k closest points to the origin.
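Below is a minimal Python sketch of this approach using the standard heapq module. Since heapq implements a min heap, we store negated distances to simulate the max heap described above; the function name is our own choice:

import heapq
from typing import List

def k_closest_heap(points: List[List[int]], k: int) -> List[List[int]]:
    heap = []  # holds (-squared_distance, x, y); the farthest kept point is at heap[0]
    for x, y in points:
        dist = x * x + y * y
        if len(heap) < k:
            heapq.heappush(heap, (-dist, x, y))
        elif -dist > heap[0][0]:
            # Current point is closer than the farthest point kept so far:
            # pop the farthest and push this one in a single operation.
            heapq.heapreplace(heap, (-dist, x, y))
    # The heap now holds the k closest points (order not guaranteed).
    return [[x, y] for _, x, y in heap]

Note that heap[0] always holds the farthest of the k points kept so far, so each new point is compared against it in O(1) before paying for any O(log k) heap operation.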

Time Complexity:

O(n log k)

Where n is the number of points and k is the number of closest points to return. Calculating distances takes O(n) time. Adding each point to the heap takes O(log k) time, and we do this for n points, resulting in O(n log k) time. Extracting k points from the heap takes O(k log k) time. The overall time complexity is O(n log k + k log k), which simplifies to O(n log k) since k ≤ n.

Space Complexity:

O(k)

We need O(k) space to store the heap of size k.

Approach 3: QuickSelect Approach

We can further optimize the solution using the QuickSelect algorithm to find the k closest points in O(n) average time.

Thinking Process: The QuickSelect algorithm is a selection algorithm to find the kth smallest element in an unordered list. We can use it to partition the points array such that the first k elements are the k closest points to the origin.

We compute the squared distance of each point from the origin, then use QuickSelect to partition the array around the kth smallest squared distance so that the first k elements are the k closest points. Finally, we return those first k points. (Returning all points with distance less than or equal to the kth smallest could return more than k points when there are ties, so partitioning is the safer formulation.)

Intuition: The QuickSelect algorithm is more efficient than sorting when we only need to find the kth smallest element. It has an average time complexity of O(n), which is better than the O(n log n) time complexity of sorting.

Algorithm:

  1. Calculate the squared distance of each point from the origin: distance = x² + y².
  2. Use QuickSelect partitioning to rearrange the array so that the k points with the smallest squared distances occupy the first k positions.
  3. Return the first k points of the partitioned array.
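Here is one possible Python sketch of this approach. The Lomuto partition with a random pivot and the function names are our own choices, not prescribed by the text:

import random
from typing import List

def k_closest_quickselect(points: List[List[int]], k: int) -> List[List[int]]:
    def dist(i: int) -> int:
        return points[i][0] ** 2 + points[i][1] ** 2

    def partition(lo: int, hi: int) -> int:
        # Lomuto partition with a random pivot to make the
        # O(n^2) worst case unlikely in practice.
        p = random.randint(lo, hi)
        points[p], points[hi] = points[hi], points[p]
        pivot = dist(hi)
        store = lo
        for i in range(lo, hi):
            if dist(i) < pivot:
                points[store], points[i] = points[i], points[store]
                store += 1
        points[store], points[hi] = points[hi], points[store]
        return store  # pivot's final index; everything left of it is closer

    lo, hi = 0, len(points) - 1
    while lo < hi:
        mid = partition(lo, hi)
        if mid == k:
            break  # first k slots now hold the k closest points
        elif mid < k:
            lo = mid + 1
        else:
            hi = mid - 1
    return points[:k]

Note that this version rearranges the input list in place; copy points first if the original order must be preserved.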

Time Complexity:

O(n)

Where n is the number of points. The QuickSelect algorithm has an average time complexity of O(n), although its worst-case time complexity is O(n²). Calculating distances takes O(n) time. The overall average time complexity is O(n).

Space Complexity:

O(n)

We need O(n) space to store the distances and the partitioned array of points.

Pseudocode

solution.pseudo
function kClosest(points, k):
    // Calculate squared distances
    distances = []
    for each point in points:
        distance = point[0]^2 + point[1]^2
        distances.append((distance, point))

    // Sort by distance
    sort distances by the first element (distance)

    // Return k closest points
    result = []
    for i from 0 to k-1:
        result.append(distances[i][1])
    return result