We've established that the Fractional Knapsack problem allows partial item selection and that this capability fundamentally changes the optimization landscape. Now we formalize the solution: the greedy algorithm that sorts by value-to-weight ratio and selects from highest to lowest density.
This algorithm is not merely a heuristic that works 'pretty well' — it is provably optimal. Every Fractional Knapsack instance is solved to perfect optimality by this simple greedy procedure. Understanding why requires examining the algorithm closely and proving its correctness rigorously.
By the end of this page, you will: (1) fully understand the density-greedy algorithm, (2) be able to implement it efficiently, (3) know its time and space complexity, and (4) understand the exchange argument proof that establishes its optimality.
Let us formally specify the greedy algorithm for Fractional Knapsack.
Algorithm: FRACTIONAL-KNAPSACK-GREEDY
FRACTIONAL-KNAPSACK-GREEDY(items[], n, W)
──────────────────────────────────────────
Input:
items[] - array of n items, each with (weight, value)
n - number of items
W - knapsack capacity
Output:
fractions[] - array of n values in [0,1], fraction of each item to take
totalValue - maximum achievable value
1. FOR i = 1 TO n DO
       density[i] ← items[i].value / items[i].weight
   END FOR

2. SORT items by density in DESCENDING order
   (maintain correspondence between items and densities)

3. remaining ← W
   totalValue ← 0
   FOR i = 1 TO n DO fractions[i] ← 0 END FOR

4. FOR each item i in sorted order DO
       IF remaining = 0 THEN
           BREAK  // Knapsack is full
       END IF
       IF items[i].weight ≤ remaining THEN
           // Take all of item i
           fractions[i] ← 1
           totalValue ← totalValue + items[i].value
           remaining ← remaining - items[i].weight
       ELSE
           // Take fraction that fits
           fractions[i] ← remaining / items[i].weight
           totalValue ← totalValue + fractions[i] * items[i].value
           remaining ← 0
       END IF
   END FOR

5. RETURN (fractions[], totalValue)
The algorithm has three main phases: (1) Compute densities - O(n), (2) Sort by density - O(n log n), (3) Greedy selection - O(n). The sorting phase dominates, giving overall O(n log n) complexity.
Let's translate the algorithm into working code. The Python implementation below shows the pattern; it carries over directly to other languages.
```python
def fractional_knapsack(items, capacity):
    """
    Solve Fractional Knapsack using the greedy algorithm.

    Args:
        items: List of (weight, value) tuples
        capacity: Maximum weight capacity

    Returns:
        Tuple of (max_value, fractions) where fractions[i] is the
        fraction of item i to take (0 to 1)
    """
    n = len(items)
    if n == 0 or capacity <= 0:
        return (0.0, [0.0] * n)

    # Create list of (density, weight, value, original_index)
    indexed_items = []
    for i, (weight, value) in enumerate(items):
        if weight > 0:  # Avoid division by zero
            density = value / weight
            indexed_items.append((density, weight, value, i))

    # Sort by density in descending order
    indexed_items.sort(key=lambda x: x[0], reverse=True)

    # Initialize results
    fractions = [0.0] * n
    total_value = 0.0
    remaining = capacity

    # Greedy selection
    for density, weight, value, original_idx in indexed_items:
        if remaining <= 0:
            break
        if weight <= remaining:
            # Take all of this item
            fractions[original_idx] = 1.0
            total_value += value
            remaining -= weight
        else:
            # Take the fraction that fits
            fraction = remaining / weight
            fractions[original_idx] = fraction
            total_value += fraction * value
            remaining = 0

    return (total_value, fractions)


# Example usage
if __name__ == "__main__":
    items = [
        (10, 60),   # Item A: weight=10, value=60 (density=6)
        (20, 100),  # Item B: weight=20, value=100 (density=5)
        (30, 120),  # Item C: weight=30, value=120 (density=4)
    ]
    capacity = 50

    max_value, fractions = fractional_knapsack(items, capacity)
    print(f"Maximum value: {max_value}")
    print(f"Fractions: {fractions}")
    # Output: Maximum value: 240.0
    #         Fractions: [1.0, 1.0, 0.6666...]
```

Let's rigorously analyze the computational complexity of the Fractional Knapsack greedy algorithm.
Time Complexity Analysis:
| Step | Operation | Time Complexity |
|---|---|---|
| 1 | Compute densities for n items | O(n) |
| 2 | Sort items by density | O(n log n) |
| 3 | Initialize fractions and variables | O(n) |
| 4 | Greedy selection loop | O(n) |
| Total | All steps combined | O(n log n) |
Sorting Dominates:
The sorting step (O(n log n)) dominates all other linear-time operations. This makes the overall algorithm O(n log n).
Can We Do Better?
If items were pre-sorted by density, the algorithm would be O(n). In some applications, items may naturally arrive in sorted order or be maintained in a sorted structure.
For the general case, O(n log n) is what the sorting-based approach costs, and the Ω(n log n) lower bound for comparison-based sorting means the sort itself cannot be done faster. (Strictly speaking, full sorting is not required: a weighted-median selection scheme can solve Fractional Knapsack in O(n) time, but it is substantially more intricate and rarely worth it in practice.)
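To make the pre-sorted case concrete, here is a minimal sketch in which the selection phase alone runs in O(n). The function name and the assumption that items arrive already sorted by descending density are ours, not part of the pseudocode above:

```python
def knapsack_presorted(sorted_items, capacity):
    """Greedy selection only: assumes sorted_items is already ordered
    by descending value/weight density. Runs in O(n)."""
    total_value = 0.0
    remaining = capacity
    for weight, value in sorted_items:
        if remaining <= 0:
            break
        if weight <= remaining:
            total_value += value          # take the whole item
            remaining -= weight
        else:
            total_value += (remaining / weight) * value  # take a fraction
            remaining = 0
    return total_value

# Items from the earlier example, already in density order 6 > 5 > 4
print(knapsack_presorted([(10, 60), (20, 100), (30, 120)], 50))  # 240.0
```

A single pass like this is exactly the loop in step 4 of the pseudocode; the O(n log n) cost of the full algorithm comes entirely from the sort it omits.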
Space Complexity Analysis:
| Component | Space Used | Notes |
|---|---|---|
| Density array | O(n) | One density per item |
| Fractions array | O(n) | Output storage |
| Sorting auxiliary | O(n) or O(log n) | Depends on sort algorithm |
| Loop variables | O(1) | Constant extra |
| Total | O(n) | Linear auxiliary space |
If we sort indices rather than items, or use an in-place sorting algorithm, we can reduce auxiliary space. However, for most practical purposes, O(n) space is acceptable and the algorithm is considered efficient.
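One way to realize the index-sorting idea is sketched below (the function name is ours): only an array of indices is sorted, the items list itself is never copied or reordered, and auxiliary space stays at the O(n) index and output arrays plus the sort's own overhead:

```python
def knapsack_by_index(items, capacity):
    """Fractional knapsack that sorts indices, not items.
    items: list of (weight, value) tuples with positive weights."""
    n = len(items)
    # Indices ordered by descending density; the items list stays put
    order = sorted(range(n), key=lambda i: items[i][1] / items[i][0],
                   reverse=True)
    fractions = [0.0] * n
    total_value = 0.0
    remaining = capacity
    for i in order:
        if remaining <= 0:
            break
        weight, value = items[i]
        take = min(1.0, remaining / weight)  # whole item, or the part that fits
        fractions[i] = take
        total_value += take * value
        remaining -= take * weight
    return total_value, fractions
```

On the earlier three-item example with capacity 50 this produces the same value of 240.0 as the full implementation.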
We now prove that the greedy algorithm produces an optimal solution. The technique we use is the exchange argument — a classic method for proving greedy correctness.
Theorem: The greedy algorithm that selects items in decreasing order of value-to-weight ratio produces an optimal solution to the Fractional Knapsack problem.
Proof by Exchange Argument:
Assume items are numbered 1, 2, ..., n such that:
ρ₁ ≥ ρ₂ ≥ ... ≥ ρₙ
where ρᵢ = vᵢ/wᵢ (already sorted by density).
Let G = (g₁, g₂, ..., gₙ) be the greedy solution (fractions chosen by our algorithm). Let O = (o₁, o₂, ..., oₙ) be any optimal solution.
Goal: Show that the value of G equals the value of O.
Strategy: If G ≠ O, we'll show we can transform O into G without decreasing value, proving G is also optimal.
The Exchange:
Suppose G ≠ O. Let k be the smallest index where gₖ ≠ oₖ.
Claim: gₖ > oₖ (greedy takes more of item k than optimal does)
Why? By construction:
- The greedy algorithm takes the maximum feasible fraction of each item, in density order.
- G and O agree on items 1 through k−1, so both arrive at item k with the same remaining capacity.
- Greedy fills as much of item k as that capacity allows, so gₖ ≥ oₖ; since they differ at index k, gₖ > oₖ.

(Assume the total weight of all items exceeds W, so the knapsack is actually filled; otherwise every solution takes all items and optimality is trivial.)

Constructing O':
Since both solutions fill the capacity, agree on items 1 through k−1, and O puts less weight on item k, O must put more weight on some later item: there exists j > k with oⱼ > gⱼ ≥ 0, hence oⱼ > 0. Modify O to create O' by:
- increasing oₖ by some δ > 0, and
- decreasing oⱼ by δ × (wₖ/wⱼ), so the total weight is unchanged.

This is valid as long as:
- oₖ + δ ≤ gₖ ≤ 1 (we never take more than a whole item), and
- oⱼ − δ × (wₖ/wⱼ) ≥ 0 (we never take a negative fraction).

Choosing δ = min(gₖ − oₖ, oⱼ × wⱼ/wₖ) satisfies both conditions.
Change in Value:
Value(O') − Value(O) = δ × vₖ − δ × (wₖ/wⱼ) × vⱼ
                     = δ × (vₖ − wₖ × (vⱼ/wⱼ))
                     = δ × wₖ × (vₖ/wₖ − vⱼ/wⱼ)
                     = δ × wₖ × (ρₖ − ρⱼ)
Since j > k and items are sorted by density: ρₖ ≥ ρⱼ
Therefore: Value(O') ≥ Value(O)
Conclusion:
We can repeatedly apply this exchange, moving weight from lower-density items to higher-density items, until O becomes G. Each exchange maintains or increases value. Since O was optimal, and we never decreased value:
Value(G) ≥ Value(O) = OPT
But Value(G) ≤ OPT (can't exceed optimal). Therefore:
Value(G) = OPT ∎
The exchange argument shows that any deviation from the greedy solution can be 'improved back' to greedy without losing value. This proves the greedy solution is no worse than any other solution — hence it is optimal.
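The value-change formula is easy to check numerically. The sketch below (all numbers hypothetical, chosen for illustration) performs one exchange on a non-greedy solution O: it adds δ of item k and removes the compensating weight from a lower-density item j, keeping total weight fixed, and confirms the change equals δ × wₖ × (ρₖ − ρⱼ):

```python
# Items (weight, value): item k has higher density than item j
wk, vk = 5, 30    # rho_k = 30/5 = 6
wj, vj = 10, 40   # rho_j = 40/10 = 4

# A suboptimal solution O that under-uses the denser item k
ok, oj = 0.5, 1.0
delta = 0.5  # increase item k's fraction by delta

# O': add delta of item k, remove delta*wk/wj of item j (same total weight)
value_O  = ok * vk + oj * vj
value_Op = (ok + delta) * vk + (oj - delta * wk / wj) * vj

# Total weight is unchanged by the exchange
weight_O  = ok * wk + oj * wj
weight_Op = (ok + delta) * wk + (oj - delta * wk / wj) * wj
assert abs(weight_O - weight_Op) < 1e-9

# The gain matches delta * wk * (rho_k - rho_j)
predicted = delta * wk * (vk / wk - vj / wj)
assert abs((value_Op - value_O) - predicted) < 1e-9
print(value_Op - value_O)  # 5.0: the exchange strictly increases value
```

Because ρₖ ≥ ρⱼ always holds for j > k in density order, the gain is never negative, which is the heart of the proof.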
One might ask: why sort by density (v/w)? Why not by value alone, or weight alone, or some other criterion?
Alternative Criteria and Why They Fail:
- Highest value first: ignores weight. A heavy, high-value item can consume capacity that several lighter items would have used more profitably.
- Lowest weight first: ignores value. It packs in many cheap items and leaves little room for the items actually worth taking.
- Only the ratio v/w accounts for both what an item is worth and how much of the scarce capacity it consumes.
Why Density Works:
Density ρ = v/w captures the efficiency of an item — the value obtained per unit of the scarce resource (capacity). By always choosing the highest-efficiency item:
- each unit of capacity earns the most value currently available,
- capacity is never spent on a low-efficiency item while a higher-efficiency one remains, and
- the final fractional item fills the knapsack exactly, wasting no capacity.
Economic Interpretation:
Think of density as the 'interest rate' an item offers on the capacity you 'invest' in it. Using the items from the worked example below:
- Item A (density 6.0) pays 6 units of value for every kg of capacity invested.
- Item B (density 4.0) pays only 4 units per kg.
Would you invest in B when A is available? No! Always invest in the highest-rate option first. This is precisely what the density-greedy algorithm does.
Every unit of capacity spent on a low-density item has an opportunity cost — it could have been spent on a high-density item. The greedy algorithm minimizes this opportunity cost by always selecting the highest-density option currently available.
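A tiny experiment (all numbers illustrative) makes the opportunity-cost point concrete: on the same instance, density-greedy beats both a highest-value-first and a lightest-first policy:

```python
def greedy(items, capacity, key):
    """Fractional knapsack with a pluggable sort criterion."""
    total, remaining = 0.0, capacity
    for w, v in sorted(items, key=key):
        take = min(1.0, remaining / w) if remaining > 0 else 0.0
        total += take * v
        remaining -= take * w
    return total

items = [(10, 12), (5, 10), (2, 1)]  # (weight, value); illustrative numbers
W = 10

by_density = greedy(items, W, key=lambda it: -it[1] / it[0])  # 16.0
by_value   = greedy(items, W, key=lambda it: -it[1])          # 12.0
by_weight  = greedy(items, W, key=lambda it: it[0])           # ~14.6
```

The value-first policy burns all 10 kg on the heavy item for 12; the lightest-first policy wastes 2 kg on a nearly worthless item; density-greedy takes the 5 kg item in full and half the heavy one, for 16.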
Robust implementations must handle various edge cases correctly. Let's examine these systematically.
Handling Zero-Weight Items:
If an item has weight 0 and positive value, it should always be taken (it's free!). Implementation approaches:
- Pre-pass: before sorting, take every zero-weight, positive-value item in full, then run the greedy on the remaining items.
- Sentinel density: treat the density of such items as infinite so they sort to the front; an explicit pre-pass is usually clearer than relying on infinities in comparisons.
Numerical Stability:
When dealing with floating-point arithmetic:
- Densities can be compared without dividing: for positive weights, v₁/w₁ > v₂/w₂ exactly when v₁ × w₂ > v₂ × w₁, which avoids rounding error and division by zero.
- Rounding can leave the remaining capacity at a tiny nonzero value instead of exactly 0; compare against a small tolerance rather than testing equality with zero.
- If weights and values are integers, exact rational arithmetic for the final fraction eliminates floating-point error entirely.
In production code, validate that weights and values are positive, capacity is positive, and handle division-by-zero gracefully. Defensive programming prevents runtime errors and produces meaningful results even for degenerate inputs.
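A defensive variant is sketched below (the function name and the specific policy choices are ours): validate inputs, take free zero-weight items up front, and skip items with non-positive value:

```python
def safe_fractional_knapsack(items, capacity):
    """Greedy fractional knapsack with edge-case handling.
    items: list of (weight, value); capacity: max total weight."""
    if capacity < 0:
        raise ValueError("capacity must be non-negative")
    for w, v in items:
        if w < 0:
            raise ValueError("weights must be non-negative")

    total_value = 0.0
    # Zero-weight items with positive value are free: always take them
    for w, v in items:
        if w == 0 and v > 0:
            total_value += v

    # Remaining items: positive weight and positive value, densest first
    weighted = [(w, v) for w, v in items if w > 0 and v > 0]
    weighted.sort(key=lambda it: it[1] / it[0], reverse=True)

    remaining = capacity
    for w, v in weighted:
        if remaining <= 0:
            break
        take = min(1.0, remaining / w)
        total_value += take * v
        remaining -= take * w
    return total_value

# Free item taken, negative-value item skipped, normal item taken fully
print(safe_fractional_knapsack([(0, 5), (10, 60), (4, -8)], 10))  # 65.0
```

Skipping negative-value items is the correct policy here because taking any fraction of them can only lower the total.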
Let's trace through a complete example to solidify understanding.
Problem Instance:
Knapsack capacity: W = 15 kg
| Item | Weight | Value | Density (v/w) |
|---|---|---|---|
| A | 5 | 30 | 6.0 |
| B | 10 | 40 | 4.0 |
| C | 3 | 18 | 6.0 |
| D | 8 | 24 | 3.0 |
Step 1: Compute and Sort by Density
Densities: A=6, B=4, C=6, D=3
Sorted order (descending): A, C, B, D (or C, A, B, D — tie between A and C)
Let's use order: A, C, B, D
Step 2: Greedy Selection
| Iteration | Item | Weight | Remaining Before | Action | Remaining After | Value Added |
|---|---|---|---|---|---|---|
| 1 | A | 5 | 15 | Take all (5 ≤ 15) | 10 | +30 |
| 2 | C | 3 | 10 | Take all (3 ≤ 10) | 7 | +18 |
| 3 | B | 10 | 7 | Take 7/10 = 0.7 | 0 | +28 |
| 4 | D | 8 | 0 | Take 0 | 0 | +0 |
Final Solution:
- Fractions: A = 1, C = 1, B = 7/10 = 0.7, D = 0
- Total weight: 5 + 3 + 7 = 15 kg (capacity exactly filled)
- Total value: 30 + 18 + 28 = 76
Verification — Is This Really Optimal?
Let's check an alternative: What if we took B fully? B uses 10 kg for 40 value, leaving 5 kg, which is best spent on all of A (+30). Total: 70.
That's worse than 76!
What if we took D? After A and C are taken fully (8 kg, value 48), the remaining 7 kg holds at most 7/8 of D: 0.875 × 24 = 21, for a total of 69. The same 7 kg spent on B yields 0.7 × 40 = 28, so D can't improve on the greedy choice.
The greedy solution is indeed optimal!
This trace demonstrates the algorithm working step-by-step. The highest-density items (A and C) are taken fully, then remaining capacity is perfectly filled with a fraction of B. The result exactly fills capacity at maximum value.
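The trace can be reproduced with a compact version of the greedy routine; this sketch returns only the total value:

```python
def fractional_knapsack_value(items, capacity):
    """Return the maximum value for (weight, value) items, greedy by density."""
    total, remaining = 0.0, capacity
    for w, v in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if remaining <= 0:
            break
        take = min(1.0, remaining / w)  # whole item, or the fraction that fits
        total += take * v
        remaining -= take * w
    return total

# The worked instance: A(5, 30), B(10, 40), C(3, 18), D(8, 24), W = 15
items = [(5, 30), (10, 40), (3, 18), (8, 24)]
print(fractional_knapsack_value(items, 15))  # 76.0
```

Python's sort is stable, so the tie between A and C (both density 6.0) keeps their original order, matching the A, C, B, D order used in the trace; either tie-break yields the same total value.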
We have fully explored the greedy algorithm for Fractional Knapsack, from specification to implementation to proof of optimality.
What's Next:
The final page of this module draws a critical contrast: Fractional Knapsack vs. 0/1 Knapsack. We'll explore why the same greedy approach that works perfectly here fails completely for the discrete variant, why 0/1 Knapsack requires dynamic programming, and how to recognize which variant a real-world problem represents.
You now understand the complete greedy algorithm for Fractional Knapsack: how to implement it, why it runs in O(n log n) time, and why it produces optimal solutions via the exchange argument. Next, we'll contrast this with the fundamentally harder 0/1 Knapsack problem.