You've mastered Fractional Knapsack and its elegant greedy solution. Now consider this seemingly minor change to the problem:
Instead of taking any fraction of an item, you must take it entirely or not at all.
This single constraint — restricting xᵢ from [0, 1] to {0, 1} — transforms an efficiently solvable problem into one that is NP-Hard. The greedy algorithm that works perfectly for Fractional Knapsack completely fails for 0/1 Knapsack. Understanding why this happens is one of the most instructive lessons in algorithmic thinking.
By the end of this page, you will understand: (1) why greedy fails for 0/1 Knapsack, (2) concrete counterexamples demonstrating the failure, (3) why dynamic programming is necessary, (4) the DP solution approach and its complexity, and (5) how to recognize which variant a real problem represents.
Let's formally state the 0/1 Knapsack problem and contrast it with Fractional Knapsack.
Given:
- n items, where item i has weight wᵢ > 0 and value vᵢ > 0
- A knapsack with weight capacity W

Decision Variables:
- xᵢ ∈ {0, 1} for each item i — take item i entirely (xᵢ = 1) or not at all (xᵢ = 0)

Objective Function (Maximize):
maximize ∑ᵢ₌₁ⁿ (xᵢ · vᵢ)

Subject to Constraints:
∑ᵢ₌₁ⁿ (xᵢ · wᵢ) ≤ W, with xᵢ ∈ {0, 1} for all i
The restriction from xᵢ ∈ [0,1] to xᵢ ∈ {0,1} appears minor but is computationally devastating. It takes the problem from P (polynomial time) to NP-Hard — one of the most fundamental complexity barriers in computer science.
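Because each xᵢ is now a binary choice, the solution space is the 2ⁿ subsets of the items. A minimal brute-force sketch (the function name is illustrative, not from this page) makes the blow-up concrete — it is exact, but the nested enumeration visits every subset:

```python
from itertools import combinations

def knapsack_01_bruteforce(weights, values, capacity):
    """Exhaustively check all 2^n subsets -- exact, but exponential time."""
    n = len(weights)
    best = 0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            total_weight = sum(weights[i] for i in subset)
            if total_weight <= capacity:
                best = max(best, sum(values[i] for i in subset))
    return best

# Small instance (the same numbers reappear in the code example later on this page)
print(knapsack_01_bruteforce([10, 20, 30], [60, 100, 120], 50))  # 220
```

At n = 40 this is already about 10¹² subsets, which is why the rest of this page develops a dynamic-programming approach instead.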
The greedy algorithm fails for 0/1 Knapsack because discrete selection creates 'leftover' capacity that cannot be optimally filled.
The Core Issue — Wasted Capacity:
In Fractional Knapsack, after taking high-density items, you can perfectly fill remaining capacity with a precise fraction of the next item. In 0/1 Knapsack, if the next item doesn't fit, you cannot take any of it. The leftover capacity may be wasted or filled with lower-density items.
This creates scenarios where:
- A high-density item consumes capacity that a more valuable combination needed
- Leftover capacity goes unused because no remaining item fits
- A set of lower-density items that exactly fills the knapsack beats the greedy selection
The Exchange Argument Breaks Down:
Recall how the exchange argument proved Fractional Knapsack greedy correctness: we could always shift weight from a lower-density item to a higher-density item without decreasing value.
In 0/1 Knapsack, this exchange may not be feasible: removing an item frees a fixed block of capacity, and the item you'd like to swap in must fit entirely within it. If it doesn't fit, no partial exchange is possible — the swap is simply blocked.
The All-or-Nothing Constraint Blocks Incremental Improvement:
The exchange argument relies on continuous, incremental reallocation. Discrete constraints prevent this, breaking the proof and the algorithm's correctness.
The difference between the optimal Fractional Knapsack value and the optimal 0/1 Knapsack value is a measure of the 'integrality gap' — the cost of the discrete constraint. For Knapsack this gap is bounded (the fractional optimum exceeds the 0/1 optimum by at most one item's value), yet the greedy 0/1 heuristic can still land far below the true 0/1 optimum, as the counterexamples below show.
Let's construct concrete counterexamples where the greedy approach fails for 0/1 Knapsack.
Counterexample 1: The Classic Case
Setup:
- Item A: weight 10, value $60 (density 6.0)
- Item B: weight 20, value $100 (density 5.0)
- Item C: weight 30, value $120 (density 4.0)
- Capacity W = 50

Greedy 0/1 Approach:
1. Take A (highest density): weight used = 10, value = $60
2. Take B (next density): weight used = 30, value = $160
3. C (weight 30) doesn't fit in the remaining 20 units — skip

Greedy result: $160, weight = 30 (20 units wasted!)
Optimal 0/1 Solution:
Consider taking B and C:
- Weight: 20 + 30 = 50 ≤ 50 ✓
- Value: $100 + $120 = $220

Optimal result: $220, weight = 50 (no waste)
Greedy gets $160, Optimal gets $220. The greedy approach wasted capacity by taking the highest-density item A, which prevented the optimal pairing of B + C.
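The greedy run above can be reproduced in a few lines. This is a sketch of the density-greedy heuristic (the helper name `greedy_01` is my own), shown precisely because it is *not* optimal for the 0/1 variant:

```python
def greedy_01(weights, values, capacity):
    """Greedy-by-density heuristic for 0/1 Knapsack (NOT optimal in general)."""
    # Visit items in decreasing value/weight density
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total = 0
    remaining = capacity
    for i in order:
        if weights[i] <= remaining:   # all-or-nothing: take it only if it fits whole
            total += values[i]
            remaining -= weights[i]
    return total

# Counterexample 1: A=(10, $60), B=(20, $100), C=(30, $120), W=50
print(greedy_01([10, 20, 30], [60, 100, 120], 50))  # 160 -- optimal is 220
```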
Counterexample 2: Extreme Failure
Setup:
- Item A: weight 1, value $2 (density 2.0)
- Item B: weight 100, value $150 (density 1.5)
- Capacity W = 100

Greedy 0/1:
1. Take A (highest density): weight used = 1, value = $2
2. B (weight 100) doesn't fit in the remaining 99 units — skip

Result: $2 (99 units wasted!)
Optimal 0/1:
Take B only:
- Weight: 100 ≤ 100 ✓
- Value: $150

Result: $150 (full capacity used)
In this extreme case, greedy achieves only 1.3% of optimal! This isn't a close approximation — it's a catastrophic failure. The greedy heuristic can be arbitrarily bad for 0/1 Knapsack.
Why These Failures Occur:
In both examples, the greedy approach commits to a high-density item that 'uses up' capacity in a way that prevents better overall packing. The discrete constraint means we can't later 'take back' part of A to make room for a more valuable combination.
Fractional Knapsack Would Not Have This Problem:
In Counterexample 2 with fractions:
1. Take all of A: weight 1, value $2
2. Take 99/100 of B: weight 99, value $150 × 0.99 = $148.50

The fractional solution achieves $150.50, very close to the 0/1 optimal $150. The greedy-by-density approach works because we can fill remaining capacity with fractions.
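Running density-greedy for the fractional variant confirms this. A sketch (helper name is illustrative), applied to Counterexample 2:

```python
def greedy_fractional(weights, values, capacity):
    """Greedy-by-density for Fractional Knapsack (optimal for that variant)."""
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total = 0.0
    remaining = capacity
    for i in order:
        take = min(weights[i], remaining)        # take as much as fits, fractionally
        total += values[i] * take / weights[i]
        remaining -= take
        if remaining == 0:
            break
    return total

# Counterexample 2: A=(1, $2), B=(100, $150), W=100
print(greedy_fractional([1, 100], [2, 150], 100))  # 150.5
```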
Since greedy fails, we need a different approach for 0/1 Knapsack. Dynamic Programming provides an optimal solution by systematically considering all relevant combinations.
The DP Formulation:
Define: dp[i][w] = maximum value achievable using items 1..i with capacity w
Base Case:
dp[0][w] = 0 for all w — with no items available, no value can be packed.
Recurrence:
```
For each item i and capacity w:
    if weight[i] > w:
        dp[i][w] = dp[i-1][w]                      // can't include item i
    else:
        dp[i][w] = max(
            dp[i-1][w],                            // don't include item i
            dp[i-1][w - weight[i]] + value[i]      // include item i
        )
```
Final Answer: dp[n][W]
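The recurrence translates almost line-for-line into a top-down memoized function. This is a sketch equivalent to the bottom-up table (the 1-indexed `i` here matches the recurrence, while the lists are 0-indexed):

```python
from functools import lru_cache

def knapsack_01_topdown(weights, values, capacity):
    """0/1 Knapsack via the recurrence, memoized top-down."""
    @lru_cache(maxsize=None)
    def best(i, w):
        if i == 0:                        # base case: no items, no value
            return 0
        if weights[i - 1] > w:            # item i can't fit
            return best(i - 1, w)
        return max(best(i - 1, w),                                    # exclude
                   best(i - 1, w - weights[i - 1]) + values[i - 1])   # include
    return best(len(weights), capacity)

print(knapsack_01_topdown([10, 20, 30], [60, 100, 120], 50))  # 220
```

Top-down only evaluates the (i, w) states actually reached, which can be far fewer than n × W when weights are sparse.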
Why DP Works:
DP explores all relevant combinations by building up solutions incrementally:
At each step, we make the optimal choice given the subproblem structure. The key insight is optimal substructure: the best solution for items 1..i either includes item i (and we need optimal for items 1..i-1 with reduced capacity) or excludes it (and we need optimal for items 1..i-1 with full remaining capacity).
The 2D Table:
```
            Capacity →
            0    1    2    3   ...    W
  Item ↓
    0       0    0    0    0   ...    0
    1       0    ·    ·    ·   ...    ·     (v₁ at every w ≥ w₁, else 0)
    2       0    ·    ·    ·   ...    ·
    ⋮
    n       ·    ·    ·    ·   ...  [ANSWER]
```
The DP solution has time complexity O(nW) — polynomial in n and in the numeric value of W. But W contributes only O(log W) bits to the input size, so O(nW) is 'pseudo-polynomial': polynomial in the value of W yet exponential in its encoding length. For small W, it's practical. For large W (e.g., W = 10⁹), it becomes infeasible.
Here's a complete implementation of the 0/1 Knapsack DP solution.
```python
def knapsack_01(weights, values, capacity):
    """
    Solve 0/1 Knapsack using dynamic programming.

    Args:
        weights: List of item weights
        values: List of item values
        capacity: Maximum weight capacity (integer)

    Returns:
        Tuple of (max_value, selected_items)
    """
    n = len(weights)

    # dp[i][w] = max value using items 0..i-1 with capacity w
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]

    # Fill the DP table
    for i in range(1, n + 1):
        weight_i = weights[i - 1]
        value_i = values[i - 1]
        for w in range(capacity + 1):
            if weight_i > w:
                # Can't include item i-1
                dp[i][w] = dp[i - 1][w]
            else:
                # Max of excluding or including
                dp[i][w] = max(
                    dp[i - 1][w],                        # Exclude
                    dp[i - 1][w - weight_i] + value_i    # Include
                )

    # Backtrack to find selected items
    selected = []
    w = capacity
    for i in range(n, 0, -1):
        if dp[i][w] != dp[i - 1][w]:
            selected.append(i - 1)   # Item i-1 was included
            w -= weights[i - 1]
    selected.reverse()

    return dp[n][capacity], selected


# Example: The counterexample where greedy fails
weights = [10, 20, 30]
values = [60, 100, 120]
capacity = 50

max_value, items = knapsack_01(weights, values, capacity)
print(f"Maximum value: {max_value}")
print(f"Selected items (indices): {items}")
# Output: Maximum value: 220
# Selected items: [1, 2] (items B and C)
```

The 2D DP table can be reduced to 1D (O(W) space) since each row only depends on the previous row. Iterate capacity from W down to 0 to avoid overwriting values needed for the current row's computation.
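The 1D space reduction mentioned above might look like this (a sketch that returns only the maximum value — backtracking for the item set requires the full 2D table):

```python
def knapsack_01_1d(weights, values, capacity):
    """0/1 Knapsack with O(W) space: one row, capacity iterated downward."""
    dp = [0] * (capacity + 1)
    for weight_i, value_i in zip(weights, values):
        # Go from high w to low w so dp[w - weight_i] still holds the
        # previous row's value when we read it.
        for w in range(capacity, weight_i - 1, -1):
            dp[w] = max(dp[w], dp[w - weight_i] + value_i)
    return dp[capacity]

print(knapsack_01_1d([10, 20, 30], [60, 100, 120], 50))  # 220
```

Iterating w upward instead would let an item be counted twice within the same row — that variant actually solves the *unbounded* knapsack, not 0/1.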
Let's formally compare the computational complexity of both variants.
| Aspect | Fractional Knapsack | 0/1 Knapsack |
|---|---|---|
| Algorithm | Greedy by density | Dynamic Programming |
| Time Complexity | O(n log n) | O(nW) pseudo-polynomial |
| Space Complexity | O(n) | O(nW) or O(W) with optimization |
| Problem Class | P (Polynomial) | NP-Hard |
| Exact Solution | Always in polynomial time | Only for small W in practice |
| Large W (e.g., 10⁹) | No problem | Infeasible — requires approximations |
| Practical for n=10⁶ | Yes | Depends on W |
Why the Complexity Difference?
Fractional Knapsack:
- Sorting items by density dominates the running time: O(n log n)
- A single greedy pass then fills the knapsack — no table of subproblems is needed

0/1 Knapsack:
- The DP table has (n+1) × (W+1) cells, each computed in O(1), giving O(nW)
- W is encoded in only O(log W) input bits, so O(nW) is exponential in the input size

When O(nW) Is Practical:
- Integer capacities with W up to roughly 10⁶–10⁷ and moderate n — the table fits in memory and fills quickly
- Problems where weights are small integers, or can be rescaled to small integers

For Large W, Approximation Algorithms Are Used:
- A simple greedy variant — take the better of the density-greedy solution and the single most valuable item — guarantees at least 1/2 of optimal
- An FPTAS exists: for any ε > 0, it finds a (1 − ε)-optimal solution in time polynomial in n and 1/ε by rounding item values
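As one concrete option, the classic 'greedy or best single item' rule guarantees at least half the optimal value, assuming every item individually fits in the knapsack. This is a sketch of that standard technique, not part of this page's exact solution:

```python
def knapsack_01_half_approx(weights, values, capacity):
    """Classic 1/2-approximation: max(density-greedy value, best single item)."""
    order = sorted(range(len(weights)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    greedy_value = 0
    remaining = capacity
    for i in order:
        if weights[i] <= remaining:
            greedy_value += values[i]
            remaining -= weights[i]
    # Best single item that fits on its own
    best_single = max((values[i] for i in range(len(weights))
                       if weights[i] <= capacity), default=0)
    return max(greedy_value, best_single)

# Counterexample 2: plain greedy got $2; this variant recovers $150
print(knapsack_01_half_approx([1, 100], [2, 150], 100))  # 150
```

It runs in O(n log n) regardless of W, which is exactly what makes it useful when W = 10⁹ rules out the O(nW) table.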
0/1 Knapsack is NP-Hard. Unless P = NP (widely considered unlikely), no polynomial-time algorithm exists for solving it exactly. The O(nW) DP is pseudo-polynomial — efficient when W is small, but exponential in the number of bits needed to write W down.
In real problems and interviews, you must identify which knapsack variant applies. The choice fundamentally changes the solution approach.
Key Question: Can items be divided?
- Yes — the resource is continuous (liquids, time, budget allocations) → Fractional Knapsack → greedy by density is optimal
- No — items are discrete, all-or-nothing (files, physical objects, tasks) → 0/1 Knapsack → dynamic programming is required

Interview Pattern Recognition:
When you see a knapsack-like problem in an interview:
1. Identify the capacity constraint and the quantity being maximized
2. Determine whether items are divisible or all-or-nothing
3. Divisible → sort by value/weight density and fill greedily, O(n log n)
4. Indivisible → build the DP table, O(nW), and mention the O(W) space optimization
5. If W is very large, note that the DP is pseudo-polynomial and discuss approximations
If a problem doesn't clearly state divisibility, ask the interviewer: 'Can I take fractions of items, or must I take them whole?' This clarification is expected and shows you understand the critical distinction.
We've explored the fundamental contrast between Fractional and 0/1 Knapsack — two problems that appear similar but have radically different solutions.
Module Summary — Fractional Knapsack:
Across this module, we've thoroughly examined the Fractional Knapsack problem:
Fractional Knapsack is a quintessential example of when greedy algorithms shine. The problem's structure — continuous allocation, linear value scaling — perfectly aligns with greedy's strengths. Understanding why it works here, and why it fails for the discrete variant, deepens your grasp of when greedy algorithms are appropriate tools.
Congratulations! You've mastered the Fractional Knapsack problem — from problem formulation through the greedy algorithm, its proof of optimality, and the critical contrast with 0/1 Knapsack. You now understand a canonical example of greedy success and can recognize when greedy approaches are (and aren't) applicable to optimization problems.