There are five main approaches to solving this problem: brute-force recursion, memoization, bottom-up dynamic programming, space-optimized DP, and a state machine.
Let's start by understanding the problem from first principles. We need to find the maximum profit we can make by buying and selling stocks, with the constraint that after selling, we must wait one day before buying again.
Thinking Process: How would we approach this manually? Let's break down the problem. On each day, our choices depend on our current state: if we are holding a stock, we can sell it or keep holding; if we are not holding one, we can buy (provided we are not in cooldown) or do nothing.
Intuition for Recursive Approach: We can model this as a decision tree where at each day we make the best choice. We need to keep track of whether we're holding a stock or not.
Let's define a recursive function dfs(i, holding) that returns the maximum profit starting from day i with the state holding (whether we are holding a stock or not).
Decision Logic: If we are holding a stock, we either sell it (collect prices[i] and jump to day i + 2 because of the cooldown) or keep holding (move to day i + 1). If we are not holding, we either buy (pay prices[i] and move to day i + 1) or do nothing (move to day i + 1). This recursive approach explores all possible buying and selling decisions and returns the maximum profit.
Time Complexity: O(2^n), where n is the length of the prices array. In the worst case we explore all possible combinations of buying and selling, which is exponential: at each step we have 2 choices and the recursion can go n levels deep.
Space Complexity: O(n), due to the recursion stack, which can go as deep as the number of days in the worst case.
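To make the recursion concrete, here is a minimal runnable Python sketch of the brute-force search just described (the function name max_profit_brute is our own):

```python
def max_profit_brute(prices):
    """Exhaustive recursion: try every buy/sell/wait decision."""
    def dfs(i, holding):
        if i >= len(prices):
            return 0
        if holding:
            # Sell today; the cooldown means the next decision is on day i + 2
            sell = prices[i] + dfs(i + 2, False)
            # Or keep holding into tomorrow
            hold = dfs(i + 1, True)
            return max(sell, hold)
        # Buy today, or skip to the next day without buying
        buy = -prices[i] + dfs(i + 1, True)
        wait = dfs(i + 1, False)
        return max(buy, wait)
    return dfs(0, False)
```

For example, max_profit_brute([1, 2, 3, 0, 2]) returns 3: buy at 1, sell at 2, cool down, buy at 0, sell at 2.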
The brute force approach works, but it's inefficient because we recalculate the same subproblems multiple times. Let's analyze why and how we can optimize it.
Problem with Brute Force: If we trace the recursion tree, we'll notice many overlapping subproblems. For example, the state (i=3, holding=true)
might be calculated multiple times through different decision paths.
Thinking Process: How can we avoid recalculating the same states? We can use memoization (caching) to store the results of already computed states.
Key Insight: Our recursive function dfs(i, holding) depends on only two parameters: i (the current day) and holding (whether we are holding a stock or not). There are only n * 2 possible states (n days, 2 holding states), so we can cache the results in a memo dictionary with keys (i, holding).
Optimization Strategy: Before computing a state, check if it's already in the memo. If yes, return the cached result. Otherwise, compute it, store it in the memo, and then return it.
Intuition: By storing the results of each state in a memo dictionary, we avoid redundant calculations and significantly improve the efficiency from exponential to linear time.
Time Complexity: O(n), where n is the length of the prices array. With memoization, we solve each subproblem only once, and there are O(n * 2) = O(n) possible states (day × holding status).
Space Complexity: O(n) for the memoization dictionary to store all possible states, plus O(n) for the recursion stack in the worst case, which simplifies to O(n).
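As a sketch of the memoized variant in Python, we can let functools.lru_cache play the role of the memo dictionary keyed by (i, holding) (an explicit dict works equally well):

```python
from functools import lru_cache

def max_profit_memo(prices):
    """Same recursion as the brute force, but each (i, holding) state
    is computed only once thanks to the cache."""
    @lru_cache(maxsize=None)
    def dfs(i, holding):
        if i >= len(prices):
            return 0
        if holding:
            return max(prices[i] + dfs(i + 2, False),  # sell, then cooldown
                       dfs(i + 1, True))               # keep holding
        return max(-prices[i] + dfs(i + 1, True),      # buy
                   dfs(i + 1, False))                  # do nothing
    return dfs(0, False)
```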
While the memoized approach is efficient, we can eliminate the recursion overhead by using a bottom-up dynamic programming approach.
Thinking Process: How can we convert our recursive solution to an iterative one? Let's analyze the dependencies in our recursive solution: for day i, we need information from days i+1 and i+2.
DP State Definition: We can define a 2D DP array where:
dp[i][0]: Maximum profit starting from day i if not holding a stock
dp[i][1]: Maximum profit starting from day i if holding a stock
DP Transition: For each day i, we have:
dp[i][0] = max(-prices[i] + dp[i+1][1], dp[i+1][0]) — buy a stock, or do nothing
dp[i][1] = max(prices[i] + dp[i+2][0], dp[i+1][1]) — sell the stock, or continue holding
Handling Cooldown: When we sell a stock on day i, we need to look at the profit from day i+2 (skipping the cooldown day). This requires us to allocate extra space in our DP array to avoid out-of-bounds access.
Intuition: By working backwards from the end of the array, we ensure that all the necessary future states are already computed when we need them. This eliminates the need for recursion and gives us a clean iterative solution.
Time Complexity: O(n), where n is the length of the prices array. We process each day once, performing constant-time operations at each step.
Space Complexity: O(n). We use a 2D array of size (n+2) x 2; the extra rows are needed to handle the cooldown period safely.
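The backward sweep described above can be sketched in Python as follows (max_profit_dp is our own name; the two padding rows make dp[i+2] always valid):

```python
def max_profit_dp(prices):
    """Bottom-up DP over (day, holding) states, iterating backwards."""
    n = len(prices)
    # Two extra rows so dp[i+1] and dp[i+2] never go out of bounds
    dp = [[0, 0] for _ in range(n + 2)]
    for i in range(n - 1, -1, -1):
        # Not holding: buy today, or do nothing
        dp[i][0] = max(-prices[i] + dp[i + 1][1], dp[i + 1][0])
        # Holding: sell today (cooldown skips a day), or keep holding
        dp[i][1] = max(prices[i] + dp[i + 2][0], dp[i + 1][1])
    return dp[0][0]
```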
The bottom-up DP approach works well, but we can further optimize the space usage.
Observation: If we look at our DP transitions carefully, we notice that for day i, we only need information from days i+1 and i+2. We don't need to store the entire DP array!
Thinking Process: How can we reduce the space complexity? Let's analyze what information we need to keep: for day i, we need dp[i+1][0], dp[i+1][1], and dp[i+2][0].
Space Optimization Strategy: Instead of using a full 2D array, we can use three small arrays:
curr: for the current day
ahead: for the next day (i+1)
ahead2: for two days ahead (i+2)
Rolling Window Technique: As we move backwards through the days, we shift our window by updating:
ahead2 = ahead
ahead = curr
Intuition: By using this rolling window approach, we maintain only the necessary state information and reduce the space complexity from O(n) to O(1), while keeping the same time complexity.
Time Complexity: O(n), where n is the length of the prices array. We process each day once, performing constant-time operations at each step, just like in the bottom-up approach.
Space Complexity: O(1). We only use a constant amount of extra space (three small arrays of size 2) regardless of the input size, a significant improvement over the O(n) space of the bottom-up approach.
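The rolling-window version can be sketched in Python like this (max_profit_rolling is our own name):

```python
def max_profit_rolling(prices):
    """Same backward DP, keeping only the dp rows for days i+1 and i+2."""
    n = len(prices)
    ahead = [0, 0]   # plays the role of dp[i+1]
    ahead2 = [0, 0]  # plays the role of dp[i+2]
    for i in range(n - 1, -1, -1):
        curr = [
            max(-prices[i] + ahead[1], ahead[0]),   # not holding: buy or wait
            max(prices[i] + ahead2[0], ahead[1]),   # holding: sell or hold
        ]
        # Slide the window one day back
        ahead2, ahead = ahead, curr
    return ahead[0]
```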
Let's approach this problem from a different angle using a state machine model. This provides an intuitive way to understand the problem and leads to an elegant solution.
Thinking Process: What are the different states we can be in during the stock trading process? Let's identify them:
hold: We are currently holding a stock
sold: We have just sold a stock and are in cooldown
rest: We are not holding a stock and not in cooldown
State Transitions: How do we move between these states?
From hold: We can either continue holding or sell the stock
From sold: We can only go to rest (due to the cooldown)
From rest: We can either continue resting or buy a stock
Profit Maximization: For each state, we want to maximize the profit:
hold = max(previous hold, previous rest - current price): either continue holding or buy a new stock
sold = previous hold + current price: sell the stock we were holding
rest = max(previous rest, previous sold): either continue resting or come out of the cooldown after a sale
Visual Representation: We can visualize this as a state diagram where each state has transitions to the others, and each transition has an associated profit/cost.
Intuition: By tracking the maximum profit for each state as we go through the days, we can find the overall maximum profit at the end. Since we don't want to end up holding a stock, our final answer is the maximum of the sold and rest states.
This state machine approach is elegant because it directly models the problem constraints and provides a clear mental model for understanding the solution.
Time Complexity: O(n), where n is the length of the prices array. We iterate through the array once, performing constant-time operations at each step.
Space Complexity: O(1). We only use three variables (hold, sold, rest) regardless of the input size, making this approach very memory-efficient.
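A runnable Python sketch of the state machine follows (max_profit_states is our own name). Note how the cooldown shows up in the transitions: rest is updated from yesterday's sold, never today's, so a buy can never immediately follow a sale:

```python
def max_profit_states(prices):
    """One forward pass over the prices, tracking the best profit
    achievable in each of the three states."""
    if len(prices) < 2:
        return 0
    hold = -prices[0]  # best profit while holding a stock
    sold = 0           # best profit having just sold (in cooldown)
    rest = 0           # best profit while idle, free to buy
    for price in prices[1:]:
        prev_hold, prev_sold = hold, sold
        hold = max(hold, rest - price)  # keep holding, or buy out of rest
        sold = prev_hold + price        # sell what we held yesterday
        rest = max(rest, prev_sold)     # stay idle, or finish the cooldown
    return max(sold, rest)              # never end while holding a stock
```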
# Approach 1: Brute-force recursion
function maxProfit(prices):
    function dfs(i, holding):
        if i >= length of prices:
            return 0
        if holding:
            # Option 1: Sell the stock (cooldown skips the next day)
            sell = prices[i] + dfs(i + 2, false)
            # Option 2: Continue holding
            hold = dfs(i + 1, true)
            return max(sell, hold)
        else:
            # Option 1: Buy a stock
            buy = -prices[i] + dfs(i + 1, true)
            # Option 2: Do nothing
            wait = dfs(i + 1, false)
            return max(buy, wait)
    return dfs(0, false)
# Approach 2: Memoization (top-down DP)
function maxProfit(prices):
    memo = {}
    function dfs(i, holding):
        if i >= length of prices:
            return 0
        if (i, holding) in memo:
            return memo[(i, holding)]
        if holding:
            # Option 1: Sell the stock
            sell = prices[i] + dfs(i + 2, false)
            # Option 2: Continue holding
            hold = dfs(i + 1, true)
            memo[(i, holding)] = max(sell, hold)
        else:
            # Option 1: Buy a stock
            buy = -prices[i] + dfs(i + 1, true)
            # Option 2: Do nothing
            wait = dfs(i + 1, false)
            memo[(i, holding)] = max(buy, wait)
        return memo[(i, holding)]
    return dfs(0, false)
# Approach 3: Bottom-up DP
function maxProfit(prices):
    n = length of prices
    if n == 0:
        return 0

    # Create DP array with extra rows for cooldown safety
    dp = 2D array of size (n + 2) x 2, initialized with 0

    for i from n-1 down to 0:
        # Not holding a stock
        dp[i][0] = max(
            -prices[i] + dp[i+1][1],  # Buy
            dp[i+1][0]                # Do nothing
        )
        # Holding a stock
        dp[i][1] = max(
            prices[i] + dp[i+2][0],   # Sell, cooldown next day
            dp[i+1][1]                # Continue holding
        )

    return dp[0][0]
# Approach 4: Space-optimized DP (rolling window)
function maxProfit(prices):
    n = length of prices
    if n == 0:
        return 0

    # Initialize variables for the next two days
    ahead = [0, 0]   # dp[i+1]
    ahead2 = [0, 0]  # dp[i+2]

    for i from n-1 down to 0:
        curr = [0, 0]
        # Not holding
        curr[0] = max(-prices[i] + ahead[1], ahead[0])
        # Holding
        curr[1] = max(prices[i] + ahead2[0], ahead[1])
        # Shift the window for the next iteration
        ahead2 = ahead
        ahead = curr

    return ahead[0]
# Approach 5: State machine (one pass, O(1) space)
function maxProfit(prices):
    n = length of prices
    if n <= 1:
        return 0

    hold = -prices[0]  # Maximum profit while holding a stock
    sold = 0           # Maximum profit having just sold
    rest = 0           # Maximum profit while idle (no cooldown)

    for i from 1 to n-1:
        prevHold = hold
        prevSold = sold
        hold = max(hold, rest - prices[i])  # Continue holding, or buy
        sold = prevHold + prices[i]         # Sell the stock we held
        rest = max(rest, prevSold)          # Continue resting, or finish the
                                            # cooldown (yesterday's sold, so a
                                            # buy never directly follows a sale)

    return max(sold, rest)