Palindromes—strings that read the same forwards and backwards—have fascinated mathematicians, linguists, and computer scientists alike. From the simple "racecar" to the legendary "A man, a plan, a canal: Panama," palindromes appear throughout language and culture. But beyond their linguistic charm, finding palindromic substrings efficiently is a fundamental algorithmic challenge with significant practical applications.
The Longest Palindromic Substring (LPS) problem asks: Given a string S, find the longest contiguous substring of S that is a palindrome. This seemingly simple question opens a fascinating exploration of algorithmic design, from naive approaches to elegant linear-time solutions.
Consider the string "babad". What is its longest palindromic substring? The answer could be "bab" or "aba"—both are valid since they share the same length (3). For "cbbd", the answer is "bb". For "racecar", the entire string is the answer.
By the end of this page, you will understand the naive brute force approach to finding the longest palindromic substring. You'll learn to systematically enumerate all substrings, verify palindromes, and track the maximum. Most importantly, you'll understand why this O(n³) approach, while correct, motivates the search for more efficient algorithms.
Before diving into algorithms, let's establish a precise, mathematical understanding of palindromes and the problem we're solving.
Definition: Palindrome
A string S of length n is a palindrome if and only if:
S[i] = S[n - 1 - i] for all i where 0 ≤ i < n
In other words, the character at position i from the start equals the character at position i from the end. This definition applies to both even-length and odd-length strings:
"aba" with center 'b')"abba" with center 'bb')Every single character is trivially a palindrome of length 1. The empty string is also considered a palindrome by convention.
| String | Length | Type | Is Palindrome? | Reason |
|---|---|---|---|---|
| "a" | 1 | Odd | ✓ Yes | Single character |
| "aa" | 2 | Even | ✓ Yes | S[0]='a' = S[1]='a' |
| "ab" | 2 | Even | ✗ No | S[0]='a' ≠ S[1]='b' |
| "aba" | 3 | Odd | ✓ Yes | S[0]='a' = S[2]='a', center 'b' matches itself |
| "abba" | 4 | Even | ✓ Yes | S[0]='a' = S[3]='a', S[1]='b' = S[2]='b' |
| "abcba" | 5 | Odd | ✓ Yes | Symmetric around center 'c' |
| "abcca" | 5 | Odd | ✗ No | S[0]='a' = S[4]='a', but S[1]='b' ≠ S[3]='c' |
The problem asks for the longest palindromic substring—a contiguous sequence of characters. This is different from the longest palindromic subsequence problem, which allows non-contiguous characters. For "character", the longest palindromic substring is "ara" (length 3), but the longest palindromic subsequence is "carac" (length 5). Substring problems are typically O(n²) or better; subsequence problems often require O(n²) dynamic programming.
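To make the substring/subsequence distinction concrete, here is a small sketch (the helper names are illustrative) contrasting the two notions on "character":

```python
def is_substring(needle: str, haystack: str) -> bool:
    """Contiguous occurrence of needle inside haystack."""
    return needle in haystack

def is_subsequence(needle: str, haystack: str) -> bool:
    """Characters of needle appear in haystack in order, not necessarily adjacent."""
    it = iter(haystack)
    # 'ch in it' advances the iterator until ch is found, preserving order
    return all(ch in it for ch in needle)

print(is_substring("ara", "character"))      # True: contiguous at index 2
print(is_substring("carac", "character"))    # False: not contiguous
print(is_subsequence("carac", "character"))  # True: c, a, r, a, c in order
```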
The Problem Statement, Precisely
Given a string S of length n over some alphabet Σ:
Input: String S = s₀s₁s₂...sₙ₋₁
Output: A substring S[i...j] (where 0 ≤ i ≤ j < n) such that:
• S[i...j] is a palindrome, and
• its length j - i + 1 is maximum over all palindromic substrings of S.
If multiple substrings of the same maximum length exist, any one may be returned.
Key observations:
• Every single character is a palindrome, so any non-empty string has an answer of length at least 1.
• There are n(n+1)/2 candidate substrings to consider.
The brute force approach follows the most straightforward reasoning: enumerate every possible substring, check if it's a palindrome, and keep track of the longest one found.
This approach requires no clever insights—just systematic enumeration. While computationally expensive, it's valuable for several reasons:
• It is easy to prove correct, giving a reliable baseline for testing faster algorithms.
• It can be written quickly and without subtle bugs.
• Its inefficiencies point directly at the optimizations explored later.
Visualizing the Enumeration
For a string of length 5 like "babad", the brute force algorithm examines the following substrings:
Length 1: "b", "a", "b", "a", "d" (5 substrings)
Length 2: "ba", "ab", "ba", "ad" (4 substrings)
Length 3: "bab", "aba", "bad" (3 substrings)
Length 4: "baba", "abad" (2 substrings)
Length 5: "babad" (1 substring)
Total: 5 + 4 + 3 + 2 + 1 = 15 substrings = n(n+1)/2 substrings in general.
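The enumeration above can be reproduced with a short sketch (the helper name is illustrative) that groups every substring of "babad" by length:

```python
from collections import defaultdict

def substrings_by_length(s: str):
    """Group all contiguous substrings of s by their length."""
    groups = defaultdict(list)
    n = len(s)
    for i in range(n):
        for j in range(i, n):
            sub = s[i:j + 1]
            groups[len(sub)].append(sub)
    return groups

groups = substrings_by_length("babad")
for length in sorted(groups):
    print(length, groups[length])

total = sum(len(v) for v in groups.values())
print("total:", total)  # 15 = 5 * 6 / 2, matching n(n+1)/2
```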
For each substring, we then check if it's a palindrome, which requires comparing up to half its characters.
At the heart of the brute force approach is a helper function that determines whether a given substring is a palindrome. This function is called O(n²) times, so its efficiency matters.
The Two-Pointer Technique
The classic palindrome verification uses two pointers:
• left starts at the first character of the substring, right at the last.
• If s[left] ≠ s[right], the substring is not a palindrome.
• Otherwise, move both pointers inward and repeat until they meet or cross.
This check runs in O(k) time for a substring of length k, giving O(n) worst-case per call.
```python
def is_palindrome(s: str, left: int, right: int) -> bool:
    """
    Check if s[left:right+1] is a palindrome.

    Args:
        s: The original string
        left: Starting index (inclusive)
        right: Ending index (inclusive)

    Returns:
        True if the substring is a palindrome, False otherwise

    Time Complexity: O(right - left + 1) = O(n) worst case
    Space Complexity: O(1)
    """
    while left < right:
        if s[left] != s[right]:
            return False
        left += 1
        right -= 1
    return True

# Example usage
text = "racecar"

# Check various substrings
print(is_palindrome(text, 0, 6))  # True: "racecar"
print(is_palindrome(text, 0, 3))  # False: "race"
print(is_palindrome(text, 1, 5))  # True: "aceca"
print(is_palindrome(text, 2, 4))  # True: "cec"
```

Notice that we pass the indices (left, right) rather than creating a new substring. This is a crucial optimization! Creating a substring in most languages requires O(k) time and O(k) space to copy k characters. By working with indices, our verification remains O(k) time but uses O(1) space. This seemingly minor detail impacts real-world performance significantly.
Now let's combine everything into a complete, production-quality brute force solution. We'll implement the full algorithm with careful attention to edge cases and clarity.
```python
def longest_palindromic_substring_brute_force(s: str) -> str:
    """
    Find the longest palindromic substring using brute force.

    Algorithm:
        1. Enumerate all O(n²) substrings
        2. Check each for palindrome property
        3. Track the longest palindrome found

    Args:
        s: Input string

    Returns:
        The longest palindromic substring

    Time Complexity: O(n³)
        - O(n²) substrings to enumerate
        - O(n) to check each substring for palindrome property
    Space Complexity: O(1)
        - Only storing indices, not creating new strings during search
        - Final substring creation is O(n) but that's for the output
    """
    if not s:
        return ""

    n = len(s)

    # Track the best palindrome found.
    # Start with the first character (always a valid palindrome).
    best_start = 0
    best_length = 1

    def is_palindrome(left: int, right: int) -> bool:
        """Check if s[left:right+1] is a palindrome."""
        while left < right:
            if s[left] != s[right]:
                return False
            left += 1
            right -= 1
        return True

    # Enumerate all substrings
    for i in range(n):
        for j in range(i, n):
            length = j - i + 1
            # Optimization: only check if longer than current best
            if length > best_length and is_palindrome(i, j):
                best_start = i
                best_length = length

    return s[best_start:best_start + best_length]


# Comprehensive test cases
test_cases = [
    ("babad", ["bab", "aba"]),            # Two valid answers
    ("cbbd", ["bb"]),                     # Even-length palindrome
    ("a", ["a"]),                         # Single character
    ("ac", ["a", "c"]),                   # No palindrome > 1
    ("racecar", ["racecar"]),             # Entire string
    ("", [""]),                           # Empty string
    ("aaaa", ["aaaa"]),                   # All same characters
    ("abcba", ["abcba"]),                 # Odd-length palindrome
    ("abaaba", ["abaaba"]),               # Full string is palindrome
    ("forgeeksskeegfor", ["geeksskeeg"]), # Long example
]

print("Brute Force LPS Testing")
print("=" * 50)

for s, expected in test_cases:
    result = longest_palindromic_substring_brute_force(s)
    status = "✓" if result in expected else "✗"
    print(f'{status} Input: "{s}"')
    print(f'    Result: "{result}" (Length: {len(result)})')
    print()
```

Understanding the complexity of the brute force approach is essential for appreciating why we need better algorithms. Let's analyze both time and space complexity rigorously.
Time Complexity: O(n³)
The algorithm has three nested operations:
• An outer loop over the start index i — O(n) iterations.
• An inner loop over the end index j — O(n) iterations.
• A palindrome check on each substring — O(n) comparisons.

Let's compute the exact number of operations:
Detailed Count of Operations
For each pair (i, j), we perform O(j - i + 1) comparisons. The total number of comparisons is:
$$\sum_{i=0}^{n-1} \sum_{j=i}^{n-1} (j - i + 1)$$
Let's substitute k = j - i (the length minus 1):
$$\sum_{i=0}^{n-1} \sum_{k=0}^{n-1-i} (k + 1) = \sum_{i=0}^{n-1} \sum_{k=1}^{n-i} k$$
Using the formula for sum of first m integers: $\sum_{k=1}^{m} k = \frac{m(m+1)}{2}$:
$$\sum_{i=0}^{n-1} \frac{(n-i)(n-i+1)}{2}$$
This evaluates to exactly $\frac{n(n+1)(n+2)}{6} \approx \frac{n^3}{6}$, confirming O(n³) complexity.
For a string of length 1000, this means roughly 166 million comparisons. For length 10,000, it's 166 billion comparisons—clearly impractical.
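The closed form can be sanity-checked numerically. The sketch below counts j - i + 1 units of work per substring, exactly as in the double sum above, and compares the result with n³/6:

```python
def exact_comparisons(n: int) -> int:
    """Total work: sum over all pairs (i, j), i <= j, of (j - i + 1)."""
    return sum(j - i + 1 for i in range(n) for j in range(i, n))

for n in (10, 100, 1000):
    exact = exact_comparisons(n)
    approx = n ** 3 / 6
    # Ratio tends to 1 as n grows, since the exact value is n(n+1)(n+2)/6
    print(n, exact, round(exact / approx, 3))
```

For n = 1000 this prints 167,167,000 total operations, matching the "roughly 166 million" estimate above.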
| String Length (n) | Substrings O(n²) | Operations O(n³) | Estimated Time* |
|---|---|---|---|
| 100 | 5,050 | ~170,000 | < 1 ms |
| 1,000 | 500,500 | ~166,000,000 | ~100 ms |
| 10,000 | 50,005,000 | ~166,000,000,000 | ~100 seconds |
| 100,000 | 5,000,050,000 | ~166 trillion | ~28 hours |
| 1,000,000 | 500 billion | ~166 quadrillion | ~3 years |
*Assuming on the order of 10⁹ character comparisons per second; actual times vary by language and hardware.
The O(n³) complexity makes brute force impractical for strings longer than a few thousand characters. Most competitive programming problems and real-world applications involve strings of 10⁴ to 10⁶ characters, where brute force would time out or run for days. This stark limitation drives the need for the O(n²) and O(n) algorithms we'll explore in subsequent pages.
Space Complexity: O(1)
The brute force algorithm uses constant extra space:
• Two integers (best_start, best_length) to track the best palindrome found.
• Loop indices i, j and the two pointers inside the palindrome check.
• No auxiliary arrays and no copies of the input during the search.
The final output string requires O(n) space in the worst case (when the entire string is a palindrome), but this is typically not counted against the algorithm's space complexity since it's the required output.
Summary: O(n³) time, O(1) auxiliary space—correct and simple, but far too slow for large inputs.
Given its O(n³) complexity, why study the brute force approach at all? This naive algorithm serves several critical purposes in algorithm design and software engineering practice.
Great algorithm designers don't jump to optimal solutions. They start with correctness, then optimize systematically. Donald Knuth's famous advice: "Premature optimization is the root of all evil." Understanding brute force deeply reveals exactly where the inefficiency lies—and guides us toward targeted improvements.
The key to optimizing the brute force approach is recognizing its redundant work. Where does the algorithm repeat computations that could be avoided?
Observation 1: Palindrome Structure is Hierarchical
If "abcba" is a palindrome, then its inner substring "bcb" must also be a palindrome. More generally:
A string S[i...j] is a palindrome if and only if:
• S[i] = S[j], and
• the inner substring S[i+1...j-1] is a palindrome (empty and single-character substrings count as palindromes).
This recursive structure means checking "abcba" independently of "bcb" wastes the palindrome check we already did for "bcb".
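This hierarchical structure can be written as a direct recurrence. The sketch below (an illustration, not the document's brute force code) caches each (i, j) answer with `functools.lru_cache`—the same idea the dynamic programming approach builds on:

```python
from functools import lru_cache

def is_palindrome_recursive(s: str) -> bool:
    """S[i..j] is a palindrome iff s[i] == s[j] and S[i+1..j-1] is one."""

    @lru_cache(maxsize=None)
    def check(i: int, j: int) -> bool:
        if i >= j:  # empty or single-character: trivially a palindrome
            return True
        return s[i] == s[j] and check(i + 1, j - 1)

    return check(0, len(s) - 1)

print(is_palindrome_recursive("abcba"))  # True: outer 'a's match, inner "bcb" is a palindrome
print(is_palindrome_recursive("abcca"))  # False: s[1]='b' != s[3]='c'
```

Caching matters when many (i, j) pairs are checked, as in the full LPS search: the answer for "bcb" is computed once and reused for "abcba".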
Observation 2: Center Symmetry
Every palindrome has a center:
• Odd-length palindromes are centered on a character (e.g., 'b' in "aba").
• Even-length palindromes are centered between two characters (e.g., between the two 'b's in "abba").
Instead of checking every substring, we could check every possible center and expand outward. There are only 2n-1 centers (n single-character + n-1 between-character positions), potentially reducing the search space.
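The 2n-1 centers are easy to enumerate explicitly. A small sketch (the labeling scheme is illustrative): even-numbered centers sit on characters, odd-numbered centers sit in the gaps between them.

```python
def centers(s: str):
    """List the 2n - 1 candidate palindrome centers of s.

    Center k = 2i   -> on character i      (odd-length palindromes)
    Center k = 2i+1 -> between i and i+1   (even-length palindromes)
    """
    n = len(s)
    result = []
    for k in range(2 * n - 1):
        if k % 2 == 0:
            result.append(f"char {k // 2} ({s[k // 2]!r})")
        else:
            result.append(f"gap {k // 2}|{k // 2 + 1}")
    return result

print(len(centers("babad")))  # 9 centers for n = 5, i.e. 2*5 - 1
print(centers("ab"))          # one center per character, one per gap
```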
Observation 3: Wasted Comparisons
When we verify "bab" is a palindrome, we check:
• s[0] = 'b' vs s[2] = 'b' ✓ (the center 'a' matches itself)

Then when we verify "xbabx" (if it existed), we'd check:
• s[0] = 'x' vs s[4] = 'x' ✓
• s[1] = 'b' vs s[3] = 'b' ✓ — re-verifying "bab" from scratch
The palindrome check doesn't leverage prior knowledge.
These observations lead directly to better algorithms:
• Expand Around Center (O(n²)): Exploits center symmetry—iterate over 2n-1 centers, expand outward until characters mismatch.
• Dynamic Programming (O(n²) time, O(n²) space): Exploits hierarchical structure—build a table where dp[i][j] indicates whether S[i...j] is a palindrome.
• Manacher's Algorithm (O(n)): Exploits all observations simultaneously—uses previously computed palindrome lengths to skip redundant checks.
We've thoroughly explored the brute force approach to finding the longest palindromic substring. Let's consolidate the key takeaways:
• A palindrome satisfies S[i] = S[n - 1 - i] for all i; substrings are contiguous, unlike subsequences.
• Brute force enumerates all n(n+1)/2 substrings and checks each in O(n), giving O(n³) time and O(1) extra space.
• Passing indices instead of slicing keeps the palindrome check at O(1) space.
• The redundant work—hierarchical structure, center symmetry, repeated comparisons—points directly at the faster algorithms ahead.
What's Next:
In the next page, we'll explore the Expand Around Center technique—a beautifully intuitive approach that reduces time complexity from O(n³) to O(n²) by leveraging the observation that every palindrome has a center. This technique is optimal for most practical scenarios and forms the foundation for understanding Manacher's linear-time algorithm.
You now understand the brute force approach to the longest palindromic substring problem. You can implement it correctly, analyze its O(n³) time complexity, and recognize the redundant work that optimization targets. Next, we'll transform these insights into the elegant expand-around-center technique.