Among the many elegant patterns in bit manipulation, few are as beautiful and frequently useful as detecting whether a number is a power of two. The expression n & (n-1) == 0 appears deceptively simple—just five tokens—yet it encapsulates profound mathematical insight about binary representation.
This technique isn't merely academic. It appears constantly in systems programming, memory allocation, hash table sizing, graphics programming, and optimization of algorithms that benefit from power-of-two constraints. Understanding why this expression works transforms you from someone who memorizes tricks to someone who truly comprehends binary arithmetic.
By the end of this page, you will understand exactly why n & (n-1) == 0 works, be able to derive it from first principles, know all edge cases and variations, and recognize where this pattern applies in real-world software engineering.
Before understanding the detection technique, we must first appreciate what makes powers of two unique in binary representation.
The Fundamental Property:
Every power of two has exactly one bit set in its binary representation. This is a direct consequence of how binary positional notation works:
- 0001 (2⁰ = 1)
- 0010 (2¹ = 2)
- 0100 (2² = 4)
- 1000 (2³ = 8)

Conversely, any number with exactly one bit set is, by definition, a power of two. The bit's position determines which power: bit position k (0-indexed from the right) represents 2ᵏ.
The Uniqueness:
No other positive integers share this single-bit property. Any number that isn't a power of two requires multiple bits to represent:
- 0011 (two bits: 2¹ + 2⁰)
- 0101 (two bits: 2² + 2⁰)
- 0110 (two bits: 2² + 2¹)
- 0111 (three bits)
- 1001 (two bits)

This property gives us our detection criterion: a number is a power of two if and only if it has exactly one set bit.
| Power | Decimal | Binary (8-bit) | Set Bit Position |
|---|---|---|---|
| 2⁰ | 1 | 00000001 | Position 0 |
| 2¹ | 2 | 00000010 | Position 1 |
| 2² | 4 | 00000100 | Position 2 |
| 2³ | 8 | 00001000 | Position 3 |
| 2⁴ | 16 | 00010000 | Position 4 |
| 2⁵ | 32 | 00100000 | Position 5 |
| 2⁶ | 64 | 01000000 | Position 6 |
| 2⁷ | 128 | 10000000 | Position 7 |
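As a quick sanity check, a few lines of Python (an illustrative sketch) regenerate this table:

```python
# Print each power of two with its 8-bit pattern and set-bit position
for k in range(8):
    n = 1 << k  # same as 2**k
    print(f"2^{k} = {n:3d} = {n:08b} (set bit at position {k})")
```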
Think of powers of two as the 'atoms' of binary representation. Every positive integer is a unique combination of these atoms (each used at most once). A power of two is, quite literally, a number composed of exactly one atom—hence exactly one bit.
The magic of n & (n-1) lies in understanding what happens when we subtract 1 from a binary number. This operation creates a cascade effect that propagates through the rightmost bits.
The Borrow Propagation:
When you subtract 1 from a number, you attempt to reduce the rightmost bit. If that bit is 1, it simply becomes 0. But if it's 0, you must borrow from the next position—which triggers a cascade:
```
n   = 12: 1100
n-1 = 11: 1011
           ^^^   everything from the rightmost 1 onward changes
```
The Critical Observation:
Subtracting 1 from any number:
- leaves every bit to the left of the rightmost 1 unchanged,
- clears that rightmost 1 to 0, and
- turns every bit to its right (all zeros) into 1s.
This is precisely what binary subtraction does—borrows propagate until they find a 1, clearing that 1 and leaving a trail of 1s in their wake.
| n (decimal) | n (binary) | n-1 (binary) | What Changed |
|---|---|---|---|
| 8 | 1000 | 0111 | Single 1 cleared, all zeros became ones |
| 12 | 1100 | 1011 | Rightmost 1 cleared, its right became 1 |
| 6 | 0110 | 0101 | Rightmost 1 cleared, its right became 1 |
| 10 | 1010 | 1001 | Rightmost 1 cleared, zero became one |
| 16 | 10000 | 01111 | Single 1 cleared, all zeros became ones |
| 7 | 0111 | 0110 | Rightmost 1 simply cleared |
| 15 | 1111 | 1110 | Rightmost 1 simply cleared |
Notice how n and n-1 always differ in at least two bit positions (except when n's rightmost bit is 1, where they differ in exactly one position). The rightmost 1 in n is always cleared in n-1, and all bits to its right flip. This relationship is the foundation of n & (n-1).
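You can reproduce the table above directly; the following snippet (a minimal sketch) prints each n next to n-1 so the cascade is easy to spot:

```python
# Compare n and n-1 bit by bit to watch the borrow cascade
for n in [8, 12, 6, 10, 16, 7, 15]:
    print(f"n = {n:2d}: {n:05b}    n-1 = {n-1:2d}: {n-1:05b}")
```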
Now we can understand the complete formula. When we compute n & (n-1), we're performing a bitwise AND between:
- n itself, whose rightmost set bit sits at some position k, and
- n-1, in which that bit has been cleared and every bit below position k has been set.
For a power of two (single set bit):
```
n       = 8:   1000
n - 1   = 7:   0111
               ----
n & (n-1) =    0000   ✓ Result is 0!
```
Because n has only one bit set, and n-1 has that bit cleared (with all other bits to its right set), the AND produces zero. The only 1 in n aligns with a 0 in n-1.
For a non-power of two (multiple set bits):
```
n       = 12:  1100
n - 1   = 11:  1011
               ----
n & (n-1) =    1000   ✗ Result is NOT 0!
```
Because n has multiple 1s, clearing the rightmost one still leaves other 1s intact. The bits to the left of the rightmost 1 remain unchanged in both n and n-1, so the AND preserves them.
Formally: If n = 2ᵏ, then n has exactly bit k set. n-1 = 2ᵏ - 1 has all bits from 0 to k-1 set (it's a sequence of k ones). Since these bit patterns have no overlapping 1s, their AND is 0. Conversely, if n has multiple set bits, the leftmost ones survive the AND since n-1 only affects bits from the rightmost 1 onward.
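Beyond the argument above, a brute-force cross-check (a small illustrative sketch) confirms that the expression agrees with the single-set-bit definition for every positive n in a range:

```python
# A positive n is a power of two iff its binary representation has exactly one '1'.
# Verify that n & (n-1) == 0 matches that definition for n = 1 .. 65535.
for n in range(1, 1 << 16):
    assert ((n & (n - 1)) == 0) == (bin(n).count("1") == 1)
print("verified for n = 1 .. 65535")
```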
Here's where many developers get caught: zero is not a power of two, but 0 & (-1) == 0 is true!
```
n       = 0:   00000000
n - 1   =      11111111   (in two's complement, this wraps around to all 1s)
n & (n-1) =    00000000
```
The AND of all zeros with anything is zero. So the expression n & (n-1) == 0 incorrectly returns true for n = 0.
The Complete Solution:
To correctly check for powers of two, you must also verify that n is positive:
n > 0 && (n & (n-1)) == 0
Or equivalently:
n != 0 && (n & (n-1)) == 0
This additional check ensures we exclude zero from being classified as a power of two.
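A two-line comparison (illustrative) makes the difference visible in Python:

```python
n = 0
print((n & (n - 1)) == 0)             # True  -- the raw check wrongly accepts 0
print(n > 0 and (n & (n - 1)) == 0)   # False -- the guarded check rejects it
```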
Forgetting to handle zero is one of the most common bugs when implementing power-of-two checks. In interviews and production code alike, always include the n > 0 or n != 0 guard. Many have failed technical interviews by presenting an incomplete solution.
What About Negative Numbers?
In signed integer representations (two's complement), negative numbers have a very specific property: they always have the sign bit set (the leftmost bit). This means:
For example, -8 in 8-bit two's complement is 11111000, which has five bits set, not a single bit.

Since we typically consider powers of two as positive integers (which aligns with most use cases like array sizing and memory allocation), the n > 0 check correctly excludes negative numbers.
If your domain specifically requires checking if |n| is a power of two, you'd need:
Math.abs(n) > 0 && (Math.abs(n) & (Math.abs(n) - 1)) == 0
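In Python, the same idea reads as follows (a hypothetical helper using the built-in abs; the name is illustrative):

```python
def is_abs_power_of_two(n: int) -> bool:
    """Return True if |n| is a power of two, so -8 qualifies but 0 does not."""
    m = abs(n)
    return m > 0 and (m & (m - 1)) == 0
```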
Let's see how this translates into clean, production-ready code across different programming languages. Note the consistency of the core logic while accommodating language-specific considerations.
```python
def is_power_of_two(n: int) -> bool:
    """
    Check if n is a power of two using bit manipulation.

    Time Complexity: O(1) - constant time bitwise operations
    Space Complexity: O(1) - no extra space used

    Args:
        n: Integer to check

    Returns:
        True if n is a positive power of two, False otherwise

    Examples:
        >>> is_power_of_two(1)    # 2^0
        True
        >>> is_power_of_two(16)   # 2^4
        True
        >>> is_power_of_two(0)
        False
        >>> is_power_of_two(6)    # Not a power of 2
        False
    """
    # Guard against zero and negative numbers,
    # then check if exactly one bit is set
    return n > 0 and (n & (n - 1)) == 0


def is_power_of_two_verbose(n: int) -> bool:
    """
    Same logic with explicit handling for educational clarity.
    """
    if n <= 0:
        return False  # Zero and negatives are not powers of two

    # n & (n-1) clears the rightmost set bit.
    # If the result is 0, there was only one bit set.
    rightmost_bit_cleared = n & (n - 1)
    return rightmost_bit_cleared == 0
```

Note that C++20 provides std::has_single_bit() which does exactly this check. Java provides Integer.bitCount(). When these built-ins are available and readable, prefer them for maintenance—but understand the underlying bit manipulation for interviews and when built-ins aren't available.
While n & (n-1) == 0 is the canonical solution, there are other valid approaches. Understanding alternatives deepens your comprehension and prepares you for variations in interviews.
Approach: Repeatedly divide by 2
Keep dividing n by 2 as long as it's even. If you end up with 1, it was a power of two.
Logic:
```python
def is_power_of_two_iterative(n: int) -> bool:
    """
    O(log n) time - loops log₂(n) times. O(1) space.
    """
    if n <= 0:
        return False

    while n % 2 == 0:  # While n is even
        n //= 2

    return n == 1  # Should reduce to exactly 1


# Example trace for n = 16:
# n = 16 → 8 → 4 → 2 → 1 ✓ (power of two)

# Example trace for n = 12:
# n = 12 → 6 → 3 (odd, loop stops) ✗ (not power of two)
```

This approach is O(log n) time vs O(1) for the bit manipulation method. For large numbers, this is significantly slower. However, it's useful when bit operations aren't available or for educational purposes.
The power-of-two check appears constantly in systems programming and algorithm design. Understanding these applications reveals why this "simple trick" is actually foundational knowledge.
index = hash & (size - 1) replaces expensive modulo with cheap AND. This only works when size is a power of two.

Example: Efficient Modulo in Hash Tables
When a hash table size is a power of two, we can replace:
index = hash % table_size // Division is slow
With:
index = hash & (table_size - 1) // Bitwise AND is fast
This works because table_size - 1 creates a bitmask of all 1s in the lower bits. For example, if table_size = 16 (10000), then table_size - 1 = 15 (01111). ANDing with this mask extracts exactly the lower 4 bits, which is equivalent to mod 16.
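A quick check (illustrative) confirms that the mask and the modulo agree whenever the table size is a power of two:

```python
table_size = 16            # power of two
mask = table_size - 1      # 0b1111

for h in [5, 17, 123456, 987654321]:
    assert h % table_size == h & mask
print("h & (size - 1) == h % size holds for power-of-two sizes")
```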
```python
class FastHashTable:
    """
    Hash table that exploits power-of-two sizing for O(1) modulo.
    """

    def __init__(self, initial_capacity: int = 16):
        # Ensure capacity is a power of two
        if initial_capacity <= 0 or (initial_capacity & (initial_capacity - 1)) != 0:
            # Round up to next power of two
            initial_capacity = 1 << (initial_capacity - 1).bit_length()

        self.capacity = initial_capacity
        self.mask = self.capacity - 1  # Precompute for fast modulo
        self.buckets = [None] * self.capacity

    def _get_index(self, key) -> int:
        hash_value = hash(key)
        # Fast modulo using bitwise AND (only works for power-of-two!)
        return hash_value & self.mask  # Equivalent to hash_value % capacity

    def _resize(self, new_capacity: int):
        """Double capacity while maintaining power-of-two property."""
        assert new_capacity & (new_capacity - 1) == 0, "Must be power of two"
        # ... resize logic
        self.capacity = new_capacity
        self.mask = new_capacity - 1


def round_up_to_power_of_two(n: int) -> int:
    """
    Round n up to the next power of two.
    Useful for buffer allocation, hash table sizing, etc.
    """
    if n <= 0:
        return 1
    if n & (n - 1) == 0:  # Already a power of two
        return n

    # Set all bits below the highest bit, then add 1
    n -= 1
    n |= n >> 1
    n |= n >> 2
    n |= n >> 4
    n |= n >> 8
    n |= n >> 16
    return n + 1
```

The n & (n-1) pattern is a gateway to several related techniques. Understanding these connections builds a cohesive mental model of bit manipulation.
| Pattern | Expression | What It Does | Example |
|---|---|---|---|
| Clear rightmost set bit | n & (n-1) | Removes the lowest 1 bit | 12 (1100) → 8 (1000) |
| Isolate rightmost set bit | n & (-n) | Extracts just the lowest 1 bit | 12 (1100) → 4 (0100) |
| Set rightmost zero bit | n \| (n+1) | Sets the lowest 0 bit to 1 | 12 (1100) → 13 (1101) |
| Clear rightmost consecutive 1s | n & (n+1) | Clears trailing 1s | 7 (0111) → 0 (0000) |
| Count trailing zeros | log₂(n & -n) | Position of rightmost 1 bit | 12 (1100) → 2 |
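To make the table concrete, here is a short sketch that exercises each identity on n = 12 (and n = 7 for the trailing-1s case):

```python
n = 12  # 0b1100

print(bin(n & (n - 1)))           # 0b1000 -> rightmost set bit cleared
print(bin(n & -n))                # 0b100  -> rightmost set bit isolated
print(bin(n | (n + 1)))           # 0b1101 -> rightmost zero bit set
print(bin(7 & (7 + 1)))           # 0b0    -> trailing 1s of 7 (0b0111) cleared
print((n & -n).bit_length() - 1)  # 2      -> position of the rightmost set bit
```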
These patterns form building blocks for more complex bit manipulation algorithms. Brian Kernighan's bit counting algorithm (covered in the next page) is built directly on the n & (n-1) pattern. Master these fundamentals, and advanced techniques become natural extensions.
We've thoroughly explored one of the most elegant expressions in bit manipulation. Let's consolidate the key insights:
- A positive integer is a power of two if and only if it has exactly one set bit.
- Subtracting 1 clears the rightmost set bit and sets every bit to its right, so n & (n-1) removes the lowest 1.
- For a power of two, nothing survives the AND and the result is 0; any other positive integer keeps at least one higher bit.
- Zero passes the raw test, so the complete check is n > 0 && (n & (n-1)) == 0.
- The same power-of-two structure enables fast modulo (hash & (size - 1)), capacity rounding, and related rightmost-bit tricks.
What's Next:
Now that you understand the power-of-two check, we're ready to explore counting set bits (popcount). The next page covers multiple approaches to counting how many 1 bits exist in a number, culminating in Brian Kernighan's elegant algorithm—which uses exactly the n & (n-1) pattern we've mastered here.
You now deeply understand one of the most fundamental bit manipulation patterns. The expression n > 0 && (n & (n-1)) == 0 will serve you in countless scenarios—from interview problems to systems programming. Next, we'll extend this foundation to count all set bits in a number.