Before we dive into sophisticated bit manipulation techniques, we must master the most elemental operation in the bit manipulation toolkit: determining whether a number is even or odd. While this might seem trivially simple—after all, we can use the modulo operator n % 2—the bitwise approach reveals the fundamental insight that makes all bit manipulation possible: integers are just sequences of bits, and those bits have meaning.
This page isn't just about replacing modulo with a faster operation. It's about developing the mental model that transforms how you see numbers. When you look at an integer and instantly visualize its binary representation, you've unlocked the key to dozens of optimization techniques that separate competent programmers from exceptional ones.
By the end of this page, you will understand why the last bit of any integer determines its parity, how to use the AND operator for this check, why this approach is faster than modulo, and how this simple technique forms the foundation for all bit manipulation patterns.
To understand why the least significant bit (LSB) determines parity, we need to revisit how positional notation works in binary.
Every integer in base 2 is expressed as a sum of powers of 2:
$$n = b_k \cdot 2^k + b_{k-1} \cdot 2^{k-1} + \ldots + b_1 \cdot 2^1 + b_0 \cdot 2^0$$
Where each $b_i$ is either 0 or 1.
The critical insight: Every power of 2 greater than $2^0$ is even: $2^1 = 2$, $2^2 = 4$, $2^3 = 8$, and so on.
Only $2^0 = 1$ is odd.
Therefore: The sum of any combination of $2^1, 2^2, 2^3, \ldots$ will always be even (sum of even numbers is even). The only term that can make the total odd is $b_0 \cdot 2^0 = b_0$.
This means:
The least significant bit alone determines whether a number is even or odd.
| Decimal | Binary | LSB (b₀) | Parity |
|---|---|---|---|
| 0 | 0000 | 0 | Even |
| 1 | 0001 | 1 | Odd |
| 2 | 0010 | 0 | Even |
| 3 | 0011 | 1 | Odd |
| 4 | 0100 | 0 | Even |
| 5 | 0101 | 1 | Odd |
| 6 | 0110 | 0 | Even |
| 7 | 0111 | 1 | Odd |
| 8 | 1000 | 0 | Even |
| 15 | 1111 | 1 | Odd |
| 42 | 101010 | 0 | Even |
| 127 | 1111111 | 1 | Odd |
Look at the binary representations above. Notice the alternating pattern in the rightmost bit: 0, 1, 0, 1, 0, 1... This is no coincidence—it's the mathematical structure of counting in binary. Every increment toggles the LSB, and every other number has it set.
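This alternation is easy to confirm for yourself. A minimal sketch in Python that prints the binary form and LSB of the first eight integers:

```python
# Print the binary form, LSB, and parity of the first eight integers.
# The rightmost bit alternates 0, 1, 0, 1, ... with every increment.
for n in range(8):
    lsb = n & 1  # mask off everything except the least significant bit
    parity = "odd" if lsb else "even"
    print(f"{n} = {n:04b}  LSB = {lsb}  ({parity})")
```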
Now that we understand why the LSB determines parity, we need a way to extract just that bit. This is where the bitwise AND operator becomes our precision tool.
Recall how AND works:
- 1 AND 1 = 1
- 1 AND 0 = 0
- 0 AND 1 = 0
- 0 AND 0 = 0

AND only produces 1 when both inputs are 1.
The Masking Technique:
To extract the LSB, we AND the number with 1 (binary: 0001):
```
     n:  b_k  b_{k-1}  ...  b_2  b_1  b_0
   & 1:    0        0  ...    0    0    1
        ─────────────────────────────────
Result:    0        0  ...    0    0  b_0
```
Every bit position except the last is ANDed with 0, producing 0. Only the last bit b_0 is ANDed with 1, preserving its value.
The result:
- If b_0 = 1, the result is 1 (truthy in most languages).
- If b_0 = 0, the result is 0 (falsy in most languages).
```python
def is_even(n: int) -> bool:
    """
    Check if a number is even using bitwise AND.

    The expression (n & 1) extracts the least significant bit.
    - If LSB is 0, the number is even → return True
    - If LSB is 1, the number is odd → return False

    Time Complexity: O(1) - single bitwise operation
    Space Complexity: O(1) - no additional memory
    """
    return (n & 1) == 0


def is_odd(n: int) -> bool:
    """
    Check if a number is odd using bitwise AND.

    The expression (n & 1) directly gives us the parity:
    - 1 means odd
    - 0 means even
    """
    return (n & 1) == 1


# Demonstration with various numbers
test_numbers = [0, 1, 2, 3, 4, 5, 42, -1, -2, -7, -8, 2**31, 2**31 - 1]

print("Number | Binary (last 8 bits) | n & 1 | Even? | Odd?")
print("-" * 60)
for num in test_numbers:
    binary = format(num & 0xFF, '08b')  # Show last 8 bits
    result = num & 1
    print(f"{num:6} | {binary} | {result} | {is_even(num)!s:5} | {is_odd(num)}")
```

You might wonder: why bother with n & 1 when n % 2 does the same thing and is more readable? This is a valid question, and the answer involves understanding what happens at the hardware level.
The Modulo Operation:
The modulo operator % computes the remainder after division. For n % 2, the processor must in general perform a division (or the compiler must recognize the power-of-two special case), and the sign of the result for negative operands depends on the language's definition of remainder.
The Bitwise AND Operation:
The & operator performs a parallel bit-by-bit comparison: a single ALU instruction with no division involved and no sign-dependent behavior, even in languages that compute -7 % 2 differently.
Modern optimizing compilers often convert n % 2 to n & 1 automatically. However, this optimization isn't guaranteed across all compilers, languages, or optimization levels. Using n & 1 explicitly ensures optimal code regardless of compiler behavior, and communicates your intent to other developers: 'I'm doing bit manipulation here.'
The Negative Number Consideration:
One subtle but important difference: modulo behavior varies for negative numbers.
| Expression | Python | JavaScript | Java | C++ |
|---|---|---|---|---|
| -7 % 2 | 1 | -1 | -1 | -1 |
| -7 & 1 | 1 | 1 | 1 | 1 |
In two's complement representation (used by virtually all modern systems), negative odd numbers still have their LSB set to 1. The bitwise approach gives consistent results across languages, while modulo behavior is language-dependent.
For parity checking, n & 1 is both faster and more predictable.
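The table's Python column can be verified directly; a small sketch contrasting the two operators on negative inputs:

```python
# Python's % takes the sign of the divisor, so -7 % 2 is 1 here,
# while the same expression yields -1 in C, Java, and JavaScript.
print(-7 % 2)  # 1 (Python-specific behavior)
print(-7 & 1)  # 1 (consistent across two's-complement languages)


# The bitwise form therefore makes a portable parity test:
def is_odd(n: int) -> bool:
    return (n & 1) == 1


assert is_odd(-7) and is_odd(7) and not is_odd(-6)
```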
To fully appreciate why n & 1 works for negative numbers, we need to understand two's complement representation—the near-universal method for representing signed integers in binary.
How Two's Complement Works:
For an n-bit signed integer:
- Non-negative values are stored in ordinary binary.
- To negate a value, invert every bit and add 1.
- The most significant bit acts as the sign bit, carrying weight $-2^{n-1}$, so the representable range is $-2^{n-1}$ to $2^{n-1} - 1$.
Example with 8-bit integers:
```
+7 = 00000111
Step 1: Invert all bits → 11111000
Step 2: Add 1           → 11111001 = -7
```
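These two steps can be replayed in Python by masking to 8 bits (Python integers are unbounded, so & 0xFF simulates the 8-bit register):

```python
x = 7
inverted = ~x & 0xFF             # Step 1: invert all bits -> 0b11111000
negated = (inverted + 1) & 0xFF  # Step 2: add 1           -> 0b11111001

print(f"{inverted:08b}")  # 11111000
print(f"{negated:08b}")   # 11111001

# The result matches the low 8 bits of Python's own -7
assert negated == (-7) & 0xFF
```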
Why the LSB Still Indicates Parity:
Let's trace through the logic:

If n is odd, its LSB is 1:
1. Inverting the bits makes the LSB 0.
2. Adding 1 sets the LSB back to 1.
3. Result: -n (for odd n) ends in 1.

If n is even, its LSB is 0:
1. Inverting the bits makes the LSB 1.
2. Adding 1 causes a carry, so the LSB becomes 0.
3. Result: -n (for even n) ends in 0.
The parity is preserved through negation!
| Decimal | Binary (8-bit) | LSB | Parity |
|---|---|---|---|
| +7 | 00000111 | 1 | Odd |
| -7 | 11111001 | 1 | Odd |
| +6 | 00000110 | 0 | Even |
| -6 | 11111010 | 0 | Even |
| +1 | 00000001 | 1 | Odd |
| -1 | 11111111 | 1 | Odd |
| +4 | 00000100 | 0 | Even |
| -4 | 11111100 | 0 | Even |
This isn't coincidence—it's mathematically guaranteed. Negation in two's complement is equivalent to computing -n = ~n + 1. The inversion ~n flips the LSB, and adding 1 flips it back (either directly, or through the carry it triggers), so the LSB of -n always equals the LSB of n. The parity-preserving property is inherent to two's complement arithmetic.
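A quick brute-force check of this guarantee over a symmetric range:

```python
# Negation preserves the least significant bit: n and -n always
# share the same parity, so (n & 1) == (-n & 1) for every integer.
for n in range(-1000, 1001):
    assert (n & 1) == (-n & 1), f"parity mismatch at n = {n}"
print("LSB of n and -n agrees for all tested values")
```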
The even/odd check might seem basic, but it appears in numerous practical scenarios. Understanding when to apply it efficiently is part of developing strong algorithmic intuition.
```python
# Example 1: Alternating pattern generation
def generate_checkerboard(size: int) -> list[list[str]]:
    """
    Generate a checkerboard pattern using bitwise parity check.
    Cell color depends on whether (row + col) is even or odd.
    """
    board = []
    for row in range(size):
        board_row = []
        for col in range(size):
            # (row + col) & 1 gives alternating pattern
            if (row + col) & 1:
                board_row.append("⬛")  # Odd sum → black
            else:
                board_row.append("⬜")  # Even sum → white
        board.append(board_row)
    return board


# Display 5x5 checkerboard
for row in generate_checkerboard(5):
    print(" ".join(row))
print()


# Example 2: Median calculation with parity check
def find_median(sorted_arr: list[int]) -> float:
    """
    Find the median of a sorted array.
    Behavior differs based on array length parity.
    """
    n = len(sorted_arr)
    mid = n // 2
    if n & 1:  # Odd length: single middle element
        return float(sorted_arr[mid])
    else:  # Even length: average of two middle elements
        return (sorted_arr[mid - 1] + sorted_arr[mid]) / 2.0


# Test
odd_arr = [1, 3, 5, 7, 9]
even_arr = [1, 3, 5, 7]
print(f"Median of {odd_arr}: {find_median(odd_arr)}")
print(f"Median of {even_arr}: {find_median(even_arr)}")
print()


# Example 3: Efficient modular exponentiation (preview)
def power_mod(base: int, exp: int, mod: int) -> int:
    """
    Compute (base^exp) % mod using binary exponentiation.
    Uses parity check to decide multiply vs square.
    """
    result = 1
    base = base % mod
    while exp > 0:
        if exp & 1:  # If exp is odd, multiply result
            result = (result * base) % mod
        exp >>= 1  # Divide exp by 2
        base = (base * base) % mod
    return result


# Test: 2^10 mod 1000 = 1024 mod 1000 = 24
print(f"2^10 mod 1000 = {power_mod(2, 10, 1000)}")
# Test: 3^13 mod 7 = 1594323 mod 7 = 3
print(f"3^13 mod 7 = {power_mod(3, 13, 7)}")
```

While the even/odd check is straightforward, there are subtle issues that can trip up developers. Understanding these edge cases deepens your mastery.
- Zero is even: 0 & 1 = 0, so zero is correctly identified as even. This is mathematically correct but sometimes surprises developers.
- Floating-point operands: in languages that coerce floats for bitwise operations (such as JavaScript), 3.7 & 1 gives 1 because the number is truncated to an integer first; Python instead raises a TypeError.
- Logical vs. bitwise AND: don't confuse n & 1 (bitwise AND, gives 0 or 1) with n && 1 (logical AND, different behavior).
- Operator precedence: when writing n & 1 == 0, remember that == has higher precedence than & in many languages (C, C++, Java). Use parentheses: (n & 1) == 0.
```python
# Edge Case 1: Zero is even
print(f"0 & 1 = {0 & 1}")  # Output: 0
print(f"Is 0 even? {(0 & 1) == 0}")  # Output: True

# Edge Case 2: Very large numbers (Python handles gracefully)
huge = 2**1000 + 1
print(f"(2^1000 + 1) is odd? {(huge & 1) == 1}")  # Output: True

huge_even = 2**1000
print(f"(2^1000) is even? {(huge_even & 1) == 0}")  # Output: True

# Edge Case 3: Operator precedence trap
n = 5
# WRONG in C/C++/Java: n & 1 == 0 evaluates as n & (1 == 0),
# which is n & 0 = 0. Python's precedence avoids this trap,
# but C/C++/Java do not!
#
# In C/C++/Java, always write: (n & 1) == 0
# Not: n & 1 == 0

# Demonstration of correct usage
for num in [0, 1, 2, -1, -2, 2**100, 2**100 + 1]:
    is_even = (num & 1) == 0
    print(f"{num} is {'even' if is_even else 'odd'}")
```

JavaScript converts numbers to 32-bit signed integers for bitwise operations. For numbers larger than 2³¹ - 1, results may be unexpected. Use BigInt for reliable bit operations on large numbers: (BigInt(n) & BigInt(1)) === BigInt(0).
While we've discussed the theoretical performance advantages of bitwise AND over modulo, let's examine actual benchmarks. In practice, modern compilers often optimize n % 2 to n & 1, but understanding the raw difference helps appreciate the optimization.
What to expect: in CPython, loop and interpreter overhead dominate, so the two approaches typically finish within a few percent of each other; in unoptimized compiled code, the bitwise version avoids a division and shows a clearer advantage.
```python
import timeit
import random

# Generate test data
test_data = [random.randint(-10**9, 10**9) for _ in range(10_000_000)]


def modulo_check():
    """Count even numbers using modulo."""
    count = 0
    for n in test_data:
        if n % 2 == 0:
            count += 1
    return count


def bitwise_check():
    """Count even numbers using bitwise AND."""
    count = 0
    for n in test_data:
        if (n & 1) == 0:
            count += 1
    return count


# Verify both give same result
assert modulo_check() == bitwise_check()

# Benchmark
print("Benchmarking 10 million even/odd checks...")
print()

modulo_time = timeit.timeit(modulo_check, number=3) / 3
print(f"Modulo (n % 2 == 0): {modulo_time:.4f} seconds")

bitwise_time = timeit.timeit(bitwise_check, number=3) / 3
print(f"Bitwise (n & 1 == 0): {bitwise_time:.4f} seconds")

print()
speedup = modulo_time / bitwise_time
print(f"Speedup: {speedup:.2f}x")
print()
print("Note: In Python, the difference may be small due to interpreter")
print("overhead. In compiled languages (C/C++), the difference is more")
print("pronounced in unoptimized builds.")
```

For a single even/odd check, the performance difference is negligible. But in scenarios like image processing (checking millions of pixel coordinates), game physics (per-frame calculations across thousands of objects), or data analysis (processing billions of records), these micro-optimizations compound significantly.
The even/odd check using n & 1 is the gateway technique to bit manipulation. Once you internalize this pattern—using AND with a mask to extract specific bits—you've learned the conceptual foundation for:
Extracting bit ranges:

```python
n & 0b1111  # Extract the last 4 bits
n & 0xFF    # Extract the last 8 bits (one byte)
```

Checking specific bit positions:

```python
n & (1 << k)  # Check if bit at position k is set
```

Creating bit masks dynamically:

```python
(1 << k) - 1  # Mask with k rightmost bits set
```
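A runnable sketch of all three patterns on an arbitrary example value:

```python
n = 0b10110110  # 182, chosen arbitrarily for illustration

# Extract bit ranges with fixed masks
print(bin(n & 0b1111))  # last 4 bits
print(bin(n & 0xFF))    # last 8 bits (the whole byte here)

# Check whether the bit at position k is set
k = 4
print(bool(n & (1 << k)))  # bit 4 of 10110110 is 1 -> True

# Build a mask with k rightmost bits set, then apply it
mask = (1 << 5) - 1     # 0b11111
print(bin(n & mask))    # last 5 bits
```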
Every technique in this module builds on this foundation. By mastering the simple case of extracting the LSB, you've taken the first step toward fluency in bit manipulation.
Key insight: The expression n & 1 is not just a trick—it's the application of the fundamental principle that AND + mask = bit extraction. This principle, once understood, unlocks an entire category of optimization techniques.
You should now be able to look at any integer and instantly visualize: 'Is the rightmost bit a 0 or a 1?' If you can do this, you've developed the mental model needed for the techniques in this module. If not, spend more time with the binary representations until they become second nature.
We've explored the simplest yet most fundamental bit manipulation technique in depth. Let's consolidate what we've learned:
- n & 1 isolates the least significant bit, giving 0 for even and 1 for odd numbers.
- Always write (n & 1) == 0 with parentheses to avoid operator precedence issues.

What's Next:
In the next page, we'll learn to extract the rightmost set bit from a number—a slightly more complex operation that introduces the interaction between a number and its two's complement negation. This technique is fundamental to many efficient algorithms including Brian Kernighan's bit counting algorithm.
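As a brief preview (a sketch, not the full treatment of the next page), the expression n & -n isolates the rightmost set bit, precisely because of the two's-complement negation traced above:

```python
# n & -n keeps only the lowest set bit: negation leaves the
# rightmost 1 and everything below it unchanged while flipping
# every bit above it, so AND-ing the two cancels everything else.
for n in [12, 10, 7, 8]:
    print(f"{n:2} = {n:04b} -> rightmost set bit = {n & -n}")
```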
You now understand the fundamental bit manipulation technique for parity checking. This foundation—visualizing numbers in binary and using AND with masks to extract bits—will serve you throughout the rest of this module and beyond. Proceed to the next page to learn about extracting the rightmost set bit.