Here's a seemingly impossible puzzle: add two integers without using the + operator or any other arithmetic operator. No addition, no subtraction, no multiplication, no division—just pure bitwise operations like AND, OR, XOR, and shifts.
This isn't merely a coding puzzle. It reveals a profound truth about computation: all arithmetic emerges from logic. At the deepest level of every CPU, addition is performed using nothing but logic gates. There are no 'addition circuits'—only combinations of AND, OR, and NOT gates arranged to produce the sum.
Understanding this connection bridges the gap between software abstraction and hardware reality, giving you insight into how billions of transistors conspire to do something as 'simple' as 2 + 2.
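To make the "only AND, OR, and NOT" claim concrete, here is a minimal sketch (our own illustration, not from the hardware itself) showing that even XOR, the workhorse of addition, can be built from those three gates when operating on single bits:

```typescript
// Single-bit gates built from AND, OR, NOT only.
// Assumes inputs are single bits (0 or 1).
const not1 = (x: number): number => ~x & 1;           // NOT, masked to one bit
const and1 = (a: number, b: number): number => a & b; // AND
const or1 = (a: number, b: number): number => a | b;  // OR

// XOR = (a OR b) AND NOT (a AND b)
const xor1 = (a: number, b: number): number =>
  and1(or1(a, b), not1(and1(a, b)));

console.log(xor1(0, 0), xor1(0, 1), xor1(1, 0), xor1(1, 1)); // 0 1 1 0
```

Every bitwise trick in this page ultimately bottoms out in compositions like this one.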
By the end of this page, you will understand how addition works at the bit level, implement addition using only XOR and AND, handle negative numbers correctly, and appreciate why CPUs can add billions of numbers per second despite using 'slow' logic gates.
Before implementing addition with bitwise operators, we need to deeply understand how binary addition works at the mechanical level. Just like decimal addition but simpler—there are only two digits!
Single-bit addition truth table:
| Bit A | Bit B | Sum | Carry |
|---|---|---|---|
| 0 | 0 | 0 | 0 |
| 0 | 1 | 1 | 0 |
| 1 | 0 | 1 | 0 |
| 1 | 1 | 0 | 1 |
Key observations:
The Sum bit is 1 when exactly one input is 1, otherwise 0. This is precisely the XOR operation!
Sum = A ⊕ B

The Carry bit is 1 only when both inputs are 1. This is precisely the AND operation!

Carry = A & B

This is no coincidence—it's why these operations exist. XOR and AND were designed (or discovered) to represent exactly these arithmetic primitives.
```typescript
// A half adder adds two bits and produces sum and carry
function halfAdder(a: number, b: number): { sum: number; carry: number } {
  return {
    sum: a ^ b,   // XOR gives the sum bit
    carry: a & b  // AND gives the carry bit
  };
}

// Verify against truth table
console.log("0 + 0:", halfAdder(0, 0)); // { sum: 0, carry: 0 }
console.log("0 + 1:", halfAdder(0, 1)); // { sum: 1, carry: 0 }
console.log("1 + 0:", halfAdder(1, 0)); // { sum: 1, carry: 0 }
console.log("1 + 1:", halfAdder(1, 1)); // { sum: 0, carry: 1 }

// The half adder is the fundamental building block!
// But it can't handle incoming carries from previous positions...
```

It's called a 'half' adder because it only handles half the problem—adding two bits. When adding multi-bit numbers, we also need to handle the carry from the previous position, which requires a 'full' adder that takes three inputs: A, B, and carry-in.
When adding multi-bit numbers, we need to handle three inputs at each position: bit A, bit B, and the carry-in from the previous position. This is what a full adder does.
| A | B | Carry In | Sum | Carry Out |
|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 |
| 0 | 0 | 1 | 1 | 0 |
| 0 | 1 | 0 | 1 | 0 |
| 0 | 1 | 1 | 0 | 1 |
| 1 | 0 | 0 | 1 | 0 |
| 1 | 0 | 1 | 0 | 1 |
| 1 | 1 | 0 | 0 | 1 |
| 1 | 1 | 1 | 1 | 1 |
Deriving the formulas:
Sum: The sum is 1 when an odd number of inputs are 1. This is:
Sum = A ⊕ B ⊕ Carry_in

Carry Out: A carry is produced when at least two inputs are 1:

Carry_out = (A & B) | (A & Carry_in) | (B & Carry_in)

Alternatively, using a cascade of half adders:

Carry_out = (A & B) | ((A ⊕ B) & Carry_in)
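As a quick sanity check, a small sketch (our own, not part of the original derivation) verifying that the majority form and the half-adder cascade form of carry-out agree on all eight single-bit input combinations:

```typescript
// Exhaustively compare the two carry-out formulas over all 8 cases.
for (let a = 0; a <= 1; a++) {
  for (let b = 0; b <= 1; b++) {
    for (let cin = 0; cin <= 1; cin++) {
      const majority = (a & b) | (a & cin) | (b & cin);
      const cascade = (a & b) | ((a ^ b) & cin);
      console.assert(majority === cascade,
        `mismatch at a=${a}, b=${b}, cin=${cin}`);
    }
  }
}
console.log("Both carry-out formulas agree on all 8 cases");
```

The equivalence holds because `(A ^ B) & Carry_in` covers exactly the "propagate" cases that `(A & Carry_in) | (B & Carry_in)` covers, minus the `A & B` case already captured by the first term.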
```typescript
// A full adder handles three inputs: two bits plus carry-in
function fullAdder(a: number, b: number, carryIn: number): { sum: number; carryOut: number } {
  // First half adder: add a and b
  const partialSum = a ^ b;
  const carry1 = a & b;

  // Second half adder: add partial sum and carry-in
  const sum = partialSum ^ carryIn;
  const carry2 = partialSum & carryIn;

  // Carry out if either half adder produced a carry
  const carryOut = carry1 | carry2;

  return { sum, carryOut };
}

// Alternative single-expression version:
function fullAdderCompact(a: number, b: number, cin: number): { sum: number; cout: number } {
  return {
    sum: a ^ b ^ cin,
    cout: (a & b) | ((a ^ b) & cin)
  };
}

// Verify a few cases:
console.log("1 + 1 + 1:", fullAdder(1, 1, 1)); // { sum: 1, carryOut: 1 } (3 in binary: 11)
console.log("1 + 0 + 1:", fullAdder(1, 0, 1)); // { sum: 0, carryOut: 1 } (2 in binary: 10)
console.log("0 + 1 + 0:", fullAdder(0, 1, 0)); // { sum: 1, carryOut: 0 } (1 in binary: 01)
```

A full adder is built from two half adders plus an OR gate:
This is exactly how hardware adders are constructed from logic gates!
We can now add multi-bit numbers by chaining full adders. The carry-out of each position becomes the carry-in of the next position, 'rippling' from LSB to MSB.
```typescript
function rippleCarryAdd(a: number, b: number, numBits: number = 32): number {
  let result = 0;
  let carry = 0;

  for (let i = 0; i < numBits; i++) {
    // Extract bit i from each number
    const bitA = (a >>> i) & 1;
    const bitB = (b >>> i) & 1;

    // Full adder for this position
    const sum = bitA ^ bitB ^ carry;
    const newCarry = (bitA & bitB) | ((bitA ^ bitB) & carry);

    // Place the sum bit in the result
    if (sum === 1) {
      result |= (1 << i);
    }

    carry = newCarry;
  }

  return result >>> 0; // Convert to unsigned
}

// Test cases
console.log(rippleCarryAdd(5, 3, 8));   // 8 (0101 + 0011 = 1000)
console.log(rippleCarryAdd(7, 1, 8));   // 8 (0111 + 0001 = 1000)
console.log(rippleCarryAdd(15, 1, 8));  // 16 (01111 + 00001 = 10000)
console.log(rippleCarryAdd(255, 1, 8)); // 0 (overflow wraps in 8 bits)

// Detailed trace for 5 + 3:
// Bit 0: A=1, B=1, C=0 → Sum=0, Carry=1
// Bit 1: A=0, B=1, C=1 → Sum=0, Carry=1
// Bit 2: A=1, B=0, C=1 → Sum=0, Carry=1
// Bit 3: A=0, B=0, C=1 → Sum=1, Carry=0
// Bit 4-7: All zeros
// Result: 00001000 = 8 ✓
```

Trace visualization for 5 + 3:
| Bit Position | A | B | Carry In | Sum | Carry Out |
|---|---|---|---|---|---|
| 0 (LSB) | 1 | 1 | 0 | 0 | 1 |
| 1 | 0 | 1 | 1 | 0 | 1 |
| 2 | 1 | 0 | 1 | 0 | 1 |
| 3 | 0 | 0 | 1 | 1 | 0 |
| 4 | 0 | 0 | 0 | 0 | 0 |
| 5 | 0 | 0 | 0 | 0 | 0 |
| 6 | 0 | 0 | 0 | 0 | 0 |
| 7 (MSB) | 0 | 0 | 0 | 0 | 0 |
The ripple carry adder has O(n) delay where n is the bit width. Each bit position must wait for the carry from the previous position before computing. For 64-bit addition, the carry might ripple through all 64 positions!
Real CPUs use carry-lookahead or carry-select adders to reduce this to O(log n) delay, computing carries in parallel.
Instead of processing bit by bit, we can use a brilliant insight: apply XOR and AND to all bits simultaneously, then iterate until there are no more carries!
The key insight:
• a ^ b computes the sum of all bits, ignoring carries
• a & b identifies all positions that will produce carries
• (a & b) << 1 moves each carry to the position it affects

We then add the partial sum and the shifted carries. If there are still carries, we repeat. Eventually, carries become zero and we're done!
```typescript
function addWithoutPlus(a: number, b: number): number {
  while (b !== 0) {
    // Step 1: Compute sum ignoring carries (XOR)
    const sumWithoutCarry = a ^ b;

    // Step 2: Compute the carries (AND) and shift left
    const carry = (a & b) << 1;

    // Step 3: Add sum and carry (by iterating)
    a = sumWithoutCarry;
    b = carry;
  }
  return a;
}

// Trace for 5 + 3:
// Initial: a = 0101, b = 0011
//
// Iteration 1:
//   sumWithoutCarry = 0101 ^ 0011 = 0110
//   carry = (0101 & 0011) << 1 = 0001 << 1 = 0010
//   a = 0110, b = 0010
//
// Iteration 2:
//   sumWithoutCarry = 0110 ^ 0010 = 0100
//   carry = (0110 & 0010) << 1 = 0010 << 1 = 0100
//   a = 0100, b = 0100
//
// Iteration 3:
//   sumWithoutCarry = 0100 ^ 0100 = 0000
//   carry = (0100 & 0100) << 1 = 0100 << 1 = 1000
//   a = 0000, b = 1000
//
// Iteration 4:
//   sumWithoutCarry = 0000 ^ 1000 = 1000
//   carry = (0000 & 1000) << 1 = 0000 << 1 = 0000
//   a = 1000, b = 0000
//
// b is now 0, loop exits
// Return a = 1000 = 8 ✓

console.log(addWithoutPlus(5, 3));    // 8
console.log(addWithoutPlus(7, 8));    // 15
console.log(addWithoutPlus(100, 23)); // 123
```

Each iteration pushes the carry at least one bit position to the left: the new carry `(a & b) << 1` always has strictly more trailing zeros than the old `b`, so the carry must eventually shift out entirely. In the worst case (a long carry chain, like adding 1 to 0111...1), we need O(n) iterations for n bits, but typically far fewer. The magic is that all bit positions are processed in parallel within each iteration!
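To make the iteration-count claim concrete, here is an instrumented variant of the iterative algorithm (the name `addCounting` and the test values are ours, chosen for illustration):

```typescript
// Same algorithm as the iterative adder, instrumented to count loop iterations.
function addCounting(a: number, b: number): { result: number; iterations: number } {
  let iterations = 0;
  while (b !== 0) {
    const sum = a ^ b;
    const carry = (a & b) << 1;
    a = sum;
    b = carry;
    iterations++;
  }
  return { result: a, iterations };
}

console.log(addCounting(5, 3));    // { result: 8, iterations: 4 }
console.log(addCounting(0xff, 1)); // { result: 256, iterations: 9 } (long carry chain)
console.log(addCounting(8, 4));    // { result: 12, iterations: 1 } (no carries at all)
```

Notice that when the operands share no set bits (like 8 and 4), a single XOR finishes the job, while a full carry chain (255 + 1) takes one iteration per bit of propagation.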
```typescript
function addRecursive(a: number, b: number): number {
  // Base case: no carries left
  if (b === 0) return a;

  // Recursive case: add the partial sum and shifted carries
  return addRecursive(a ^ b, (a & b) << 1);
}

console.log(addRecursive(5, 3));   // 8
console.log(addRecursive(42, 58)); // 100

// The recursive version is elegant and clearly shows the invariant:
// add(a, b) = add(a ^ b, (a & b) << 1) when b ≠ 0
// add(a, 0) = a (base case)
```

The algorithm works correctly for negative numbers too, thanks to two's complement representation. In two's complement, negative numbers are represented such that the standard addition algorithm works without modification!
Key two's complement properties:

• The most significant bit acts as the sign bit (1 means negative)
• Negation is "invert all bits, then add 1": -x = ~x + 1
• Signed and unsigned addition use the exact same bit-level algorithm; only the interpretation of the result differs
```typescript
// In JavaScript, bitwise operations use 32-bit signed integers
function addWithNegatives(a: number, b: number): number {
  // JavaScript's bitwise ops work on 32-bit signed ints
  a = a | 0; // Convert to 32-bit signed integer
  b = b | 0;

  while (b !== 0) {
    const sumWithoutCarry = a ^ b;
    const carry = (a & b) << 1;
    a = sumWithoutCarry;
    b = carry;
  }
  return a; // Result is 32-bit signed integer
}

// Positive + Positive
console.log(addWithNegatives(5, 3));   // 8

// Negative + Positive
console.log(addWithNegatives(-5, 3));  // -2

// Positive + Negative
console.log(addWithNegatives(5, -3));  // 2

// Negative + Negative
console.log(addWithNegatives(-5, -3)); // -8

// Edge cases
console.log(addWithNegatives(0, 0));   // 0
console.log(addWithNegatives(-1, 1));  // 0

// Let's trace -5 + 3:
// -5 in 32-bit two's complement: 11111111_11111111_11111111_11111011
//  3 in 32-bit two's complement: 00000000_00000000_00000000_00000011
//
// XOR:        11111111_11111111_11111111_11111000 (partial sum)
// AND:        00000000_00000000_00000000_00000011 (carry positions)
// Carry << 1: 00000000_00000000_00000000_00000110
//
// ... continues until carry is 0 ...
//
// Result: 11111111_11111111_11111111_11111110 = -2 ✓
```

Different languages handle integers differently:
• JavaScript: Bitwise ops coerce operands to 32-bit signed integers, so the loop always terminates, but inputs outside the 32-bit range are silently truncated
• Python: Integers are arbitrary precision; negative numbers can cause infinite loops without masking!
• C/C++/Java: Fixed-width integers (int, long) work correctly, though signed overflow is undefined behavior in C/C++, so unsigned types are safest there
• Rust: Similar to C, with explicit overflow handling options
For Python, explicitly mask to simulate fixed-width integers.
```python
def add_python(a: int, b: int) -> int:
    """Add two integers using only bitwise operations in Python."""
    # Python integers have arbitrary precision, so we need to mask
    # to simulate 32-bit behavior
    MASK = 0xFFFFFFFF     # 32 bits of 1s
    MAX_INT = 0x7FFFFFFF  # Maximum positive 32-bit signed int

    while b != 0:
        # Mask to keep within 32 bits
        sum_without_carry = (a ^ b) & MASK
        carry = ((a & b) << 1) & MASK
        a = sum_without_carry
        b = carry

    # If result is negative (MSB is set), convert to Python negative
    if a > MAX_INT:
        a = ~(a ^ MASK)  # Convert from two's complement

    return a

# Test cases
print(add_python(5, 3))    # 8
print(add_python(-5, 3))   # -2
print(add_python(-5, -3))  # -8
print(add_python(-1, -1))  # -2
```

Once we can add without +, subtraction is straightforward: a - b = a + (-b). To negate a number in two's complement, we invert all bits and add 1:
-b = ~b + 1
But wait—we can't use + for the + 1 part! Fortunately, we just built that.
```typescript
function add(a: number, b: number): number {
  while (b !== 0) {
    const sum = a ^ b;
    const carry = (a & b) << 1;
    a = sum;
    b = carry;
  }
  return a;
}

function negate(x: number): number {
  // Two's complement negation: invert bits and add 1
  return add(~x, 1);
}

function subtract(a: number, b: number): number {
  // a - b = a + (-b)
  return add(a, negate(b));
}

// Test cases
console.log(subtract(10, 3));  // 7
console.log(subtract(5, 8));   // -3
console.log(subtract(-5, -3)); // -2
console.log(subtract(0, 5));   // -5

// Trace for 10 - 3:
// negate(3):
//   ~3     = 11111111_11111111_11111111_11111100
//   ~3 + 1 = 11111111_11111111_11111111_11111101 = -3
// add(10, -3):
//   10     = 00000000_00000000_00000000_00001010
//   -3     = 11111111_11111111_11111111_11111101
//   Result = 00000000_00000000_00000000_00000111 = 7 ✓
```

Multiplication and division can also be implemented using only bitwise operations!
• Multiplication: Use shift-and-add (the same algorithm we learned for pencil-and-paper multiplication)
• Division: Use shift-and-subtract (long division in binary)
These are beyond our current scope, but follow naturally from the same principles.
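As a brief sketch of the shift-and-add idea (our own illustration for non-negative 32-bit values; the name `multiply` is ours, and sign handling is deliberately omitted):

```typescript
// Carry-propagation addition, as built earlier in this page.
function add(a: number, b: number): number {
  while (b !== 0) {
    const sum = a ^ b;
    const carry = (a & b) << 1;
    a = sum;
    b = carry;
  }
  return a;
}

// Shift-and-add multiplication: for each set bit of b, add the
// correspondingly shifted copy of a. Sketch for non-negative inputs.
function multiply(a: number, b: number): number {
  let result = 0;
  while (b !== 0) {
    if (b & 1) {
      result = add(result, a); // this bit of b contributes a shifted copy of a
    }
    a <<= 1;  // the next bit of b is worth twice as much
    b >>>= 1; // examine the next bit
  }
  return result;
}

console.log(multiply(6, 7));   // 42
console.log(multiply(12, 12)); // 144
```

This is exactly pencil-and-paper multiplication in base 2: each partial product is either zero or a shifted copy of the multiplicand.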
Our iterative algorithm takes O(n) iterations in the worst case, which isn't how real CPUs achieve nanosecond additions. Let's explore the hardware solutions that make addition blazingly fast.
```typescript
// Carry Lookahead uses "generate" and "propagate" signals
//
// Generate (G): This bit position will generate a carry regardless of carry-in
//   G[i] = A[i] & B[i]
//
// Propagate (P): This bit position will propagate an incoming carry
//   P[i] = A[i] ^ B[i] (or A[i] | B[i] for a slight variation)
//
// Using these, we can compute carries WITHOUT waiting:
//   C[1] = G[0]
//   C[2] = G[1] | (P[1] & G[0])
//   C[3] = G[2] | (P[2] & G[1]) | (P[2] & P[1] & G[0])
//   C[4] = G[3] | (P[3] & G[2]) | (P[3] & P[2] & G[1]) | (P[3] & P[2] & P[1] & G[0])
//
// These expanded formulas have no sequential dependencies - all can be
// computed in parallel!

interface CarryLookahead {
  generate: number[];  // G[i] for each bit
  propagate: number[]; // P[i] for each bit
  carries: number[];   // C[i] = carry into bit i; C[i+1] is carry out of bit i
}

function computeCarryLookahead(a: number, b: number, bits: number): CarryLookahead {
  const generate: number[] = [];
  const propagate: number[] = [];
  const carries: number[] = [0]; // C[0] = 0 (no input carry)

  // Compute G and P for each bit (independent, so parallel in hardware)
  for (let i = 0; i < bits; i++) {
    const ai = (a >>> i) & 1;
    const bi = (b >>> i) & 1;
    generate[i] = ai & bi;
    propagate[i] = ai ^ bi;
  }

  // Compute carries via the recurrence C[i+1] = G[i] | (P[i] & C[i]).
  // This loop is sequential; hardware flattens it into the expanded
  // formulas above, which have no dependency chain.
  for (let i = 0; i < bits; i++) {
    carries[i + 1] = generate[i] | (propagate[i] & carries[i]);
  }

  return { generate, propagate, carries };
}

// With carries precomputed, the sum is trivial: S[i] = P[i] ^ C[i]
function carryLookaheadAdd(a: number, b: number, bits: number = 8): number {
  const { propagate, carries } = computeCarryLookahead(a, b, bits);

  let result = 0;
  for (let i = 0; i < bits; i++) {
    const sum = propagate[i] ^ carries[i];
    result |= (sum << i);
  }
  return result;
}

console.log(carryLookaheadAdd(5, 3, 8));   // 8
console.log(carryLookaheadAdd(127, 1, 8)); // 128 (sets the MSB; overflow only if read as signed 8-bit)
```

In hardware, the carry lookahead formulas are implemented as parallel logic gates, all computing simultaneously. This is fundamentally different from our software loop—hardware can evaluate G[0], G[1], ... G[n] all at once because each is an independent AND gate.
A 64-bit Kogge-Stone adder computes the sum in about 6 gate delays, regardless of carry chain length!
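To illustrate the parallel-prefix idea in software, here is a sketch simulating the Kogge-Stone structure (the function `koggeStoneAdd` is our own naming and a simulation of the hardware organization, not a claim about any specific CPU's circuit):

```typescript
// Software simulation of a Kogge-Stone parallel-prefix adder.
// Each round doubles the distance over which generate/propagate
// information is combined, so n bits need only log2(n) rounds.
function koggeStoneAdd(a: number, b: number, bits: number = 8): number {
  const pInit: number[] = []; // per-bit propagate (A ^ B), kept for the final sum
  let g: number[] = [];       // group generate
  let p: number[] = [];       // group propagate

  for (let i = 0; i < bits; i++) {
    const ai = (a >>> i) & 1;
    const bi = (b >>> i) & 1;
    pInit.push(ai ^ bi);
    p.push(ai ^ bi);
    g.push(ai & bi);
  }

  // log2(bits) prefix rounds; within a round, every position i is
  // independent - exactly what hardware evaluates simultaneously
  for (let d = 1; d < bits; d <<= 1) {
    const gNext = g.slice();
    const pNext = p.slice();
    for (let i = d; i < bits; i++) {
      gNext[i] = g[i] | (p[i] & g[i - d]); // combine with the group d positions lower
      pNext[i] = p[i] & p[i - d];
    }
    g = gNext;
    p = pNext;
  }

  // After the rounds, g[i] is the carry OUT of bit i
  let result = 0;
  for (let i = 0; i < bits; i++) {
    const carryIn = i === 0 ? 0 : g[i - 1];
    result |= (pInit[i] ^ carryIn) << i;
  }
  return result >>> 0;
}

console.log(koggeStoneAdd(5, 3, 8));    // 8
console.log(koggeStoneAdd(200, 99, 8)); // 43 (299 wraps mod 256)
console.log(koggeStoneAdd(255, 1, 8));  // 0 (full carry chain, wraps)
```

For 8 bits this runs only 3 prefix rounds (d = 1, 2, 4), and for 64 bits it would run 6, which is where the logarithmic gate-delay figure comes from.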
We've built addition from first principles, understanding how arithmetic emerges from pure logic. Let's consolidate the key insights:
• A ^ B is the sum at each bit position without considering carry
• A & B identifies where carries will be generated
• (A & B) << 1 places carries at the correct next position

The Core Algorithm:
```typescript
function add(a: number, b: number): number {
  while (b !== 0) {
    const sum = a ^ b;
    const carry = (a & b) << 1;
    a = sum;
    b = carry;
  }
  return a;
}
```
This elegant algorithm captures the essence of how addition works at the bit level—a fundamental insight that connects software to hardware.
What's next:
With arithmetic operations mastered, we'll explore the Maximum XOR Pair problem—finding two numbers in an array whose XOR is maximum. This problem introduces the trie data structure applied to bit manipulation, a powerful technique for optimizing bit-by-bit decisions.
You now understand how addition works at the deepest level—from single-bit half adders through full adders to complete arithmetic. This knowledge demystifies CPU operation and provides a powerful technique for situations where arithmetic operators are restricted.