Try this in almost any programming language:
console.log(0.1 + 0.2); // Expected: 0.3
// Actual: 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // Expected: true
// Actual: false
This result has shocked countless programmers. It spawned an entire website dedicated to explaining it (0.30000000000000004.com). It's the subject of thousands of Stack Overflow questions, confused blog posts, and jokes about programming languages being "broken."
But it's not a bug. It's not a limitation of JavaScript, Python, Java, or any specific language. It's not even unique to computers—it's a fundamental consequence of representing decimal fractions in binary.
This page provides the definitive explanation. By the end, you won't just understand that 0.1 + 0.2 ≠ 0.3—you'll understand exactly why, at a level that lets you predict, explain, and professionally handle similar situations throughout your career.
By the end of this page, you will trace through the exact binary representation of 0.1, 0.2, and 0.3, understand why 0.1 + 0.2 produces 0.30000000000000004, know how to properly handle decimal arithmetic in your code, and have professional-grade confidence in explaining this phenomenon.
The fundamental issue is that 0.1 cannot be exactly represented in binary floating-point. This isn't a flaw in IEEE 754—it's an inherent limitation of base conversion.
An Analogy: Decimal's Limitations
Consider the fraction 1/3 in decimal:
1/3 = 0.3333333333333... (repeating forever)
No finite decimal can exactly represent 1/3. If we truncate at any point, we introduce error. This isn't a flaw in decimal—it's that 3 doesn't divide evenly into powers of 10.
The Same Problem in Binary
In binary (base 2), the "easy" fractions are those built from negative powers of 2 and their sums: 1/2 = 0.1, 1/4 = 0.01, 1/8 = 0.001, 3/4 = 0.11, and so on.
But 1/10 (decimal 0.1) is:
1/10 = 0.0001100110011001100110011... (binary, repeating forever)
The pattern "0011" repeats infinitely. Just as 1/3 cannot be finitely represented in decimal, 1/10 cannot be finitely represented in binary.
Why 1/10 Repeats in Binary
When converting a decimal to binary, we repeatedly multiply by 2 and track the integer part:
0.1 × 2 = 0.2 → 0
0.2 × 2 = 0.4 → 0
0.4 × 2 = 0.8 → 0
0.8 × 2 = 1.6 → 1 (subtract 1, continue with 0.6)
0.6 × 2 = 1.2 → 1 (subtract 1, continue with 0.2)
0.2 × 2 = 0.4 → 0 ← We're back to 0.2! The cycle repeats.
...
Since we return to 0.2, the binary expansion is the repeating sequence .0001100110011...
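To make the by-hand trace concrete, here is a small Python sketch of the same multiply-by-2 procedure (the function name and digit limit are illustrative choices, not anything standard). It uses exact Fraction arithmetic so the intermediate remainders are not themselves subject to rounding:
from fractions import Fraction

def binary_fraction_digits(value: Fraction, n_digits: int) -> str:
    """Emit the first n_digits binary digits of a fraction in [0, 1)."""
    digits = []
    frac = value
    for _ in range(n_digits):
        frac *= 2                      # shift one binary place to the left
        digits.append("1" if frac >= 1 else "0")
        if frac >= 1:
            frac -= 1                  # keep only the fractional part
        if frac == 0:
            break                      # the expansion terminated exactly
    return "0." + "".join(digits)

print(binary_fraction_digits(Fraction(1, 10), 24))  # 0.000110011001100110011001
print(binary_fraction_digits(Fraction(1, 4), 24))   # 0.01  (terminates: 1/4 is a power of 2)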
| Decimal | Binary | Exact? |
|---|---|---|
| 0.5 | 0.1 | ✓ Yes |
| 0.25 | 0.01 | ✓ Yes |
| 0.125 | 0.001 | ✓ Yes |
| 0.75 | 0.11 | ✓ Yes |
| 0.1 | 0.0001100110011... (repeating) | ✗ No |
| 0.2 | 0.0011001100110... (repeating) | ✗ No |
| 0.3 | 0.0100110011001... (repeating) | ✗ No |
This limitation exists in pure mathematics, not just computers. Binary cannot exactly represent 0.1 for the same fundamental reason decimal cannot exactly represent 1/3. Computers just make this visible because they must truncate at some point.
Let's trace through exactly how IEEE 754 double precision represents these values.
Double Precision Reminder
Recall that an IEEE 754 double has 1 sign bit, 11 exponent bits, and 52 explicit significand bits, giving 53 significant binary digits once the implicit leading 1 is counted. Any binary expansion longer than that, including every repeating expansion, must be rounded to fit.
The Exact Value of 0.1 in Double Precision
When we store "0.1" in a double, the computer converts the decimal literal to binary (the infinite repeating expansion above), normalizes it to 1.100110011... × 2⁻⁴, and rounds the significand to 53 bits.
The stored bit pattern represents:
0.1 ≈ 0.1000000000000000055511151231257827021181583404541015625
This value is slightly greater than 0.1 by about 5.55 × 10⁻¹⁸.
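If you want to see this for yourself, a short Python sketch can dump the raw bit fields of the stored double (the unpacking and formatting shown here are just one convenient way to do it):
import struct

# Reinterpret the 8 bytes of the double 0.1 as a 64-bit integer to expose the raw bits.
bits = struct.unpack("<Q", struct.pack("<d", 0.1))[0]
sign, exponent, fraction = bits >> 63, (bits >> 52) & 0x7FF, bits & ((1 << 52) - 1)
print(f"sign={sign} exponent={exponent - 1023:+d} fraction={fraction:052b}")

# float.hex shows the same thing compactly: significand 1.999999999999a (hex), exponent -4.
print(float.hex(0.1))   # 0x1.999999999999ap-4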
The Exact Value of 0.2 in Double Precision
0.2 = 2 × 0.1, so it has the same problematic pattern shifted:
0.2 (binary) = 0.0011001100110011... (repeating)
After normalization and rounding to 53 bits:
0.2 ≈ 0.200000000000000011102230246251565404236316680908203125
This value is slightly greater than 0.2 by about 1.11 × 10⁻¹⁷.
The Exact Value of 0.3 in Double Precision
0.3 (binary) = 0.0100110011001100... (repeating)
After normalization and rounding to 53 bits:
0.3 ≈ 0.299999999999999988897769753748434595763683319091796875
This value is slightly less than 0.3 by about 1.11 × 10⁻¹⁷.
| Decimal Value | Stored Approximation | Error Direction | Error Magnitude |
|---|---|---|---|
| 0.1 | 0.10000000000000000555... | Slightly HIGH | ≈ +5.55 × 10⁻¹⁸ |
| 0.2 | 0.20000000000000001110... | Slightly HIGH | ≈ +1.11 × 10⁻¹⁷ |
| 0.3 | 0.29999999999999998889... | Slightly LOW | ≈ −1.11 × 10⁻¹⁷ |
The stored values of 0.1 and 0.2 are both slightly HIGH, while the stored value of 0.3 is slightly LOW. This means 0.1 + 0.2 lands above the true 0.3 while the stored 0.3 sits below it: the two results approach 0.3 from opposite sides and never meet.
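You can reproduce this table directly: converting a float to Python's Decimal is exact, so it reveals every digit of the stored value:
from decimal import Decimal

# Decimal(float) converts the stored double exactly, digit for digit.
for literal in (0.1, 0.2, 0.3):
    print(literal, "is stored as", Decimal(literal))
# 0.1 is stored as 0.1000000000000000055511151231257827021181583404541015625
# 0.2 is stored as 0.200000000000000011102230246251565404236316680908203125
# 0.3 is stored as 0.299999999999999988897769753748434595763683319091796875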
Now let's trace through exactly what happens when we compute 0.1 + 0.2:
Step 1: Fetch the Operands
The CPU fetches the stored representations:
0.1 → 0.1000000000000000055511151231257827021181583404541015625
0.2 → 0.200000000000000011102230246251565404236316680908203125
Step 2: Perform the Addition
The floating-point unit adds these exact bit patterns:
  0.1000000000000000055511151231257827...
+ 0.2000000000000000111022302462515654...
= 0.3000000000000000166533453693773481... (the exact, unrounded sum)
Step 3: Round to Representable Value
The true sum (0.300000000000000016653...) has infinite precision. We must round it to the nearest representable double.
The two nearest representable doubles around the true sum are:
0.299999999999999988897769753748434595763683319091796875 (the same double that 0.3 itself rounds to)
0.30000000000000004440892098500626161694526672363281250
Our computed sum 0.30000000000000001665... falls exactly halfway between these two neighbors. IEEE 754's round-to-nearest, ties-to-even rule breaks the tie toward the candidate whose lowest significand bit is 0, which here is the higher value:
Result: 0.30000000000000004440892098500626161694526672363281250
Step 4: Compare with "0.3"
Now we compare this result with the representation of 0.3:
0.1 + 0.2 = 0.30000000000000004440892098...
0.3 = 0.29999999999999998889776975...
These values differ by 2⁻⁵⁴ ≈ 5.55 × 10⁻¹⁷, exactly one "ulp" (unit in the last place) at this magnitude.
Since the bit patterns differ, 0.1 + 0.2 === 0.3 is false.
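The whole trace can be verified in a few lines of Python. The raised precision is only there so the Decimal sum of the two stored operands is computed exactly rather than re-rounded:
from decimal import Decimal, getcontext

getcontext().prec = 60                    # enough digits that the sum below is exact

exact_sum = Decimal(0.1) + Decimal(0.2)   # exact sum of the two stored operands
print(exact_sum)           # 0.3000000000000000166533453693773481063544750213623046875
print(Decimal(0.1 + 0.2))  # 0.3000000000000000444089209850062616169452667236328125
print(Decimal(0.3))        # 0.299999999999999988897769753748434595763683319091796875
print(0.1 + 0.2 == 0.3)    # False: the last two bit patterns differ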
Why the Display Shows 0.30000000000000004
When you print a floating-point number, the runtime shows enough digits to uniquely identify the bit pattern. For 0.30000000000000004440..., showing "0.30000000000000004" (17 significant digits) is sufficient to distinguish it from adjacent representable values.
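Here is that rule in action in Python (JavaScript's default formatting follows the same shortest-round-trip idea):
x = 0.1 + 0.2
print(x)             # 0.30000000000000004   (shortest decimal that round-trips)
print(f"{x:.20f}")   # 0.30000000000000004441 (more of the true stored value)
print(float("0.30000000000000004") == x)    # True: 17 digits pin down the exact bit pattern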
Some languages hide more digits in their default display. But internally, they all have the same IEEE 754 representation, and printing at full precision reveals the same 0.30000000000000004.
The computed 0.1 + 0.2 and the stored 0.3 differ by exactly one ulp (one unit in the last place). This is the best possible outcome—the error is the smallest representable difference. IEEE 754 is working exactly as designed; it's just that "correct rounding" still produces an approximation.
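If you are on Python 3.9 or later, the one-ulp gap can be measured directly:
import math

gap = (0.1 + 0.2) - 0.3
print(gap)                   # 5.551115123125783e-17, i.e. 2**-54
print(gap == math.ulp(0.3))  # True: the two results are adjacent doubles, one ulp apart
print(math.nextafter(0.3, 1.0) == 0.1 + 0.2)  # True: 0.1 + 0.2 is the very next double above 0.3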
Let's synthesize everything into a complete understanding:
Chain of Events
0.1 cannot be exactly represented in binary. It rounds to a value slightly larger than 0.1.
0.2 cannot be exactly represented in binary. It also rounds to a value slightly larger than 0.2.
Adding two slightly-too-large values produces a sum that overshoots even more. The errors compound in the same direction.
0.3 rounds to a value slightly smaller than 0.3. This is just how the rounding works out for this particular value.
The sum and the direct value round to different representable values. Since they're on opposite sides of the true 0.3, they end up at different nearest neighbors.
Equality comparison checks bit patterns, not mathematical values. Different bit patterns mean false.
Not All Decimal Additions Fail
Importantly, not all decimal additions produce this mismatch:
0.1 + 0.4 === 0.5 // true!
0.1 + 0.1 === 0.2 // true!
0.5 + 0.5 === 1.0 // true!
In these cases, either the operands are exactly representable in binary (0.5 and 0.25 are sums of powers of two), or the representation errors happen to cancel so that the computed sum rounds to the very same double as the literal on the right-hand side (doubling the stored 0.1 gives exactly the stored 0.2).
The 0.1 + 0.2 case is notorious precisely because it's simple, intuitive, and visibly fails. But it's not uniquely broken—it's just the most famous example.
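A quick Python check confirms the pattern (the case list here is just the examples above):
cases = [(0.1, 0.2, 0.3), (0.1, 0.4, 0.5), (0.1, 0.1, 0.2), (0.5, 0.5, 1.0)]
for a, b, expected in cases:
    print(f"{a} + {b} == {expected}: {a + b == expected}")
# 0.1 + 0.2 == 0.3: False
# 0.1 + 0.4 == 0.5: True
# 0.1 + 0.1 == 0.2: True
# 0.5 + 0.5 == 1.0: True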
0.1 + 0.2 ≠ 0.3 is actually a case where IEEE 754 works perfectly. Every operation produces the correctly rounded result. The 'error' is inherent to the impossibility of exactly representing 0.1, 0.2, and 0.3 in binary—not to any implementation flaw.
A common misconception is that this is a bug specific to JavaScript, Python, or some other "weakly typed" language. In reality, every language using IEEE 754 floating-point exhibits this behavior.
Demonstration Across Languages
console.log(0.1 + 0.2); // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false
Even Low-Level Languages
This isn't about language design or type systems. C, C++, Rust, Fortran—every language that uses hardware floating-point produces identical results because they all execute the same IEEE 754 operations on the same CPU hardware.
The only languages that "get it right" are those that sidestep binary floating-point for these literals altogether, for example by defaulting to decimal arithmetic or to exact rational numbers.
Excel and Calculators
You might notice that Excel shows 0.1 + 0.2 = 0.3 correctly. This is because Excel applies special display logic that rounds results for presentation. Internally, Excel stores the same 0.30000000000000004—it just hides it from you.
Most calculators do the same: they compute in binary floating-point but display in rounded decimal, masking the underlying precision issues. This is user-friendly but can mislead when you assume the displayed value is the stored value.
This is not a bug to be fixed, a language flaw to be patched, or a standard to be improved. It's the mathematical consequence of representing base-10 fractions in a base-2 system. Every conforming IEEE 754 double-precision implementation, past, present, and future, will produce 0.30000000000000004 for 0.1 + 0.2.
Now that you understand why this happens, here's how to handle it like a professional:
For General-Purpose Code
// BAD
if (total === 0.3) { ... }
// GOOD
const EPSILON = 1e-10;
if (Math.abs(total - 0.3) < EPSILON) { ... }
# BETTER (in Python)
import math
if math.isclose(total, 0.3, rel_tol=1e-9): ...
// Display only meaningful digits
console.log((0.1 + 0.2).toFixed(1)); // "0.3"
For Financial Applications
// Store as cents, not dollars
const balance = 1999; // $19.99 as 1999 cents
const price = 350; // $3.50 as 350 cents
const total = balance - price; // 1649 cents = $16.49, exact
from decimal import Decimal
# Decimal arithmetic is exact for base-10 fractions
result = Decimal('0.1') + Decimal('0.2')
print(result == Decimal('0.3')) # True!
// JavaScript: use decimal.js or similar
import Decimal from 'decimal.js';
const result = new Decimal('0.1').plus('0.2');
console.log(result.equals('0.3')); // true
For most applications—games, UI, general computation—floating-point is fine. The 10⁻¹⁶ relative error is smaller than almost any real-world measurement uncertainty. Reserve special handling (Decimal types, integer arithmetic) for financial calculations, scientific data archives, and applications where exact decimal semantics are legally or contractually required.
The 0.1 + 0.2 problem is the most famous, but similar issues appear throughout floating-point arithmetic. Here are other quirks every programmer should recognize:
Associativity Violation
Floating-point addition is not associative:
a = 1e-16
b = 1.0
c = -1.0
print((a + b) + c) # 0.0 (absorbed, then canceled)
print(a + (b + c)) # 1e-16 (b + c = 0, then add a)
The order of operations changes the result! This is why compilers must be careful about "optimizing" floating-point code.
Large + Small = Large
x = 1e16
y = 1.0
print(x + y == x) # True! y was absorbed
When magnitudes differ by more than about 16 orders of magnitude (beyond a double's roughly 15 to 17 significant decimal digits), small values simply disappear.
Subtraction of Nearly Equal Values
a = 1.0000000000000001
b = 1.0000000000000000
print(a - b) # 0.0, not 1e-16 (the 1e-16 was already rounded away when a was stored)
Subtracting nearly equal values cancels the leading digits and exposes whatever rounding error the operands already carry. This catastrophic cancellation can wipe out nearly all of the significant precision.
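The "algebraic reformulation" mitigation listed in the table below is worth seeing once. The function 1 - cos(x) is our own illustrative example, not something from this page's earlier trace:
import math

x = 1e-8
# Naive: 1 - cos(x) subtracts two nearly equal numbers; cos(1e-8) rounds to exactly 1.0,
# so every significant digit cancels and the quotient collapses to 0.0.
naive = (1 - math.cos(x)) / x**2
# Reformulated with the identity 1 - cos(x) = 2*sin(x/2)**2, which avoids the subtraction.
stable = 2 * math.sin(x / 2) ** 2 / x**2
print(naive)    # 0.0
print(stable)   # 0.5 (the mathematically correct limit)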
The Decimal Point Illusion
Don't be fooled by "round" decimal values:
# These look exact but aren't:
0.1 # actually 0.10000000000000000555...
0.2 # actually 0.20000000000000001110...
0.3 # actually 0.29999999999999998889...
0.6 # actually 0.59999999999999997779...
# These ARE exact (powers of 2):
0.5 # exactly 0.5
0.25 # exactly 0.25
0.125 # exactly 0.125
Loop Accumulation
sum = 0.0
for _ in range(10):
    sum += 0.1
print(sum) # 0.9999999999999999, not 1.0
print(sum == 1.0) # False
Repeated addition of inexact values accumulates error.
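Compensated (Kahan) summation, the mitigation named in the table below, carries the rounding error in a separate variable and feeds it back in. A minimal sketch, with the standard library's math.fsum shown for comparison:
import math

def kahan_sum(values):
    total = 0.0
    compensation = 0.0           # running estimate of the low-order bits lost so far
    for v in values:
        y = v - compensation     # re-inject the previously lost bits
        t = total + y
        compensation = (t - total) - y   # what just got rounded away
        total = t
    return total

data = [0.1] * 10
print(sum(data))         # 0.9999999999999999  (naive accumulation)
print(kahan_sum(data))   # 1.0
print(math.fsum(data))   # 1.0  (exact-then-rounded summation from the standard library)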
| Quirk | Example | Cause | Mitigation |
|---|---|---|---|
| Addition ≠ expected | 0.1 + 0.2 ≠ 0.3 | Representation error | Epsilon comparison |
| Associativity broken | (a+b)+c ≠ a+(b+c) | Rounding order matters | Careful operation order |
| Small + large = large | 1e16 + 1 = 1e16 | Absorption | Sum small values first |
| Subtraction destroys | 1.0001 − 1.0 = ? | Cancellation | Algebraic reformulation |
| Loop accumulation | adding 0.1 ten times ≠ 1.0 | Error buildup | Kahan summation |
These aren't random failures—they're predictable consequences of finite precision. Once you recognize the patterns, you can predict when issues will arise and choose appropriate mitigation strategies.
We've dissected the most famous floating-point quirk in computing history. But more importantly, we've completed our journey through floating-point data types—from why they exist to how they work to how they fail.
Module Complete: Floating-Point Data Types
Across five pages, we've built comprehensive understanding of floating-point representation:
Why they exist — Integers can't represent continuous quantities; floating-point fills this gap with a scientific-notation-inspired design.
Fixed-point vs. floating-point — Fixed-point is simpler but inflexible; floating-point adapts precision to magnitude.
IEEE 754 — The standard that unified computing, with its clever encoding of sign, exponent, and significand.
Precision errors — Representation error, rounding, accumulation, catastrophic cancellation, and professional mitigation techniques.
Why 0.1 + 0.2 ≠ 0.3 — The complete explanation of computing's most famous numerical quirk.
You now understand floating-point not as a mysterious black box, but as a brilliantly engineered approximation system with known characteristics and predictable behavior. This knowledge will serve you throughout your career in numerical computing.
Congratulations! You've mastered floating-point data types at a professional level. You can now explain why floating-point exists, how IEEE 754 works, when precision errors arise, and exactly why 0.1 + 0.2 ≠ 0.3. More importantly, you know how to handle these issues appropriately in your own code—the mark of an experienced engineer.