So far, we've discussed bits and binary primarily in terms of positive integers. But real-world computation requires negative numbers: bank balances can be overdrawn, temperatures can drop below zero, and game characters can move in negative directions.
The challenge is subtle but profound: bits have no inherent notion of 'negative.' A bit is either 0 or 1—there's no natural way to indicate that the value it contributes should be subtracted rather than added. Yet computers work with fixed-width bit patterns. How do we encode sign information within these constraints?
This page explores unsigned representation, the fundamental challenge of representing negatives, and the various schemes engineers have devised to solve this problem—culminating in two's complement, the universal standard.
By the end of this page, you will understand how unsigned integers work, why representing negatives is non-trivial, and the strengths and weaknesses of different signed representation schemes, preparing you for the in-depth treatment of two's complement on the next page.
Let's begin with the straightforward case: unsigned integers. These represent only non-negative values (0 and positive integers) and use all bits purely for magnitude.
For an n-bit unsigned integer, the bit in position i contributes 2ⁱ to the value when set, so the representable range is [0, 2ⁿ - 1].
The conversion between binary and decimal is straightforward positional notation:
8-bit Unsigned Integer Examples:

| Binary | Calculation | Decimal |
|---|---|---|
| 00000000 | 0 | 0 |
| 00000001 | 1 | 1 |
| 00001010 | 8 + 2 | 10 |
| 00101010 | 32 + 8 + 2 | 42 |
| 01100100 | 64 + 32 + 4 | 100 |
| 10000000 | 128 | 128 |
| 11111111 | 128 + 64 + 32 + 16 + 8 + 4 + 2 + 1 | 255 |

Range: [0, 255] = [0, 2⁸ - 1]

Properties of unsigned integers:
Contiguous range: All values from 0 to 2ⁿ-1 are representable with no gaps.
Wrap-around on overflow: When an operation exceeds the maximum value, the result 'wraps around.' Adding 1 to 255 (in 8 bits) yields 0. This is mathematically equivalent to modular arithmetic: all operations are performed modulo 2ⁿ.
All bits are magnitude bits: There's no special interpretation of any bit position. Every bit contributes its power of 2 to the total.
Comparison is straightforward: Lexicographic comparison of bit patterns matches numeric comparison. The binary representation sorts in the same order as the numeric value.
Unsigned integers are ideal when values are inherently non-negative: array sizes, memory addresses, counts, hash values, and bit manipulation applications. Their simplicity and full utilization of the bit range make them the default choice when negatives aren't needed.
| Bits | Minimum | Maximum | Example Type |
|---|---|---|---|
| 8 | 0 | 255 | uint8_t, unsigned char |
| 16 | 0 | 65,535 | uint16_t, unsigned short |
| 32 | 0 | 4,294,967,295 | uint32_t, unsigned int |
| 64 | 0 | 18,446,744,073,709,551,615 | uint64_t, unsigned long long |
Many domains require negative values: financial balances that can go into debt, temperatures below zero, coordinate offsets and velocities in any direction, and differences between two counts.
Without negative numbers, we'd need awkward workarounds: separate 'is negative' flags, biased representations, or parallel tracking of positive and negative components. These approaches are error-prone and complicate arithmetic.
The core challenge:
We have a fixed number of bits and need to represent both positive and negative values. If we have n bits, we can represent at most 2ⁿ distinct values. How do we divide these between positive and negative numbers?
There's no free lunch: if we allocate bit patterns to negative numbers, we must reduce the range of positive numbers. With 8 bits, unsigned representation covers 0 through 255; any signed scheme must split those same 256 patterns between negatives and non-negatives, typically about half each.
The question isn't whether to give up positive range—we must. The question is how to map bit patterns to negative values in a way that makes computation efficient.
A good signed representation scheme should:
• Use existing unsigned hardware (adders) for signed arithmetic
• Make comparisons efficient
• Have a unique representation for zero
• Allow straightforward negation
• Work consistently across all bit widths
These criteria shaped the evolution from early schemes to two's complement.
The most intuitive approach to representing signed numbers is sign-magnitude representation. It directly mimics how we write negative numbers: a sign indicator followed by a magnitude.
The scheme: the most significant bit is the sign (0 for positive, 1 for negative), and the remaining n-1 bits encode the magnitude as an ordinary unsigned number.
For example, in 8-bit sign-magnitude:
8-bit Sign-Magnitude Representation:

| Binary | Interpretation | Decimal |
|---|---|---|
| 0 0000001 | positive, magnitude 1 | +1 |
| 0 0101010 | positive, magnitude 42 | +42 |
| 0 1111111 | positive, magnitude 127 | +127 |
| 1 0000001 | negative, magnitude 1 | -1 |
| 1 0101010 | negative, magnitude 42 | -42 |
| 1 1111111 | negative, magnitude 127 | -127 |
| 0 0000000 | positive zero | +0 |
| 1 0000000 | negative zero | -0 ← Problem! |

Range: [-127, +127] (missing -128 compared to two's complement)

Advantages of sign-magnitude:

• Intuitive: it directly mirrors how we write signed numbers on paper.
• Trivial negation: just flip the sign bit.
• Symmetric range: every representable positive value has a representable negative.
Disadvantages of sign-magnitude (why it's not used):

• Two zeros: 00000000 (+0) and 10000000 (-0) both mean zero, wasting a pattern and complicating equality tests.
• Complex addition: the hardware must compare signs, and when they differ, compare magnitudes and subtract; a plain unsigned adder cannot be reused directly.
• Complex comparison: the sign bit must be special-cased before magnitudes can be compared.
Historical usage:
Despite these drawbacks, sign-magnitude appeared in early computers and still appears in IEEE 754 floating-point representation (where the sign, exponent, and mantissa are separate fields). For integers, however, the arithmetic complexity proved prohibitive, leading engineers to seek alternatives.
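As a concrete sketch, the sign-magnitude rules above can be written in C. The helper names `sm_encode` and `sm_decode` are made up for illustration, not a standard API:

```c
#include <assert.h>
#include <stdint.h>

// 8-bit sign-magnitude: bit 7 is the sign (1 = negative),
// bits 0-6 hold the magnitude as an unsigned number.
uint8_t sm_encode(int value) {
    assert(value >= -127 && value <= 127);   // -128 is not representable
    if (value < 0)
        return (uint8_t)(0x80 | (uint8_t)(-value));  // set sign, keep magnitude
    return (uint8_t)value;
}

int sm_decode(uint8_t bits) {
    int magnitude = bits & 0x7F;             // lower 7 bits
    return (bits & 0x80) ? -magnitude : magnitude;
}
```

Note that `sm_decode(0x00)` and `sm_decode(0x80)` both return 0: the dual-zero problem shows up immediately in code.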
One's complement attempts to address sign-magnitude's arithmetic complexity. The key insight: define negation as bitwise complement—flip every bit.
The scheme: positive numbers are stored exactly as in unsigned binary; to negate a number, flip every bit (bitwise NOT). The most significant bit still indicates the sign.
For example, in 8-bit one's complement:
8-bit One's Complement Representation:

| Binary | Interpretation | Decimal |
|---|---|---|
| 00000001 | positive | +1 |
| 00101010 | positive | +42 |
| 01111111 | positive | +127 |
| 11111110 | complement of 00000001 | -1 |
| 11010101 | complement of 00101010 | -42 |
| 10000000 | complement of 01111111 | -127 |
| 00000000 | positive zero | +0 |
| 11111111 | complement of 00000000 | -0 ← Still a problem! |

Note: To negate any number, flip all bits. Negative x has bit pattern = ~(positive x).

Range: [-127, +127]

How one's complement arithmetic works:
One's complement addition uses the same adder as unsigned addition, but with a twist: any carry-out from the MSB is added back to the result (called end-around carry).
Example: Adding 5 and -3 in 8-bit one's complement:
```
  00000101    (+5)
+ 11111100    (-3, the complement of 00000011)
──────────
 100000001    (9 bits: there's a carry out)
+        1    (add the carry back)
──────────
  00000010    (+2) ✓ Correct!
```
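The end-around carry rule is easy to express in code. Below is a small C sketch (the name `oc_add` is made up for illustration) that adds two 8-bit one's complement bit patterns:

```c
#include <stdint.h>

// Add two 8-bit one's complement patterns.
// Operands and result are raw bit patterns, not plain C integers.
uint8_t oc_add(uint8_t a, uint8_t b) {
    uint16_t sum = (uint16_t)a + (uint16_t)b;  // 9-bit intermediate sum
    if (sum > 0xFF)
        sum = (sum & 0xFF) + 1;                // end-around carry: add it back
    return (uint8_t)sum;
}
```

For example, `oc_add(0x05, 0xFC)` (that is, +5 plus -3) produces the carry-out, adds it back, and returns 0x02 (+2), matching the worked example above.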
Advantages over sign-magnitude: negation is a single uniform operation (flip all bits), and addition reuses the unsigned adder with only the end-around carry fixup; no sign-and-magnitude case analysis is needed.
Remaining problems: there are still two zeros (00000000 and 11111111), and the end-around carry adds an extra, inherently sequential step to every addition.
One's complement was used in several early computers, including the CDC 6600 (1960s), one of the first supercomputers. However, the dual-zero problem and end-around carry complexity led to its eventual replacement by two's complement in virtually all modern systems.
Before introducing two's complement, let's examine one more scheme: offset binary (also called excess-K or biased representation). This approach uses a different strategy altogether.
The scheme: choose a bias K (typically 2^(n-1)) and store value V as the unsigned number V + K. Decoding subtracts K back out.
For 8-bit excess-128 (offset by 128):
8-bit Excess-128 (Offset Binary) Representation:

| Actual Value | Stored Value (V + 128) | Binary |
|---|---|---|
| -128 | 0 | 00000000 |
| -127 | 1 | 00000001 |
| -1 | 127 | 01111111 |
| 0 | 128 | 10000000 |
| +1 | 129 | 10000001 |
| +127 | 255 | 11111111 |

Range: [-128, +127] (full n-bit signed range)

Note: The binary patterns sort in the same order as the values!

Advantages of offset binary:

• Single zero: only one pattern (10000000) represents zero.
• Simple comparison: stored patterns sort in the same unsigned order as the values they represent.
• Full range: all 2ⁿ patterns are used, giving [-128, +127] in 8 bits with no wasted code.
Disadvantages of offset binary:

• Addition doesn't work directly: adding two stored values yields (a + b) + 2K, so one bias must be subtracted after every add.
• Zero is not the all-zeros pattern (it is 10000000), complicating zero tests.
• Negation requires arithmetic (2K minus the stored value) rather than a simple bit operation.
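A short C sketch makes the bias correction explicit; `xs128_encode`, `xs128_decode`, and `xs128_add` are illustrative names, not a standard API:

```c
#include <stdint.h>

// 8-bit excess-128: store value V as the unsigned number V + 128.
uint8_t xs128_encode(int value)      { return (uint8_t)(value + 128); }
int     xs128_decode(uint8_t stored) { return (int)stored - 128; }

// Adding two stored values gives (a + b) + 2*128, i.e. one bias too many,
// so one bias of 128 must be subtracted to re-encode the true sum.
uint8_t xs128_add(uint8_t a, uint8_t b) {
    return (uint8_t)((int)a + (int)b - 128);
}
```

For instance, -1 encodes as 127 and +5 as 133; adding the raw patterns gives 260, and subtracting one bias leaves 132, which correctly decodes to +4.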
Where offset binary is used:
Despite its arithmetic complexity, offset binary appears in:
IEEE 754 floating-point exponents: Exponents use excess-127 (single precision) or excess-1023 (double precision). This simplifies exponent comparison.
Audio processing: Digital audio sometimes uses offset binary for compatibility with older analog-to-digital converters.
Database ordering: Some databases use offset transformations to enable unsigned comparison of signed values.
For general-purpose integer arithmetic, however, the arithmetic complexity makes offset binary impractical. We need a scheme that works with standard adders—which brings us to two's complement.
Let's summarize the schemes we've examined before introducing two's complement. Understanding their trade-offs clarifies why two's complement ultimately won:
| Property | Sign-Magnitude | One's Complement | Offset Binary |
|---|---|---|---|
| Range | -127 to +127 | -127 to +127 | -128 to +127 |
| Zeros | Two (+0, -0) | Two (+0, -0) | One |
| Negation | Flip sign bit | Flip all bits | 2K - stored |
| Addition | Complex | End-around carry | Subtract bias |
| Comparison | Complex | Moderate | Simple |
| Hardware Cost | High | Medium | Medium |
What we want:
The ideal scheme would have:

• A single representation of zero
• Use of all 2ⁿ bit patterns, with no wasted codes
• Simple, uniform negation
• Signed addition performed by the same adder as unsigned addition
• Straightforward comparison
No scheme we've examined achieves all of these. Sign-magnitude and one's complement waste a bit pattern on negative zero. Offset binary requires arithmetic corrections. All three need special handling that complicates hardware.
Two's complement achieves all goals.
The next page develops two's complement in depth. Here, we'll preview why it succeeds where others failed:
Two's complement works because it treats the most significant bit as contributing a negative value (-2^(n-1)) while all other bits contribute positive values. This single change—giving the MSB negative weight—makes standard addition work correctly for signed values.
You might wonder why we've spent so much time on signed representation. After all, many bit manipulation problems use unsigned integers. The answer: understanding representation avoids subtle bugs and unlocks techniques specific to signed values.
Common bug example:
Consider this code:
```c
int array[10];
int index = -1;
if (index < sizeof(array)) {  // sizeof returns unsigned!
    // This branch is NOT taken on many platforms!
}
```
The comparison index < sizeof(array) compares a signed integer (-1) with an unsigned size_t (typically 40 here: ten 4-byte ints). The signed -1 is converted to unsigned, becoming a very large positive number (e.g., 4294967295 when size_t is 32 bits). The comparison fails unexpectedly.
Understanding signed representation—specifically that -1 in two's complement is all-ones—explains this behavior and how to avoid it.
In C/C++, if a signed and unsigned integer are combined in an operation, the signed value is converted to unsigned. This conversion preserves the bit pattern but changes its interpretation. Be explicit about your types to avoid surprises.
Let's consolidate what we've learned about representing positive and negative integers:

• Unsigned integers use every bit for magnitude, cover [0, 2ⁿ - 1], and wrap around modulo 2ⁿ.
• Representing negatives in a fixed width forces a trade: every pattern given to a negative value comes out of the positive range.
• Sign-magnitude is intuitive but suffers from two zeros and complex arithmetic.
• One's complement unifies negation (flip all bits) but keeps two zeros and needs end-around carry.
• Offset binary has a single zero and sorts correctly, but plain addition produces a biased result.
What's next:
With the stage set—understanding unsigned representation, the challenges of representing negatives, and the shortcomings of alternative schemes—we're ready for two's complement. The next page develops two's complement in full depth: its definition, properties, arithmetic behavior, and why it became the universal standard for signed integer representation.
You now understand the landscape of signed integer representation schemes and why each has limitations. This context sets up two's complement not as an arbitrary choice, but as the elegant solution to real engineering constraints. The next page delivers that solution in detail.