Every email you send, every video you stream, every voice call you make over the internet—all of these are, at their most fundamental level, sequences of discrete signals traveling through physical media. These signals are the atomic units of digital communication, the building blocks upon which the entire edifice of modern networking rests.
Understanding how digital signals are represented is not merely academic—it is the essential foundation for comprehending everything from the physical layer of the OSI model to the design decisions that determine network performance, reliability, and cost. Without a firm grasp of signal representation, concepts like encoding, modulation, bandwidth, and noise immunity remain opaque abstractions rather than engineering tools.
By the end of this page, you will understand: the fundamental distinction between analog and digital signals; how binary data is mapped to physical voltage, light, or electromagnetic states; the mathematical representation of digital signals; time-domain and frequency-domain analysis; and the engineering implications of signal representation choices in real-world network systems.
At the heart of signal representation lies a fundamental distinction that shapes all of telecommunications: the difference between analog and digital signals. Understanding this dichotomy is essential before we can meaningfully discuss how digital data is transmitted.
Analog Signals: Continuous Variation
An analog signal is a continuously varying quantity that can take on any value within a given range. Think of the sound wave produced by a human voice—the air pressure varies smoothly and continuously over time, passing through infinitely many values between any two measurable points. When this sound wave is converted to an electrical signal by a microphone, the resulting voltage is also continuous, varying smoothly in proportion to the original acoustic energy.
Mathematically, an analog signal can be represented as a continuous function of time:
$$s(t) : \mathbb{R} \to \mathbb{R}$$
Where $t$ represents time, and $s(t)$ can assume any real value within the signal's amplitude range. The key characteristic is that between any two time points $t_1$ and $t_2$, the signal passes through infinitely many intermediate values.
Most natural phenomena—temperature, pressure, light intensity, sound—are inherently analog. The physical world operates in continuous gradients, not discrete steps. This is why early telecommunications systems (telephone, radio, television) were all analog: they directly represented physical phenomena using proportionally varying electrical signals.
Digital Signals: Discrete States
A digital signal, in contrast, represents information using a finite set of discrete values—typically two values in binary systems, though multi-level digital signals also exist. Rather than varying continuously, a digital signal jumps abruptly between defined states, remaining at each state for a defined time interval.
The simplest and most common form is the binary digital signal, which takes only two values, conventionally denoted $V_{low}$ and $V_{high}$.
Mathematically, a binary digital signal can be represented as:
$$s(t) \in \{V_{low}, V_{high}\} \quad \forall t$$
The signal is quantized in amplitude—it can only exist in one of the defined states at any given moment. The transitions between states are ideally instantaneous, though in reality they occur over finite (but very short) time intervals.
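This amplitude quantization can be sketched in a few lines of Python. The 3.3 V/0 V levels and the four samples per bit interval are illustrative assumptions, not taken from any particular standard:

```python
# Illustrative two-level signal: every sample is exactly V_LOW or V_HIGH.
V_LOW, V_HIGH = 0.0, 3.3       # assumed levels (volts)
SAMPLES_PER_BIT = 4            # assumed sampling of each bit interval

def bits_to_waveform(bits: str) -> list[float]:
    """Hold each bit's voltage level for one full bit interval."""
    return [V_HIGH if b == "1" else V_LOW
            for b in bits
            for _ in range(SAMPLES_PER_BIT)]

wave = bits_to_waveform("101")
# The signal is quantized: no sample ever takes an intermediate value.
assert set(wave) == {V_LOW, V_HIGH}
```

The key property the sketch demonstrates is that, unlike an analog signal, the waveform never passes through intermediate values: it exists only in one of the defined states at any sampling instant.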
| Characteristic | Analog Signal | Digital Signal |
|---|---|---|
| Value Range | Infinite (continuous) | Finite (discrete levels) |
| Mathematical Domain | Real numbers (ℝ) | Finite set {V₀, V₁, ..., Vₙ} |
| Noise Sensitivity | High (any distortion corrupts information) | Low (only level changes matter) |
| Regeneration | Difficult (amplification amplifies noise too) | Easy (threshold detection restores original) |
| Processing | Analog circuits required | Digital logic, software processing |
| Storage | Degradation over time (tape, vinyl) | Perfect copies possible indefinitely |
| Bandwidth Efficiency | Efficient for source-matched signals | Requires bandwidth overhead for encoding |
The Key Insight: Discretization Enables Reliability
The seemingly restrictive nature of digital signals—allowing only discrete values—is precisely what makes them so powerful for communication. Because a digital receiver only needs to distinguish between a finite number of states (often just two), it can tolerate significant distortion, noise, and attenuation. As long as the received signal can be correctly classified into the appropriate state, the original information is perfectly recovered.
This is fundamentally different from analog transmission, where any distortion introduced into the signal represents an irreversible loss of information. An analog signal that has been corrupted by noise cannot be restored to its original form—the noise has become part of the signal. A digital signal, however, can be regenerated at each repeater or receiver, with the original clean states perfectly reconstructed.
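Regeneration by threshold detection can be demonstrated with a small simulation. The voltage levels, midpoint threshold, and uniform noise model are illustrative assumptions; the point is only that noise smaller than the noise margin never corrupts the recovered bits:

```python
import random

V_LOW, V_HIGH = 0.0, 3.3                 # assumed levels (volts)
THRESHOLD = (V_LOW + V_HIGH) / 2         # midpoint decision threshold

def transmit(bits: str, noise_amplitude: float) -> list[float]:
    """One noisy sample per bit: ideal level plus additive uniform noise."""
    return [(V_HIGH if b == "1" else V_LOW)
            + random.uniform(-noise_amplitude, noise_amplitude)
            for b in bits]

def regenerate(samples: list[float]) -> str:
    """Threshold detection restores the original discrete states."""
    return "".join("1" if v > THRESHOLD else "0" for v in samples)

random.seed(42)
bits = "10110010"
noisy = transmit(bits, noise_amplitude=1.0)  # noise stays below the margin
assert regenerate(noisy) == bits             # original bits perfectly recovered
```

Because the noise amplitude (1.0 V) is smaller than the distance from either level to the threshold (1.65 V), recovery is guaranteed, which is exactly why repeaters can regenerate a digital signal indefinitely.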
The practical implementation of digital signals requires a systematic method for mapping binary data (sequences of 0s and 1s) to physical states that can propagate through transmission media. This mapping is the bridge between the abstract world of information and the physical world of signal transmission.
Voltage Representation in Electrical Systems
In copper-based transmission systems (Ethernet cables, USB connections, PCB traces), digital signals are typically represented as voltage levels. The specific voltage values depend on the signaling standard in use; representative levels for common logic families appear in the table later on this page.
Notice the gap between valid high and valid low voltage ranges in each standard. This gap is the noise margin—the amount of noise or distortion a signal can tolerate before it risks being misinterpreted. Larger noise margins (like RS-232's ±25V swing) are more robust but consume more power and bandwidth. Modern high-speed interfaces use smaller voltage swings compensated by differential signaling and sophisticated receivers.
Optical Representation in Fiber Systems
In fiber optic transmission, binary values are represented as the presence or absence of light: a pulse of light during a bit interval represents a 1, while its absence represents a 0, a scheme known as on-off keying (OOK).
For more sophisticated systems using wavelength division multiplexing (WDM), different wavelengths of light can carry independent data streams, each using its own on/off keying or more complex modulation.
Electromagnetic Representation in Wireless Systems
Wireless systems map binary data to variations in electromagnetic wave properties: the amplitude, frequency, or phase of a carrier wave is shifted between discrete states (amplitude-shift, frequency-shift, and phase-shift keying, respectively).
These modulation schemes will be explored in depth in later chapters, but the fundamental principle remains: discrete data values are mapped to discrete physical states.
The Bit Interval and Symbol Rate
Every binary representation operates within a defined bit interval (also called bit period or bit time), denoted as $T_b$. This is the time duration allocated to each bit:
$$T_b = \frac{1}{R_b}$$
Where $R_b$ is the bit rate in bits per second (bps). For example, a 1 Gbps (gigabit per second) connection has a bit interval of:
$$T_b = \frac{1}{10^9 \text{ bps}} = 1 \text{ ns (nanosecond)}$$
This means that every nanosecond, the signal must be in a defined state representing either a 0 or a 1. The shorter the bit interval, the faster the data rate—but also the more demanding the requirements on timing precision, signal integrity, and receiver sensitivity.
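The relationship $T_b = 1/R_b$ is a one-line computation; the sketch below checks it for the 1 Gbps example above and for 100 Mbps Fast Ethernet:

```python
def bit_interval_seconds(bit_rate_bps: float) -> float:
    """T_b = 1 / R_b: time allocated to each bit."""
    return 1.0 / bit_rate_bps

# 1 Gbps -> 1 ns per bit
assert abs(bit_interval_seconds(1e9) - 1e-9) < 1e-18
# 100 Mbps -> 10 ns per bit
assert abs(bit_interval_seconds(100e6) - 10e-9) < 1e-18
```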
The time-domain representation describes how a signal varies as a function of time. This is the most intuitive way to visualize digital signals—as a waveform plotted on a graph with time on the horizontal axis and signal amplitude (voltage, light intensity, etc.) on the vertical axis.
Ideal Rectangular Pulses
In theoretical discussions, digital signals are often depicted as perfect rectangular pulses:
This idealized representation makes analysis tractable but reflects a simplified view of reality.
```
Bit Sequence:   1     0     1     1     0     0     1     0

              ┌─────┐     ┌─────┬─────┐           ┌─────┐
              │     │     │     │     │           │     │
Voltage  ─────┘     └─────┘     │     └───────────┘     └────
              │     │     │     │     │     │     │     │
Time →   ─────┼─────┼─────┼─────┼─────┼─────┼─────┼─────┼────
              T0    T1    T2    T3    T4    T5    T6    T7

Where:
- Tₙ marks the beginning of bit interval n
- High level represents binary 1
- Low level represents binary 0
- Each bit interval has duration Tb = T₁ - T₀
```

Real-World Signal Characteristics
Actual digital signals deviate significantly from the ideal rectangular form due to the physical properties of transmission media and electronic components. Understanding these non-idealities is essential for practical network engineering:
1. Rise and Fall Times
Real signals cannot transition instantaneously between states. The rise time ($t_r$) is the time required for the signal to go from 10% to 90% of its full amplitude, while the fall time ($t_f$) is the corresponding time for the downward transition. These times are governed by the bandwidth limitations of the transmission system:
$$t_r \approx \frac{0.35}{BW}$$
Where $BW$ is the system bandwidth in Hz. For a 1 GHz bandwidth system, the minimum rise time is approximately 0.35 ns.
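The rise-time estimate is a simple reciprocal relationship (the 0.35 factor comes from the step response of a first-order, single-pole system); a quick sketch:

```python
def min_rise_time_s(bandwidth_hz: float) -> float:
    """t_r ≈ 0.35 / BW (first-order system approximation)."""
    return 0.35 / bandwidth_hz

# 1 GHz bandwidth -> roughly 0.35 ns minimum rise time
assert abs(min_rise_time_s(1e9) - 0.35e-9) < 1e-15
# Halving the bandwidth doubles the achievable rise time
assert abs(min_rise_time_s(0.5e9) - 0.70e-9) < 1e-15
```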
2. Overshoot and Undershoot
Reflections and impedance mismatches cause overshoot (signal momentarily exceeds target level) and undershoot (signal goes below target). Typically specified as a percentage of signal swing, overshoot exceeding 10-20% may cause reliability issues or even damage receivers.
3. Ringing
Damped oscillations following signal transitions are called ringing. These result from the LC (inductance-capacitance) characteristics of transmission lines and can persist for multiple nanoseconds, potentially corrupting subsequent bit intervals.
4. Jitter
Jitter is the variation in signal transition timing from the ideal positions. It can be caused by power supply noise, crosstalk from adjacent signals, or thermal effects in oscillators. Jitter reduces timing margins and is a critical parameter in high-speed serial links.
Engineers use eye diagrams to assess time-domain signal quality. By superimposing many consecutive bit intervals, the opening in the center reveals the available margin for error-free detection. A 'closed eye' indicates severe signal degradation where reliable data recovery becomes impossible. We will examine eye diagrams in detail when discussing signal quality metrics.
Time-Domain Parameters
The following parameters fully characterize a digital signal in the time domain:
| Parameter | Symbol | Definition | Typical Values |
|---|---|---|---|
| Bit Period | $T_b$ | Duration of one bit | 1 ns (1 Gbps) |
| Rise Time | $t_r$ | 10%-90% transition time | 0.1-0.3 × $T_b$ |
| Fall Time | $t_f$ | 90%-10% transition time | 0.1-0.3 × $T_b$ |
| Voltage High | $V_H$ | Logic 1 voltage level | Standard-dependent |
| Voltage Low | $V_L$ | Logic 0 voltage level | Standard-dependent |
| Voltage Swing | $V_{swing}$ | $V_H - V_L$ | 0.4V - 5V |
| Jitter | $T_j$ | Timing variation (peak-to-peak) | 0.01-0.1 × $T_b$ |
| Duty Cycle | D | % of period at high level | 50% ideal |
While time-domain representation shows how a signal changes over time, frequency-domain representation reveals what frequencies compose the signal. This perspective is essential for understanding bandwidth requirements, channel capacity, and interference effects.
Fourier Analysis: The Mathematical Foundation
The mathematical tool connecting time and frequency domains is the Fourier Transform. Any signal $s(t)$ can be decomposed into a sum of sinusoidal components at different frequencies:
$$S(f) = \int_{-\infty}^{\infty} s(t) e^{-j2\pi ft} dt$$
Where $S(f)$ is the frequency spectrum of the signal—a function showing the amplitude and phase of each frequency component.
The Spectrum of a Square Wave
A periodic square wave (alternating 1s and 0s) provides an illustrative example. Its Fourier series expansion reveals that it consists of:
$$s(t) = \frac{4}{\pi}\left[\sin(2\pi f_0 t) + \frac{1}{3}\sin(6\pi f_0 t) + \frac{1}{5}\sin(10\pi f_0 t) + ...\right]$$
Where $f_0$ is the fundamental frequency (half the bit rate for alternating bits). The key insight: a square wave contains infinitely many frequency components, all odd harmonics of the fundamental, with amplitudes decreasing as 1/n.
Harry Nyquist established that the minimum bandwidth required to transmit a signal with a symbol rate of $R_s$ symbols/second is $R_s/2$ Hz (the Nyquist bandwidth). For binary signaling, where one bit equals one symbol, a 1 Gbps signal theoretically requires at least 500 MHz of bandwidth. In practice, realizable filters and encoding schemes typically require between 0.5 and 1.0 times the bit rate in bandwidth.
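The Nyquist minimum is a direct division; the sketch below confirms the 1 Gbps figure quoted above:

```python
def nyquist_min_bandwidth_hz(symbol_rate: float) -> float:
    """Theoretical minimum channel bandwidth: R_s / 2 Hz."""
    return symbol_rate / 2

# Binary signaling at 1 Gbps (one bit per symbol) -> 500 MHz minimum
assert nyquist_min_bandwidth_hz(1e9) == 500e6
```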
Bandwidth-Limited Signals: Practical Reality
Since real channels cannot provide infinite bandwidth, transmitted signals are inherently bandwidth-limited. What happens when higher harmonics are removed?
Inadequate Bandwidth (< 1× fundamental): even the fundamental frequency is attenuated, so the received waveform degenerates into a rounded blur in which individual bits can no longer be reliably distinguished.
Adequate Bandwidth (≥ 3× fundamental): the fundamental and third harmonic pass through, producing a recognizably rectangular waveform that supports reliable threshold detection.
Generous Bandwidth (≥ 5× fundamental): adding the fifth harmonic sharpens edges and flattens pulse tops, improving timing margins at the cost of spectrum.
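The effect of harmonic truncation can be sketched directly from the Fourier series given earlier. The fundamental frequency and evaluation point below are arbitrary choices for illustration; the check is that each added odd harmonic brings the reconstruction closer to the ideal value of +1 at the centre of a high half-cycle:

```python
import math

def square_wave_partial_sum(t: float, f0: float, n_terms: int) -> float:
    """Fourier-series reconstruction using the first n_terms odd harmonics."""
    return (4 / math.pi) * sum(
        math.sin(2 * math.pi * n * f0 * t) / n
        for n in range(1, 2 * n_terms, 2))

f0 = 1.0   # assumed fundamental (Hz)
t = 0.25   # centre of the 'high' half-cycle, where the ideal value is +1

one = square_wave_partial_sum(t, f0, 1)    # fundamental only
three = square_wave_partial_sum(t, f0, 2)  # fundamental + 3rd harmonic
five = square_wave_partial_sum(t, f0, 3)   # + 5th harmonic

# Each added harmonic reduces the error relative to the ideal square wave.
assert abs(five - 1) < abs(three - 1) < abs(one - 1)
```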
The art of communication system design is finding the optimal balance—providing enough bandwidth for reliable detection while maximizing spectral efficiency.
To completely characterize a digital signal for engineering purposes, we must specify a comprehensive set of parameters. These parameters define the signal's suitability for a given application and determine the requirements for transmitters, receivers, and transmission media.
Amplitude Parameters
Amplitude parameters define the vertical characteristics of the signal:
For a binary signal spending equal time at $V_H$ and $V_L$ (50% duty cycle):

$$V_{pp} = V_H - V_L$$

$$V_{rms} = \sqrt{\frac{V_H^2 + V_L^2}{2}}$$
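Both amplitude formulas are easy to verify numerically; the 3.3 V/0 V swing below is an illustrative choice:

```python
import math

def v_pp(v_high: float, v_low: float) -> float:
    """Peak-to-peak voltage: V_H - V_L."""
    return v_high - v_low

def v_rms(v_high: float, v_low: float) -> float:
    """RMS of a 50% duty-cycle binary signal alternating between two levels."""
    return math.sqrt((v_high ** 2 + v_low ** 2) / 2)

# Illustrative 3.3 V / 0 V swing: V_rms collapses to V_H / sqrt(2)
assert v_pp(3.3, 0.0) == 3.3
assert abs(v_rms(3.3, 0.0) - 3.3 / math.sqrt(2)) < 1e-12
```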
| Logic Family | V_OH (min) | V_OL (max) | V_IH (min) | V_IL (max) | Noise Margin |
|---|---|---|---|---|---|
| TTL 5V | 2.4V | 0.4V | 2.0V | 0.8V | 0.4V |
| LVCMOS 3.3V | 2.4V | 0.4V | 2.0V | 0.8V | 0.4V |
| LVCMOS 1.8V | 1.2V | 0.45V | 0.65×VDD | 0.35×VDD | ~0.3V |
| LVDS | +350mV diff | -350mV diff | +100mV | -100mV | 250mV |
| RS-232 | -5V to -15V | +5V to +15V | -3V | +3V | 2V |
Timing Parameters
Timing parameters define the temporal characteristics that govern how accurately bits can be transmitted and received: the bit period, rise and fall times, jitter, and duty cycle summarized in the time-domain parameter table above.
Power Parameters
Power considerations are critical for system design, especially at scale: larger voltage swings buy noise margin but cost power, and that trade-off multiplies across the thousands of links in a modern network.
Noise margin quantifies how much noise a signal can tolerate before errors occur. It's defined as the difference between the output level and the input threshold: NM_H = V_OH - V_IH (high-level margin) and NM_L = V_IL - V_OL (low-level margin). Larger margins mean more robust communication but often require more power or bandwidth. System designers must balance reliability against efficiency.
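The noise-margin definitions are simple subtractions; the sketch below applies them to the TTL 5V row of the logic-family table above:

```python
def noise_margins(v_oh: float, v_ol: float,
                  v_ih: float, v_il: float) -> tuple[float, float]:
    """NM_H = V_OH - V_IH ;  NM_L = V_IL - V_OL."""
    return v_oh - v_ih, v_il - v_ol

# TTL 5V levels from the table: both margins come out to 0.4 V
nm_h, nm_l = noise_margins(v_oh=2.4, v_ol=0.4, v_ih=2.0, v_il=0.8)
assert abs(nm_h - 0.4) < 1e-9 and abs(nm_l - 0.4) < 1e-9
```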
Understanding signal representation in theory is only the first step. Practical implementation introduces numerous challenges that network engineers must navigate.
Impedance Matching
Signal integrity depends critically on characteristic impedance matching throughout the transmission path. When a signal encounters an impedance discontinuity (where source, line, or load impedance differs), a portion of the signal energy reflects back:
$$\Gamma = \frac{Z_L - Z_0}{Z_L + Z_0}$$
Where $\Gamma$ is the reflection coefficient, $Z_L$ is the load impedance, and $Z_0$ is the characteristic impedance. Common characteristic impedances include 50 Ω (RF systems and coaxial test equipment), 75 Ω (video and CATV coax), and 100 Ω differential (twisted-pair Ethernet cabling).
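The reflection coefficient formula can be explored directly. The sketch checks the two limiting cases (a matched load reflects nothing; an open circuit reflects everything) plus a common 75 Ω-into-50 Ω mismatch:

```python
def reflection_coefficient(z_load: float, z0: float) -> float:
    """Γ = (Z_L - Z_0) / (Z_L + Z_0)."""
    return (z_load - z0) / (z_load + z0)

assert reflection_coefficient(50, 50) == 0.0              # matched: no reflection
assert reflection_coefficient(75, 50) == 0.2              # 20% of amplitude reflected
assert abs(reflection_coefficient(1e12, 50) - 1.0) < 1e-9  # open circuit: Γ → +1
```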
Differential Signaling: The Modern Solution
Many modern high-speed interfaces use differential signaling rather than single-ended (ground-referenced) signals. Instead of one wire carrying the signal referenced to ground, two wires carry complementary signals:
$$V_{signal} = V_+ - V_-$$
The receiver ignores the absolute voltage on either wire and responds only to their difference. This provides several advantages: noise that couples equally onto both conductors (common-mode noise) cancels in the subtraction, the complementary currents produce largely self-cancelling electromagnetic emissions, and smaller voltage swings can be used, reducing power consumption.
Differential signaling is used in twisted-pair Ethernet, USB, PCI Express, SATA, HDMI, and LVDS display links, among other modern high-speed interfaces.
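Common-mode rejection is easy to demonstrate numerically. The 350 mV swing below is borrowed from the LVDS row of the logic-family table; the noise model (identical disturbance on both conductors) is the illustrative assumption:

```python
import random

def differential_receive(v_plus: float, v_minus: float) -> float:
    """The receiver responds only to the difference between the two wires."""
    return v_plus - v_minus

random.seed(7)
signal = 0.35                                   # intended differential swing (V)
common_mode_noise = random.uniform(-2.0, 2.0)   # couples equally onto both wires

v_plus = +signal / 2 + common_mode_noise
v_minus = -signal / 2 + common_mode_noise

# Identical noise on both conductors cancels exactly in the difference.
assert abs(differential_receive(v_plus, v_minus) - signal) < 1e-12
```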
The shift to differential signaling in modern networking isn't accidental—it's a direct response to the challenges of high-speed digital transmission. As data rates increased from Mbps to Gbps, single-ended signaling couldn't maintain signal integrity over practical distances. Differential signaling enables reliable multi-gigabit communication over affordable copper cables, representing a triumph of signal representation engineering.
We have established the foundational concepts of how digital signals are represented in computer networks. These concepts underpin everything from physical layer specifications to network architecture decisions.
What's Next:
With a firm understanding of how digital signals are represented, we now turn to bit rate and baud rate—two often-confused concepts that define the information-carrying capacity of a digital signal. Understanding their relationship and distinction is essential for analyzing any digital communication system.
You now possess a comprehensive understanding of digital signal representation—the foundation upon which all digital transmission techniques are built. This knowledge will inform every topic that follows, from encoding schemes to modulation techniques to channel capacity analysis.