The OpenFlow protocol is elegant in specification, but networks ultimately depend on switches—the physical or virtual devices that actually forward packets. An OpenFlow switch is where abstract flow tables meet wire-speed reality, where control plane intelligence translates into data plane action.
OpenFlow switches span an enormous spectrum: from $100 software switches running on commodity servers to $100,000 chassis systems forwarding terabits per second. Some are purpose-built for OpenFlow; others are traditional switches with OpenFlow added as a feature. Some run on specialized ASICs; others on general-purpose CPUs.
Understanding this landscape is essential for successful SDN deployment.
This page explores OpenFlow switch implementations in exhaustive depth: switch architectures and ASICs, hardware switch capabilities, software switch implementations, hybrid deployments, and practical selection criteria. By the end, you'll know how to evaluate and deploy OpenFlow switches for any environment.
By completing this page, you will understand: OpenFlow switch internal architecture, the role of switching ASICs and their programmability, hardware switch capabilities and limitations, software switch implementations (Open vSwitch, BESS), white-box switches and merchant silicon, performance benchmarking methodology, and selection criteria for production deployments.
Before examining specific OpenFlow implementations, let's understand the fundamental architecture of network switches.
The Three-Tier Switch Model
Modern switches can be conceptualized as three tiers:
Control Plane
The switch's CPU runs the switch operating system, the OpenFlow agent that maintains the controller connection, and the management interfaces (CLI, SNMP, APIs).
The control plane is typically a modest embedded processor (ARM, x86) running Linux or a proprietary OS. OpenFlow messages are processed here.
Data Plane
The switch's ASIC handles wire-speed packet processing: parsing headers, performing table lookups, and executing forwarding actions.
The data plane ASIC is often the most expensive component, determining the switch's capabilities.
Physical Layer
PHYs and SerDes handle the electrical/optical interface to network cables. Different port types (SFP+, QSFP28, etc.) support different speeds and media.
The switching ASIC determines real-world OpenFlow capabilities. TCAM size limits flow table capacity. Parser flexibility limits which fields can be matched. Action engine capabilities limit which modifications are supported. When evaluating switches, always verify ASIC capabilities—marketing claims may not reflect hardware reality.
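A practical starting point for that verification is simply asking the switch what it claims to support. The minimal sketch below assumes a Ryu controller and OpenFlow 1.3 (consistent with the benchmarking code later on this page); it logs the advertised table count, capability bits, and the hardware/software description strings at connection time. Advertised numbers are only a hint: real ASIC limits still need load testing.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import set_ev_cls, CONFIG_DISPATCHER, MAIN_DISPATCHER
from ryu.ofproto import ofproto_v1_3


class CapabilityProbe(app_manager.RyuApp):
    """Log what a connecting switch claims to support (sketch, not exhaustive)."""

    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def features_handler(self, ev):
        msg = ev.msg
        dp = msg.datapath
        # n_tables is the pipeline depth the switch advertises at handshake time.
        self.logger.info("dpid=%016x n_tables=%d capabilities=0x%08x",
                         dp.id, msg.n_tables, msg.capabilities)
        # Ask for the vendor/hardware/software description strings.
        dp.send_msg(dp.ofproto_parser.OFPDescStatsRequest(dp, 0))

    @set_ev_cls(ofp_event.EventOFPDescStatsReply, MAIN_DISPATCHER)
    def desc_handler(self, ev):
        d = ev.msg.body
        self.logger.info("mfr=%s hw=%s sw=%s", d.mfr_desc, d.hw_desc, d.sw_desc)
```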
The switching ASIC is the heart of hardware OpenFlow switches. Understanding ASIC capabilities and limitations is essential for realistic SDN deployment.
Major ASIC Vendors
| Vendor | Key Products | OpenFlow Support | Notes |
|---|---|---|---|
| Broadcom | Trident, Tomahawk, Jericho | Comprehensive | Dominant merchant silicon. Used in most white-box switches. |
| Barefoot/Intel | Tofino (P4) | Full programmability | P4-programmable. Maximum flexibility but new technology. |
| Mellanox/NVIDIA | Spectrum | Good | Strong in datacenter, HPC integration. |
| Marvell | Prestera | Varies | Carrier-grade equipment. |
| Custom | Cisco UADP, Juniper | Vendor-specific | Proprietary ASICs for vendor switches. |
TCAM Characteristics
The TCAM (Ternary Content-Addressable Memory) stores wildcard flow entries. Key characteristics:
| Aspect | Typical Range | Impact |
|---|---|---|
| Entry count | 2K - 64K entries | Limits number of wildcard flows |
| Entry width | 320 - 640 bits | Fields that can be matched simultaneously |
| Lookup speed | ~10 nanoseconds | Wire-speed at all port rates |
| Power consumption | ~15W per Mbit | Limits practical TCAM size |
| Update speed | 1K - 10K updates/sec | Rate of flow modifications |
Vendors advertise total TCAM entries, but usable OpenFlow entries are often fewer. Reasons: (1) some TCAM is shared with ACL and routing tables, (2) wide matches consume multiple entries, (3) priority encoding may consume entries. Always test actual capacity with your flow patterns.
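To make these numbers concrete, here is a back-of-envelope calculation in Python. The 4 Mbit TCAM size is hypothetical; the ~15 W/Mbit figure comes from the table above, and real parts vary, so treat this as rough arithmetic only.

```python
def tcam_estimate(tcam_mbit: float, entry_width_bits: int,
                  watts_per_mbit: float = 15.0) -> dict:
    """Back-of-envelope TCAM capacity and power estimate (illustrative only)."""
    total_bits = tcam_mbit * 1_000_000  # using 10^6 bits per Mbit
    return {
        "entries": int(total_bits // entry_width_bits),
        "power_watts": tcam_mbit * watts_per_mbit,
    }


# Hypothetical 4 Mbit TCAM:
#   320-bit entries -> ~12,500 wildcard flows, ~60 W
#   640-bit (wide) entries -> ~6,250 wildcard flows, same power
for width in (320, 640):
    print(width, tcam_estimate(4, width))
```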
Match Field Support
Not all ASICs support all OpenFlow match fields; support for less common fields (for example IPv6 extension headers, MPLS labels, and pipeline metadata) varies widely, so verify that the fields your application needs are handled in hardware rather than in a software slow path.
Action Limitations
Similarly, action support varies from ASIC to ASIC: set-field targets, push/pop operations, and group types may be implemented fully in hardware, partially, or not at all. The sketch below shows one way to query what a switch advertises.
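The OpenFlow table-features request reports, per table, the match fields, instructions, and actions a switch claims to support. A minimal Ryu 1.3 sketch follows; it only logs the headline numbers, and since some implementations report table features incompletely, treat the reply as a hint rather than a guarantee.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import set_ev_cls, CONFIG_DISPATCHER, MAIN_DISPATCHER
from ryu.ofproto import ofproto_v1_3


class TableFeatureProbe(app_manager.RyuApp):
    """Dump per-table limits the switch advertises (sketch)."""

    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def features_handler(self, ev):
        dp = ev.msg.datapath
        # An empty body asks the switch to report its current table features.
        dp.send_msg(dp.ofproto_parser.OFPTableFeaturesStatsRequest(dp, 0, []))

    @set_ev_cls(ofp_event.EventOFPTableFeaturesStatsReply, MAIN_DISPATCHER)
    def table_features_handler(self, ev):
        for t in ev.msg.body:
            # The properties list carries supported match fields, instructions,
            # and actions; here we only log the headline numbers per table.
            self.logger.info("table=%d name=%s max_entries=%d props=%d",
                             t.table_id, t.name, t.max_entries,
                             len(t.properties))
```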
Multi-Table Pipelines
OpenFlow specifies up to 255 tables, but ASIC support is limited:
| ASIC Generation | Tables | Notes |
|---|---|---|
| Early (2010-2013) | 1-2 tables | Limited OpenFlow 1.0 support |
| Mainstream (2014-2018) | 4-16 tables | Adequate for most applications |
| Modern (2019+) | 16-64 tables | Full OpenFlow pipeline support |
| P4-programmable | Arbitrary | User-defined table count and structure |
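To illustrate how a controller consumes a multi-table pipeline like those in the table above, the Ryu 1.3 fragment below chains two tables with a goto-table instruction. The table IDs, priorities, and output port are illustrative; the point is the structure, not a specific design.

```python
def install_two_stage_pipeline(datapath):
    """Sketch: table 0 classifies, table 1 forwards (table IDs are illustrative)."""
    parser = datapath.ofproto_parser
    ofproto = datapath.ofproto

    # Table 0: hand IPv4 traffic to table 1 for the forwarding decision.
    match = parser.OFPMatch(eth_type=0x0800)
    inst = [parser.OFPInstructionGotoTable(1)]
    datapath.send_msg(parser.OFPFlowMod(
        datapath=datapath, table_id=0, priority=10,
        match=match, instructions=inst))

    # Table 1: concrete forwarding rule for one destination.
    match = parser.OFPMatch(eth_type=0x0800, ipv4_dst='10.0.0.1')
    actions = [parser.OFPActionOutput(2)]
    inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
    datapath.send_msg(parser.OFPFlowMod(
        datapath=datapath, table_id=1, priority=10,
        match=match, instructions=inst))
```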
Let's examine the major categories of hardware OpenFlow switches:
1. Branded Vendor Switches
Traditional networking vendors (Cisco, Juniper, Arista, HP) have added OpenFlow support to their existing product lines.
| Vendor | Product Examples | OpenFlow Version | Notes |
|---|---|---|---|
| HP/Aruba | FlexNetwork, Aruba CX | 1.0-1.3 | Early SDN adopter, good support |
| Arista | 7000 series | 1.0-1.3 | Strong datacenter presence |
| Cisco | Nexus (limited) | 1.0-1.3 | Limited, favors own ACI solution |
| Juniper | QFX, EX series | 1.0-1.3 | Integrated with Contrail |
| Dell/EMC | PowerSwitch | 1.0-1.3 | Good white-box alternative |
Advantages: Proven hardware, vendor support, integration with existing management.
Disadvantages: Higher cost, OpenFlow often secondary feature, may lag protocol versions.
2. White-Box Switches
White-box switches are hardware from ODMs (Original Design Manufacturers) sold without traditional vendor software. Users install their own NOS (Network Operating System).
| ODM | Example Models | Typical ASIC | Form Factors |
|---|---|---|---|
| Edgecore | AS7712, AS7816 | Broadcom Tomahawk/Trident | 1U, 25G/100G ToR |
| Delta | AG9032v2, AG5648 | Broadcom | 1U, ToR and spine |
| Quanta | QuantaMesh | Broadcom | Datacenter focus |
| Celestica | Seastone | Broadcom, Barefoot | P4-capable options |
| Mellanox | Spectrum-based | Mellanox Spectrum | HPC-optimized |
Network operating systems for white-box hardware include SONiC, Cumulus Linux, and PICOS (all installable via ONIE, described below); the NOS, not just the ASIC, determines which OpenFlow versions and features are actually exposed.
Advantages: Low cost (often 50-70% less than branded), flexibility, rapid innovation.
Disadvantages: DIY support, integration burden, less turnkey.
3. Bare Metal Switches
Bare metal switches are white-box hardware with basic firmware—not even a stock NOS. Users install everything from ONIE (Open Network Install Environment).
ONIE (Open Network Install Environment) is a small boot loader for white-box switches. It enables installing any compatible NOS via USB, PXE, or HTTP. ONIE-compatible switches can run SONiC, Cumulus, PICOS, or your custom image. It's the 'BIOS' of network switches.
Software switches run on commodity server hardware, implementing OpenFlow in kernel modules or user-space processes. They trade raw performance for flexibility and cost.
Open vSwitch (OVS)
OVS is the dominant software switch for OpenFlow. It's the backbone of most cloud and virtualization SDN deployments.
```bash
# Create an OVS bridge with OpenFlow enabled
ovs-vsctl add-br br0

# Configure OpenFlow controller
ovs-vsctl set-controller br0 tcp:192.168.1.100:6653

# Set OpenFlow version (1.3)
ovs-vsctl set bridge br0 protocols=OpenFlow13

# Add physical ports to bridge
ovs-vsctl add-port br0 eth0
ovs-vsctl add-port br0 eth1

# Add internal port (for host access)
ovs-vsctl add-port br0 internal0 -- set Interface internal0 type=internal

# Configure fail mode (secure = drop on controller failure)
ovs-vsctl set-fail-mode br0 secure

# View OpenFlow flows
ovs-ofctl -O OpenFlow13 dump-flows br0

# Manually add a flow (for testing)
ovs-ofctl -O OpenFlow13 add-flow br0 \
    "priority=100,ip,nw_dst=10.0.0.1,actions=output:2"

# View switch statistics
ovs-ofctl -O OpenFlow13 dump-ports br0

# Enable DPDK for high performance
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl add-br br-dpdk -- set bridge br-dpdk datapath_type=netdev
ovs-vsctl add-port br-dpdk dpdk0 -- set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=0000:03:00.0
```

OVS Architecture
OVS splits into three cooperating components: ovsdb-server holds the configuration database, ovs-vswitchd implements the switch logic and speaks OpenFlow to the controller, and a datapath (kernel module, DPDK, AF_XDP, or SmartNIC offload) caches flow decisions for fast-path forwarding, with cache misses handled in userspace.
OVS Performance Characteristics
| Datapath | Throughput | Latency | CPU Usage | Use Case |
|---|---|---|---|---|
| Kernel | 1-10 Gbps | 50-200 μs | Moderate | General virtualization |
| DPDK | 25-100 Gbps | 10-50 μs | Dedicated cores | NFV, high-speed forwarding |
| AF_XDP | 10-50 Gbps | 20-100 μs | Moderate | Modern Linux (5.x+) |
| Offload (SmartNIC) | 25-100 Gbps | ~10 μs | Minimal | Hardware-accelerated cloud |
Other Software Switches
Beyond OVS, BESS (Berkeley Extensible Software Switch) provides a modular, DPDK-based dataplane aimed at NFV workloads; it favors a programmable module pipeline over strict OpenFlow compatibility.
Software switches excel when: (1) Flexibility > raw performance, (2) Virtualization/cloud integration needed, (3) Tunneling required, (4) Rapid iteration on forwarding logic, (5) SmartNIC offload available. Use hardware switches when: (1) Maximum port density needed, (2) Multi-terabit capacity required, (3) Microsecond latency critical.
Modern deployments often combine hardware and software switching for optimal cost/performance balance.
Hybrid Switch Deployments
Hybrid deployments run OpenFlow and traditional switching simultaneously on the same device, typically by dedicating specific ports or VLANs to the OpenFlow pipeline, or by falling back to the conventional L2/L3 pipeline for traffic OpenFlow does not claim.
This allows incremental SDN adoption without full infrastructure replacement.
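On switches that implement the reserved NORMAL port (it is optional in the OpenFlow specification), one common hybrid pattern is a low-priority fallback rule that hands unmatched traffic to the traditional L2/L3 pipeline. A minimal Ryu 1.3 sketch, with an illustrative priority:

```python
def install_normal_fallback(datapath):
    """Sketch: send anything OpenFlow does not explicitly handle to the
    switch's traditional forwarding pipeline (OFPP_NORMAL)."""
    parser = datapath.ofproto_parser
    ofproto = datapath.ofproto

    match = parser.OFPMatch()  # wildcard: matches everything
    actions = [parser.OFPActionOutput(ofproto.OFPP_NORMAL)]
    inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
    datapath.send_msg(parser.OFPFlowMod(
        datapath=datapath, priority=0, match=match, instructions=inst))
```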
SmartNICs: Hardware-Accelerated Software Switching
SmartNICs (DPUs) combine programmable network processing with standard NICs:
| Vendor | Product | Processor | OpenFlow/SDN Support |
|---|---|---|---|
| NVIDIA/Mellanox | ConnectX-6/BlueField | ARM cores + ASIC | OVS offload, custom |
| Intel | IPU (Mt. Evans) | Xeon cores + FPGA | OVS offload, P4 |
| Pensando/AMD | DPU DSC-25 | ARM cores + ASIC | Custom pipeline |
| Xilinx/AMD | Alveo | FPGA | P4-programmable |
| Netronome | Agilio | NPU | Strong OpenFlow |
OVS Hardware Offload
Open vSwitch can offload flows to compatible SmartNICs, achieving hardware performance with software flexibility:
```bash
# Enable hardware offload in OVS
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true

# Enable TC (traffic control) flower offload
ovs-vsctl set Open_vSwitch . other_config:tc-policy=skip_sw

# Verify offload is working
ovs-appctl dpctl/dump-flows --names type=offloaded
# Output shows flows handled by NIC hardware

# Check NIC offload statistics
ethtool -S enp3s0f0 | grep offload

# Example offloaded flow (from dump-flows):
# recirc_id(0),in_port(eth0),eth(src=...,dst=...),eth_type(0x0800),
# ipv4(src=10.0.0.1,dst=10.0.0.2), packets:1000000, bytes:64000000,
# used:0.001s, offloaded:yes, dp:tc, actions:output(eth1)
```

Not all flows can be offloaded. Limitations: (1) complex actions may require CPU, (2) tunneling offload varies by NIC, (3) meter/group offload is often unsupported, (4) connection tracking offload is vendor-specific. Monitor 'offloaded:yes/no' in flow dumps to verify what's accelerated.
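To keep an eye on how much of the datapath is actually accelerated, the offload status in those flow dumps can be summarized programmatically. A small sketch, assuming OVS is installed and `ovs-appctl` is runnable by the current user:

```python
import subprocess


def offload_summary() -> dict:
    """Count offloaded vs. non-offloaded datapath flows (sketch).

    Parses the same `ovs-appctl dpctl/dump-flows` output shown above;
    assumes OVS is installed and the command succeeds for this user.
    """
    out = subprocess.run(
        ["ovs-appctl", "dpctl/dump-flows", "--names"],
        capture_output=True, text=True, check=True).stdout

    lines = [line for line in out.splitlines() if line.strip()]
    offloaded = sum(1 for line in lines if "offloaded:yes" in line)
    return {"total": len(lines), "offloaded": offloaded,
            "not_offloaded": len(lines) - offloaded}


if __name__ == "__main__":
    print(offload_summary())
```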
Selecting the right OpenFlow switch requires understanding performance characteristics. Let's examine how to benchmark and evaluate switch performance.
Key Performance Metrics
| Metric | Definition | Measurement Method |
|---|---|---|
| Throughput | Packets/second or bits/second at wire speed | Traffic generators (IXIA, Spirent, TRex) |
| Latency | Time from packet ingress to egress | Hardware timestamping, specialized probes |
| Jitter | Variance in latency | Statistical analysis of latency samples |
| Flow capacity | Maximum concurrent flow entries | Progressive flow loading until rejection |
| Flow setup rate | New flows installed per second | Controller flow-mod stress test |
| PACKET_IN rate | Table-miss pkts handled per second | Unknown-flow traffic generation |
Benchmarking Methodology
```python
"""
OpenFlow Switch Benchmarking Framework
Tests flow capacity, flow setup rate, and PACKET_IN handling.
"""

import time

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import set_ev_cls, MAIN_DISPATCHER
from ryu.ofproto import ofproto_v1_3


class SwitchBenchmarker(app_manager.RyuApp):
    """Benchmark OpenFlow switch capabilities."""

    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.test_results = {}

    def test_flow_capacity(self, datapath, max_flows=100000):
        """
        Test maximum flow table capacity.
        Progressively install flows until errors.
        """
        parser = datapath.ofproto_parser
        ofproto = datapath.ofproto

        successful_flows = 0
        start_time = time.time()

        for i in range(max_flows):
            # Generate unique match per flow
            match = parser.OFPMatch(
                eth_type=0x0800,
                ipv4_dst=self._int_to_ip(0x0A000000 + i)  # 10.x.x.x
            )
            actions = [parser.OFPActionOutput(1)]
            inst = [parser.OFPInstructionActions(
                ofproto.OFPIT_APPLY_ACTIONS, actions)]

            mod = parser.OFPFlowMod(
                datapath=datapath,
                priority=100,
                match=match,
                instructions=inst
            )

            try:
                datapath.send_msg(mod)
                successful_flows += 1

                # Periodically check for errors via barrier
                if i % 1000 == 0:
                    barrier = parser.OFPBarrierRequest(datapath)
                    datapath.send_msg(barrier)
                    # Small delay to receive potential errors
                    time.sleep(0.01)
            except Exception as e:
                print(f"Flow installation failed at {i}: {e}")
                break

        elapsed = time.time() - start_time
        self.test_results['flow_capacity'] = {
            'max_flows': successful_flows,
            'time_seconds': elapsed,
            'flows_per_second': successful_flows / elapsed
        }
        return self.test_results['flow_capacity']

    def test_flow_setup_rate(self, datapath, duration_sec=10):
        """
        Test sustained flow installation rate.
        Install flows as fast as possible for duration.
        """
        parser = datapath.ofproto_parser
        ofproto = datapath.ofproto

        # First, clear existing flows
        self._clear_all_flows(datapath)

        start_time = time.time()
        flows_installed = 0

        while time.time() - start_time < duration_sec:
            # Install batch of 100 flows
            for i in range(100):
                match = parser.OFPMatch(
                    eth_type=0x0800,
                    ipv4_dst=self._int_to_ip(0x0A000000 + flows_installed)
                )
                actions = [parser.OFPActionOutput(1)]
                inst = [parser.OFPInstructionActions(
                    ofproto.OFPIT_APPLY_ACTIONS, actions)]

                mod = parser.OFPFlowMod(
                    datapath=datapath,
                    priority=100,
                    match=match,
                    instructions=inst,
                    idle_timeout=5  # Auto-cleanup
                )
                datapath.send_msg(mod)
                flows_installed += 1

            # Barrier every batch
            barrier = parser.OFPBarrierRequest(datapath)
            datapath.send_msg(barrier)

        elapsed = time.time() - start_time
        self.test_results['flow_setup_rate'] = {
            'total_flows': flows_installed,
            'duration_sec': elapsed,
            'flows_per_second': flows_installed / elapsed
        }
        return self.test_results['flow_setup_rate']

    def test_packet_in_rate(self, datapath, duration_sec=10):
        """
        Test PACKET_IN handling rate.
        Count PACKET_IN messages received over duration.
        Requires external traffic generator sending unknown flows.
        """
        self._packet_in_count = 0
        self._test_start = time.time()
        self._test_duration = duration_sec

        # Table-miss should be configured to send to controller
        # External traffic generator sends unique flows
        # Handler increments count (see _packet_in_handler)
        time.sleep(duration_sec)

        self.test_results['packet_in_rate'] = {
            'total_packet_in': self._packet_in_count,
            'duration_sec': duration_sec,
            'packet_in_per_second': self._packet_in_count / duration_sec
        }
        return self.test_results['packet_in_rate']

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def _packet_in_handler(self, ev):
        """Count PACKET_IN messages during test."""
        if hasattr(self, '_test_start'):
            elapsed = time.time() - self._test_start
            if elapsed < self._test_duration:
                self._packet_in_count += 1

    def _int_to_ip(self, ip_int):
        """Convert integer to IP address string."""
        return '.'.join([str((ip_int >> i) & 0xFF)
                         for i in [24, 16, 8, 0]])

    def _clear_all_flows(self, datapath):
        """Delete all flows from switch."""
        parser = datapath.ofproto_parser
        ofproto = datapath.ofproto

        match = parser.OFPMatch()
        mod = parser.OFPFlowMod(
            datapath=datapath,
            command=ofproto.OFPFC_DELETE,
            out_port=ofproto.OFPP_ANY,
            out_group=ofproto.OFPG_ANY,
            match=match
        )
        datapath.send_msg(mod)

        barrier = parser.OFPBarrierRequest(datapath)
        datapath.send_msg(barrier)
        time.sleep(0.5)
```

Reference Performance Numbers
For context, here are approximate performance ranges:
| Switch Type | Throughput | Flow Capacity | Flow Setup Rate |
|---|---|---|---|
| ToR Hardware (Trident) | 6.5 Tbps | 16K-64K TCAM | 5K-20K flows/sec |
| Spine Hardware (Tomahawk) | 12.8+ Tbps | 32K-128K TCAM | 10K-50K flows/sec |
| OVS (Kernel) | 1-10 Gbps | 1M+ flows (DRAM) | 50K-100K flows/sec |
| OVS (DPDK) | 25-100 Gbps | 1M+ flows | 100K-500K flows/sec |
| SmartNIC Offload | 25-100 Gbps | 128K-1M flows | 10K-100K flows/sec |
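When relating these bits-per-second figures to flow and PACKET_IN rates, it helps to convert line rate into packets per second. The short calculation below uses standard Ethernet framing overhead (preamble, start-of-frame delimiter, and inter-frame gap add 20 bytes per frame):

```python
def wire_speed_pps(link_gbps: float, frame_bytes: int = 64) -> float:
    """Max packets/sec at line rate for a given frame size.

    Each frame occupies frame_bytes + 20 bytes on the wire
    (7B preamble + 1B SFD + 12B inter-frame gap).
    """
    bits_per_frame = (frame_bytes + 20) * 8
    return link_gbps * 1e9 / bits_per_frame


# 10 GbE at 64-byte frames -> ~14.88 Mpps; 100 GbE -> ~148.8 Mpps
for gbps in (10, 100):
    print(f"{gbps} Gbps: {wire_speed_pps(gbps) / 1e6:.2f} Mpps")
```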
Selecting the right OpenFlow switch requires balancing multiple factors. Here's a systematic framework:
1. Feature Requirements
Required OpenFlow version, match fields, actions, group/meter support, and table count, checked against the ASIC capability discussion earlier on this page.
2. Performance Requirements
Throughput, latency, flow table capacity, flow setup rate, and PACKET_IN handling, measured with the benchmarking methodology above.
3. Operational Considerations
NOS choice, support model (vendor versus community), monitoring and automation integration, and upgrade path.
4. Cost Factors
| Type | CapEx | OpEx | Total 3-Year TCO |
|---|---|---|---|
| Branded vendor | $$$$$ | $$$ | Highest |
| White-box + commercial NOS | $$$ | $$ | Medium-High |
| White-box + open source | $$$ | $ | Medium |
| Software switch (OVS) | $ | $$ | Low-Medium |
| SmartNIC offload | $$ | $ | Low-Medium |
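The dollar-sign ranking above can be turned into rough numbers once quotes are in hand. The sketch below uses entirely hypothetical per-switch prices purely to show the arithmetic; substitute your own CapEx and annual OpEx figures.

```python
def three_year_tco(capex: float, annual_opex: float, years: int = 3) -> float:
    """Simple total cost of ownership: purchase price plus recurring costs."""
    return capex + annual_opex * years


# Hypothetical per-switch figures for illustration only; substitute real quotes.
options = {
    "branded vendor": (30000, 6000),
    "white-box + commercial NOS": (12000, 4000),
    "white-box + open source": (10000, 2500),
    "software switch (OVS on existing servers)": (2000, 3000),
}

for name, (capex, opex) in options.items():
    print(f"{name}: ${three_year_tco(capex, opex):,.0f} over 3 years")
```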
Start with requirements, not products: (1) List must-have features, (2) Define performance floor, (3) Establish budget ceiling, (4) Shortlist products meeting all criteria, (5) PoC test with your workload, (6) Verify support/community. Never assume marketing claims—test everything with your actual traffic patterns.
OpenFlow switches—whether hardware or software—are where SDN meets reality. Understanding their architectures, capabilities, and limitations enables informed deployment decisions.
Module Complete: OpenFlow
You have now completed a comprehensive exploration of OpenFlow—the foundational southbound interface that enabled the SDN revolution. From protocol messages to flow tables, from match-action rules to controller communication, from hardware ASICs to software switches, you understand how OpenFlow transforms network infrastructure into programmable systems.
Key Module Takeaways: OpenFlow separates the control plane from the data plane through a standardized southbound protocol; flow tables implement match-action forwarding within the limits of ASIC parsers and TCAM capacity; switch options span branded hardware, white-box platforms, software switches, and SmartNIC offload; and real-world capabilities must always be verified by benchmarking with your own traffic.
Congratulations! You have mastered OpenFlow—the protocol that made Software-Defined Networking practical. You understand the complete stack from protocol messages through switch implementations. This knowledge positions you to design, deploy, and troubleshoot SDN networks. Continue to the next module to explore SDN controllers in depth.