Imagine a world where network engineers could program their switches the way developers program applications—where routing decisions, security policies, and traffic management weren't buried in proprietary firmware but exposed through a clean, standardized API. This vision became reality with OpenFlow, the protocol that launched the Software-Defined Networking revolution.
Before OpenFlow, every network switch was an autonomous island. Each device ran its own control logic, made its own forwarding decisions, and communicated through its own proprietary interfaces. Network-wide coordination required manual configuration across dozens or hundreds of devices. Policy changes took weeks. Innovation moved at the pace of hardware vendors.
OpenFlow changed everything by establishing a universal interface between the control plane and the data plane. It defined how a centralized controller could instruct switches on exactly how to handle every packet—transforming network devices from independent decision-makers into programmable forwarding engines.
This page provides an exhaustive exploration of the OpenFlow protocol: its origins, its architecture, its message types, its evolution across versions, and its fundamental role in enabling truly software-defined networks. By the end, you will understand not just what OpenFlow does, but why it matters and how it works at the deepest technical level.
By completing this page, you will understand: the historical context that demanded OpenFlow, the protocol's layered architecture, the complete taxonomy of OpenFlow messages, the security mechanisms protecting controller-switch communication, version-specific features from OpenFlow 1.0 to 1.5, and the practical implications for network programmability. This knowledge forms the foundation for understanding all subsequent SDN concepts.
The Problem with Traditional Networks
To appreciate why OpenFlow was revolutionary, we must first understand the architectural ossification that plagued traditional networks. In conventional network infrastructure, every switch and router combined two distinct functions:
The Control Plane: Decision-making logic that determines how packets should be handled—routing table computation, spanning tree calculations, access control evaluation, QoS classification
The Data Plane (Forwarding Plane): Packet processing logic that actually moves packets based on control plane decisions—table lookups, header modifications, port selection, queuing
These two planes were vertically integrated within each device. A Cisco switch ran Cisco's control logic. A Juniper router ran Juniper's algorithms. There was no standardized way to separate them.
This vertical integration created what researchers called 'Internet ossification'—the inability to innovate in network infrastructure at the pace of applications. While software applications evolved monthly, network protocols evolved over decades. Introducing a new routing algorithm required convincing vendors to implement it in their proprietary firmware, then waiting for customer upgrade cycles. Innovation was structurally blocked.
The Stanford Research Breakthrough
The solution emerged from Stanford University's computer science department in the mid-2000s. Researchers Nick McKeown, Martin Casado, and their colleagues asked a fundamental question: What if we could separate the control plane from the data plane and expose forwarding hardware through a standard interface?
This concept crystallized into OpenFlow, first described in the seminal 2008 paper "OpenFlow: Enabling Innovation in Campus Networks" published in ACM SIGCOMM Computer Communication Review. The insight was elegantly simple: leave packet forwarding in the switch's existing hardware tables, but let an external software controller program those tables through a small, standard protocol.
The name "OpenFlow" reflects this vision: an open standard for controlling flow through network devices.
| Year | Milestone | Significance |
|---|---|---|
| 2008 | "OpenFlow: Enabling Innovation in Campus Networks" published | The whitepaper that defined the protocol and launched SDN research |
| 2009 | OpenFlow 1.0 specification released | First complete protocol definition suitable for deployment |
| 2011 | Open Networking Foundation (ONF) formed | Industry consortium to govern OpenFlow standards |
| 2011 | OpenFlow 1.1 specification | Multiple tables, groups, MPLS support |
| 2011 | OpenFlow 1.2 specification | IPv6 support, extensible (OXM) match fields |
| 2012 | Google announces B4 SDN WAN | First hyperscale production OpenFlow deployment |
| 2012 | OpenFlow 1.3 specification (LTS) | Meters, per-flow stats, auxiliary connections |
| 2013 | OpenFlow 1.4 specification | Bundles, synchronized tables, optical ports |
| 2014 | OpenFlow 1.5 specification | Egress tables, scheduled bundles, copy-field action |
| 2015 | OpenFlow 1.5.1 (current) | Minor corrections and clarifications |
Why OpenFlow Became the Standard
Several factors contributed to OpenFlow's dominance as the SDN southbound interface:
Simplicity: OpenFlow started with a deliberately minimal specification. The 1.0 version was simple enough that graduate students could implement both switches and controllers, accelerating research adoption.
Hardware compatibility: OpenFlow leveraged the Ternary Content-Addressable Memory (TCAM) already present in commodity switches for ACL processing. It didn't require new hardware—just new firmware.
Vendor neutrality: The Open Networking Foundation governed the specification through an open process, preventing any single vendor from controlling the standard.
Research community momentum: Stanford's reputation and the compelling vision attracted researchers worldwide, creating a critical mass of implementations, experiments, and publications.
By 2012, OpenFlow had transcended academia. Google announced that its entire B4 backbone network—connecting data centers globally—ran on OpenFlow. Microsoft, Facebook, and Amazon followed with their own SDN deployments. The protocol had proven itself at hyperscale.
OpenFlow defines a precise architecture for controller-switch interaction. Understanding this architecture is essential for grasping how SDN actually works at the implementation level.
The Three-Layer Model
The OpenFlow architecture involves three distinct layers: the controller layer, where the SDN control logic runs; the OpenFlow channel, the communication path that carries protocol messages; and the switch layer, the datapath whose flow, group, and meter tables actually forward packets. The following subsections examine the channel and the switch in turn.
The OpenFlow Channel
The OpenFlow channel is the communication path between a controller and a switch. This channel carries OpenFlow protocol messages that enable the controller to configure the switch and receive notifications about network events.
Key characteristics of the OpenFlow channel: it runs over TCP (optionally secured with TLS), it carries every OpenFlow message exchanged between controller and switch, it is initiated by the switch toward a configured controller address, and from OpenFlow 1.3 onward it may be supplemented by auxiliary connections for parallelism.
OpenFlow originally used TCP port 6633 (unofficially assigned). When IANA officially registered the OpenFlow protocol in 2013, port 6653 became the standard. Many legacy deployments still use 6633, so controllers often listen on both ports.
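Underneath, the channel is ordinary TCP. As a minimal controller-side sketch (assuming POSIX sockets, listening only on the official port, with error handling abbreviated), accepting a switch connection looks like this; the switch always initiates the connection:

```c
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &one, sizeof one);

    struct sockaddr_in addr = {0};
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(6653);   /* IANA-assigned OpenFlow port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(fd, 16) < 0) {
        perror("bind/listen");
        return 1;
    }

    /* A production controller would also listen on legacy port 6633
       and wrap each accepted connection in TLS. */
    int sw = accept(fd, NULL, NULL);
    printf("switch connected (fd=%d); next comes the HELLO exchange\n", sw);
    close(sw);
    close(fd);
    return 0;
}
```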
The OpenFlow Switch Components
An OpenFlow switch consists of several key components that work together to enable programmable forwarding:
1. Flow Tables: The heart of an OpenFlow switch is its flow table(s). Each flow table contains a set of flow entries that determine how packets are processed. OpenFlow 1.0 supported a single table; later versions support multiple tables processed in a pipeline.
2. Group Table: Introduced in OpenFlow 1.1, the group table enables operations that span multiple flow entries—multicast, load balancing, fast failover. Group entries reference multiple action buckets that can be executed together or selectively.
3. Meter Table: Added in OpenFlow 1.3, the meter table enables rate-limiting and QoS operations. Meters can measure packet rates and apply actions (drop, remark) when thresholds are exceeded.
4. OpenFlow Channel: The secure channel connecting the switch to the controller, as described above.
5. OpenFlow Protocol: The set of messages and procedures that govern controller-switch communication.
The OpenFlow protocol defines a comprehensive set of message types that enable all controller-switch interactions. Understanding these messages is essential for implementing SDN applications and debugging OpenFlow networks.
Messages are categorized by their direction and purpose:
Controller-to-Switch Messages
These messages allow the controller to manage switch state, query information, and send packets through the switch:
| Message Type | Purpose | Response |
|---|---|---|
| FEATURES_REQUEST | Query switch capabilities (datapath ID, buffer count, number of tables, supported actions) | FEATURES_REPLY |
| GET_CONFIG_REQUEST | Query switch configuration (miss handling, maximum packet bytes to send to the controller) | GET_CONFIG_REPLY |
| SET_CONFIG | Set switch configuration parameters | None |
| FLOW_MOD | Add, modify, or delete flow entries in tables | None (use BARRIER for confirmation) |
| GROUP_MOD | Add, modify, or delete group table entries | None |
| METER_MOD | Add, modify, or delete meter entries (OF 1.3+) | None |
| PACKET_OUT | Send a packet out through a specified port | None |
| BARRIER_REQUEST | Ensure all previous messages have been processed | BARRIER_REPLY |
| MULTIPART_REQUEST | Query statistics or table configuration (replaces STATS in OF 1.3+) | MULTIPART_REPLY |
| ROLE_REQUEST | Set or query controller role (master/slave/equal) | ROLE_REPLY |
| GET_ASYNC_REQUEST | Query asynchronous message delivery settings | GET_ASYNC_REPLY |
| SET_ASYNC | Configure which async messages to receive | None |
| BUNDLE_CONTROL | Manage bundles for atomic flow modifications (OF 1.4+) | BUNDLE_CONTROL |
| BUNDLE_ADD_MESSAGE | Add message to a bundle | None |
Asynchronous Messages
Switches send these messages without explicit controller request, typically to notify about network events or request guidance:
| Message Type | Trigger | Purpose |
|---|---|---|
| PACKET_IN | Packet matches no flow entry, or flow entry specifies 'send to controller' | Deliver packet (or portion) to controller for processing/decision |
| FLOW_REMOVED | Flow entry expires (idle timeout, hard timeout) or is deleted | Notify controller of flow removal, include match counters |
| PORT_STATUS | Port configuration changes (up/down, speed change) | Notify controller of physical network topology changes |
| ROLE_STATUS | Controller role change (OF 1.4+) | Notify when master/slave designation changes |
| TABLE_STATUS | Table configuration change (OF 1.4+) | Notify when table properties are modified |
| REQUESTFORWARD | Switch forwards a group or meter modification requested by one controller (OF 1.4+) | Keep other controllers informed so multi-controller state stays synchronized |
Symmetric Messages
These messages can be initiated by either the controller or the switch:
| Message Type | Purpose | Details |
|---|---|---|
| HELLO | Establish connection, negotiate version | Contains highest supported OpenFlow version; both sides use lowest common version |
| ECHO_REQUEST | Keepalive / latency measurement | Can contain arbitrary data that must be echoed back |
| ECHO_REPLY | Response to ECHO_REQUEST | Returns same data received in request |
| ERROR | Report errors in message processing | Contains error type, code, and failing message data |
| EXPERIMENTER | Vendor-specific extensions | Provides mechanism for proprietary features outside standard |
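To make the HELLO rule concrete, here is a small sketch of version selection (the helper name is ours; version constants match the header definitions shown later). If the result is a version either side cannot actually run, the spec calls for an ERROR of type OFPET_HELLO_FAILED and connection teardown:

```c
#include <stdint.h>
#include <stdio.h>

#define OFP_VERSION_1_0 0x01
#define OFP_VERSION_1_3 0x04
#define OFP_VERSION_1_5 0x06

/* Basic negotiation: run at the lower of the two advertised versions.
   OpenFlow 1.3.1+ HELLOs may also carry a version bitmap element so
   non-contiguous version sets can be negotiated precisely. */
static uint8_t negotiate_version(uint8_t ours, uint8_t theirs) {
    return ours < theirs ? ours : theirs;
}

int main(void) {
    /* A 1.5 controller meeting a 1.3 switch talks 1.3 (0x04). */
    printf("negotiated: 0x%02x\n",
           negotiate_version(OFP_VERSION_1_5, OFP_VERSION_1_3));
    return 0;
}
```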
PACKET_IN is perhaps the most critical message type. Every unmatched packet generates a PACKET_IN, which travels to the controller, incurs processing delay, and may trigger a FLOW_MOD and PACKET_OUT in response. This 'first-packet penalty' can create bottlenecks if not managed. Strategies include: proactive flow installation (install flows before traffic), flow aggregation (use wildcards), and local switch fallback (default actions for unhandled traffic).
Every OpenFlow message begins with a common header that enables parsing and routing. Understanding this structure is essential for protocol-level debugging and implementation.
OpenFlow Header Format
All OpenFlow messages share an 8-byte header:
```c
/* OpenFlow header structure (8 bytes) */
struct ofp_header {
    uint8_t  version;  /* OpenFlow version: 0x01 = 1.0, 0x02 = 1.1, 0x03 = 1.2,
                          0x04 = 1.3, 0x05 = 1.4, 0x06 = 1.5 */
    uint8_t  type;     /* Message type (see ofp_type enum) */
    uint16_t length;   /* Total message length including header (big-endian) */
    uint32_t xid;      /* Transaction ID for matching request/reply pairs */
};

/* Message types for OpenFlow 1.3 */
enum ofp_type {
    /* Symmetric messages */
    OFPT_HELLO                    = 0,   /* Controller <-> Switch */
    OFPT_ERROR                    = 1,   /* Controller <-> Switch */
    OFPT_ECHO_REQUEST             = 2,   /* Controller <-> Switch */
    OFPT_ECHO_REPLY               = 3,   /* Controller <-> Switch */
    OFPT_EXPERIMENTER             = 4,   /* Controller <-> Switch */

    /* Controller-to-switch messages */
    OFPT_FEATURES_REQUEST         = 5,   /* Controller -> Switch */
    OFPT_FEATURES_REPLY           = 6,   /* Switch -> Controller */
    OFPT_GET_CONFIG_REQUEST       = 7,   /* Controller -> Switch */
    OFPT_GET_CONFIG_REPLY         = 8,   /* Switch -> Controller */
    OFPT_SET_CONFIG               = 9,   /* Controller -> Switch */

    /* Asynchronous messages */
    OFPT_PACKET_IN                = 10,  /* Switch -> Controller */
    OFPT_FLOW_REMOVED             = 11,  /* Switch -> Controller */
    OFPT_PORT_STATUS              = 12,  /* Switch -> Controller */

    /* Controller-to-switch messages */
    OFPT_PACKET_OUT               = 13,  /* Controller -> Switch */
    OFPT_FLOW_MOD                 = 14,  /* Controller -> Switch */
    OFPT_GROUP_MOD                = 15,  /* Controller -> Switch */
    OFPT_PORT_MOD                 = 16,  /* Controller -> Switch */
    OFPT_TABLE_MOD                = 17,  /* Controller -> Switch */

    /* Multipart messages (statistics) */
    OFPT_MULTIPART_REQUEST        = 18,  /* Controller -> Switch */
    OFPT_MULTIPART_REPLY          = 19,  /* Switch -> Controller */

    /* Barrier messages */
    OFPT_BARRIER_REQUEST          = 20,  /* Controller -> Switch */
    OFPT_BARRIER_REPLY            = 21,  /* Switch -> Controller */

    /* Queue configuration */
    OFPT_QUEUE_GET_CONFIG_REQUEST = 22,  /* Controller -> Switch */
    OFPT_QUEUE_GET_CONFIG_REPLY   = 23,  /* Switch -> Controller */

    /* Role messages */
    OFPT_ROLE_REQUEST             = 24,  /* Controller -> Switch */
    OFPT_ROLE_REPLY               = 25,  /* Switch -> Controller */

    /* Async configuration */
    OFPT_GET_ASYNC_REQUEST        = 26,  /* Controller -> Switch */
    OFPT_GET_ASYNC_REPLY          = 27,  /* Switch -> Controller */
    OFPT_SET_ASYNC                = 28,  /* Controller -> Switch */

    /* Meter messages */
    OFPT_METER_MOD                = 29   /* Controller -> Switch */
};
```

Field-by-Field Breakdown
version (1 byte): Identifies the OpenFlow protocol version. Both controller and switch send HELLO messages with their highest supported version. The connection operates at the lower of the two versions. This enables backward compatibility—a 1.5 controller can communicate with a 1.3 switch at OF 1.3.
type (1 byte): Specifies the message type from the ofp_type enumeration. Each type has a defined message structure following the header.
length (2 bytes): Total message length in bytes, including the header. Stored in network byte order (big-endian). The maximum message length is 65535 bytes, though practical limits may be lower due to buffer constraints.
xid (4 bytes): Transaction ID chosen by the message initiator. Replies copy this ID from the corresponding request, enabling request-response correlation—critical when controllers manage thousands of switches with in-flight queries.
Controllers should use monotonically increasing XIDs to simplify debugging in packet captures. Some implementations embed timestamps or request types in the XID for faster correlation. Avoid XID collisions by maintaining per-switch XID counters.
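Putting the fields together, the sketch below serializes a FEATURES_REQUEST, which is just a bare header, the way a controller might emit it after HELLO. The per-switch XID counter follows the tip above; the function name is ours:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>   /* htons/htonl for network byte order */

#define OFP_VERSION_1_3       0x04
#define OFPT_FEATURES_REQUEST 5

struct ofp_header {
    uint8_t  version;
    uint8_t  type;
    uint16_t length;   /* big-endian on the wire */
    uint32_t xid;      /* big-endian on the wire */
};

static uint32_t next_xid = 1;   /* per-switch counter avoids collisions */

/* Serialize a bare-header message (FEATURES_REQUEST has no body). */
static size_t build_features_request(uint8_t *buf) {
    struct ofp_header h;
    h.version = OFP_VERSION_1_3;
    h.type    = OFPT_FEATURES_REQUEST;
    h.length  = htons(sizeof h);       /* 8 bytes: header only */
    h.xid     = htonl(next_xid++);
    memcpy(buf, &h, sizeof h);
    return sizeof h;
}

int main(void) {
    uint8_t buf[8];
    size_t n = build_features_request(buf);
    for (size_t i = 0; i < n; i++) printf("%02x ", buf[i]);
    printf("\n");   /* expected: 04 05 00 08 00 00 00 01 */
    return 0;
}
```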
Example: FLOW_MOD Message Structure
Let's examine a complete message structure. The FLOW_MOD message is the primary mechanism for installing, modifying, and deleting flow table entries:
```c
/* Flow modification message - primary flow table programming mechanism */
struct ofp_flow_mod {
    struct ofp_header header;

    /* Flow identification */
    uint64_t cookie;          /* Controller-assigned flow identifier */
    uint64_t cookie_mask;     /* Mask for cookie matching (modify/delete) */

    /* Table and command */
    uint8_t  table_id;        /* Target flow table ID */
    uint8_t  command;         /* OFPFC_ADD, OFPFC_MODIFY, OFPFC_DELETE, etc. */

    /* Timeouts */
    uint16_t idle_timeout;    /* Seconds before idle expiration (0 = never) */
    uint16_t hard_timeout;    /* Seconds before absolute expiration (0 = never) */

    /* Priority and buffer */
    uint16_t priority;        /* Flow entry priority (higher = matched first) */
    uint32_t buffer_id;       /* Buffered packet ID (PACKET_IN context) */

    /* Ports for delete commands */
    uint32_t out_port;        /* Output port filter for delete */
    uint32_t out_group;       /* Output group filter for delete */

    /* Flags */
    uint16_t flags;           /* OFPFF_SEND_FLOW_REM, OFPFF_CHECK_OVERLAP, etc. */
    uint8_t  pad[2];          /* Alignment padding */

    /* Variable-length match and instructions */
    struct ofp_match match;   /* Packet matching criteria (TLV format in 1.2+) */
    /* struct ofp_instruction instructions[];  -- follows match */
};

/* Flow modification commands */
enum ofp_flow_mod_command {
    OFPFC_ADD           = 0,  /* Add new flow entry */
    OFPFC_MODIFY        = 1,  /* Modify matching flow entries */
    OFPFC_MODIFY_STRICT = 2,  /* Modify entry with exact priority/match */
    OFPFC_DELETE        = 3,  /* Delete matching flow entries */
    OFPFC_DELETE_STRICT = 4   /* Delete entry with exact priority/match */
};

/* Flow modification flags */
enum ofp_flow_mod_flags {
    OFPFF_SEND_FLOW_REM = 1 << 0,  /* Send FLOW_REMOVED when entry expires */
    OFPFF_CHECK_OVERLAP = 1 << 1,  /* Check for overlapping entries first */
    OFPFF_RESET_COUNTS  = 1 << 2,  /* Reset packet/byte counters */
    OFPFF_NO_PKT_COUNTS = 1 << 3,  /* Don't track packet count */
    OFPFF_NO_BYT_COUNTS = 1 << 4   /* Don't track byte count */
};
```

The OpenFlow channel between controller and switch is a high-value target for attackers. Compromising this channel would grant an adversary complete control over network forwarding. The OpenFlow specification therefore defines security mechanisms to protect controller-switch communication.
TLS Encryption
OpenFlow strongly recommends (and many deployments require) TLS encryption for the OpenFlow channel. TLS gives the channel three protections: mutual authentication through certificate exchange, encryption that keeps flow rules and packet payloads confidential, and integrity checks that detect tampering in transit. Plain TCP remains permitted by the specification, but it leaves the channel open to eavesdropping and message injection.
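As a concrete illustration, a controller-side OpenSSL context enforcing mutual TLS might be set up along the following lines (a sketch: the file paths and function name are placeholders, and error checking is omitted):

```c
#include <openssl/ssl.h>
#include <openssl/err.h>

/* Sketch: build an SSL_CTX enforcing mutual authentication for the
   OpenFlow channel. Paths are placeholders for your PKI. */
SSL_CTX *make_openflow_tls_ctx(void) {
    SSL_CTX *ctx = SSL_CTX_new(TLS_server_method());   /* controller side */

    /* Our certificate and key, presented to connecting switches. */
    SSL_CTX_use_certificate_file(ctx, "controller-cert.pem", SSL_FILETYPE_PEM);
    SSL_CTX_use_PrivateKey_file(ctx, "controller-key.pem", SSL_FILETYPE_PEM);

    /* Trust anchor used to validate switch certificates. */
    SSL_CTX_load_verify_locations(ctx, "ca-cert.pem", NULL);

    /* Require a valid switch certificate: reject unauthenticated peers. */
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER | SSL_VERIFY_FAIL_IF_NO_PEER_CERT,
                       NULL);
    return ctx;
}
```

The SSL_VERIFY_FAIL_IF_NO_PEER_CERT flag is what makes the authentication mutual: a switch that cannot present a certificate signed by the trusted CA is rejected during the handshake.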
Certificate Management
Production OpenFlow deployments require careful certificate management:
Switch certificates: Each switch needs a unique certificate identifying itself to the controller. Certificates can be pre-provisioned or distributed through secure enrollment protocols.
Controller certificates: The controller presents its certificate to switches, enabling switches to verify they're connecting to the legitimate controller, not an attacker performing a man-in-the-middle attack.
Certificate validation: Switches should validate controller certificates against a trusted CA. Similarly, controllers should validate switch certificates to prevent rogue switches from joining the network.
Revocation: Certificate revocation (CRL or OCSP) enables immediate removal of compromised switch or controller credentials.
The controller is a single point of control for the entire OpenFlow network. If compromised, an attacker gains total visibility and control—they can redirect all traffic, create invisible backdoors, or bring down the network. Controller security requires: isolated management networks, strict access control, comprehensive logging, runtime integrity monitoring, and rapid incident response capabilities.
Controller Role Mechanism
OpenFlow 1.2 introduced the controller role mechanism to support multi-controller deployments. This enables high availability and load distribution:
| Role | Can Modify? | Receives Async? | Use Case |
|---|---|---|---|
| OFPCR_ROLE_MASTER | Yes | Yes | Active controller with full control |
| OFPCR_ROLE_SLAVE | No | Port-status only | Standby controller for failover |
| OFPCR_ROLE_EQUAL | Yes | Yes | Distributed control (default) |
| OFPCR_ROLE_NOCHANGE | — | — | Query current role without changing |
The role mechanism uses generation IDs to prevent split-brain scenarios. When a controller claims the master role, it includes a generation ID. The switch only accepts the claim if the generation ID is higher than any previously seen, ensuring a strict ordering of mastership claims.
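The staleness test is a signed circular comparison on the 64-bit generation ID, so the counter survives wraparound. A minimal sketch:

```c
#include <stdint.h>
#include <stdbool.h>

/* A ROLE_REQUEST claiming master or slave is discarded as stale when its
   generation_id lags the largest value the switch has cached; the signed
   subtraction handles wraparound of the 64-bit counter. */
static bool generation_is_stale(uint64_t incoming, uint64_t cached) {
    return (int64_t)(incoming - cached) < 0;
}

int main(void) {
    /* cached = 5: equal or newer IDs are accepted, older ones are not. */
    return (generation_is_stale(5, 5) == false &&
            generation_is_stale(4, 5) == true &&
            generation_is_stale(UINT64_MAX, 0) == true) ? 0 : 1;
}
```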
Fail Modes
OpenFlow switches must handle controller connection failures gracefully. Two modes are defined:
Fail-secure mode: The switch continues to use existing flow entries but cannot add new entries. Expired entries are removed normally. This mode maintains network operation during transient controller failures.
Fail-standalone mode: The switch reverts to traditional Ethernet switching behavior (MAC learning, STP). This mode ensures basic connectivity even with prolonged controller unavailability, but loses SDN-specific policies.
Operators choose the fail mode based on their availability versus consistency requirements.
The OpenFlow protocol has evolved substantially since its 2008 inception. Each version added capabilities that expanded what SDN networks could achieve. Understanding version differences is critical for deployment planning and interoperability.
OpenFlow 1.3 (June 2012) - Long-Term Support
The most widely deployed OpenFlow version. Designated LTS for stability.
Key Additions: per-flow meters for rate limiting, the table-miss flow entry (miss handling becomes an ordinary priority-0 rule), auxiliary connections that parallelize the OpenFlow channel, per-connection filtering of asynchronous messages via SET_ASYNC, IPv6 extension-header matching, and cookies carried in PACKET_IN messages.
Meter Bands: each meter carries one or more bands, and the band whose configured rate is exceeded is applied. OpenFlow 1.3 defines two band types: drop (discard packets above the rate, yielding a rate limiter) and DSCP remark (increase the packet's drop precedence, yielding a simple DiffServ-style policer).
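To ground this, here is a sketch of the wire structures for installing a meter with a single drop band (layouts abridged from the OF 1.3 specification; the builder function and the chosen values are ours):

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Abridged OpenFlow 1.3 wire structures (see the spec for full detail). */
struct ofp_header { uint8_t version, type; uint16_t length; uint32_t xid; };

struct ofp_meter_band_drop {      /* OFPMBT_DROP band: discard over rate */
    uint16_t type;                /* 1 = OFPMBT_DROP */
    uint16_t len;                 /* 16 bytes */
    uint32_t rate;                /* kb/s or pkt/s, per meter flags */
    uint32_t burst_size;
    uint8_t  pad[4];
};

struct ofp_meter_mod {
    struct ofp_header header;     /* type 29 = OFPT_METER_MOD */
    uint16_t command;             /* 0 = OFPMC_ADD */
    uint16_t flags;               /* 1 = OFPMF_KBPS */
    uint32_t meter_id;
    struct ofp_meter_band_drop band;  /* one band, for this sketch */
};

/* Build "meter 7: drop anything beyond 10 Mb/s" in network byte order. */
static void build_rate_limiter(struct ofp_meter_mod *m, uint32_t xid) {
    memset(m, 0, sizeof *m);
    m->header.version  = 0x04;                /* OpenFlow 1.3 */
    m->header.type     = 29;                  /* OFPT_METER_MOD */
    m->header.length   = htons(sizeof *m);
    m->header.xid      = htonl(xid);
    m->command         = htons(0);            /* OFPMC_ADD */
    m->flags           = htons(1);            /* OFPMF_KBPS */
    m->meter_id        = htonl(7);
    m->band.type       = htons(1);            /* OFPMBT_DROP */
    m->band.len        = htons(16);
    m->band.rate       = htonl(10000);        /* 10,000 kb/s */
    m->band.burst_size = htonl(0);
}
```

Flows are then attached to the meter by adding a Meter instruction to their FLOW_MOD instruction lists.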
Why LTS Matters: designating 1.3 as long-term support meant corrections were back-ported without wire-format changes, giving vendors a stable target for ASIC and firmware roadmaps. That stability is a major reason hardware and controller support consolidated around 1.3 rather than later releases.
Status: De facto standard. Most production deployments target OF 1.3.
For new deployments, target OpenFlow 1.3 as the baseline. It offers the best balance of features, vendor support, and stability. Only move to 1.4/1.5 if you specifically need bundles, egress tables, or scheduled changes—and verify your hardware supports these versions.
Let's trace through a complete OpenFlow interaction to see how the protocol operates in practice. We'll follow a new switch connecting to a controller and handling its first packet.
Phase 1: Connection Establishment
The switch initiates a TCP (or TLS) connection to its configured controller address. Both sides exchange HELLO messages carrying their highest supported version and proceed at the lower of the two. The controller sends FEATURES_REQUEST, and the switch's FEATURES_REPLY reports its datapath ID, buffer count, table count, and capabilities. The controller typically follows with SET_CONFIG and a BARRIER_REQUEST, after which the channel is ready for flow programming.
Phase 2: Reactive Flow Installation
When a new packet arrives at the switch with no matching flow, the following exchange occurs: (1) the switch sends a PACKET_IN carrying the packet, or its first bytes plus a buffer_id; (2) the controller inspects the headers and consults its policy; (3) the controller sends a FLOW_MOD (OFPFC_ADD with a 60-second idle_timeout) installing an entry for this traffic; (4) a PACKET_OUT releases the first packet itself; (5) every subsequent packet matches the new entry and is forwarded at wire speed without controller involvement. A sketch of step 3 appears below.
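Here is how the controller might assemble step 3, filling only the fixed fields of the ofp_flow_mod structure shown earlier (the variable-length OXM match on the packet's addresses and the output instruction are elided; the struct and function names are ours):

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

struct ofp_header { uint8_t version, type; uint16_t length; uint32_t xid; };

/* Fixed portion of ofp_flow_mod (OF 1.3); the variable-length OXM match
   and instruction list that follow on the wire are elided in this sketch. */
struct ofp_flow_mod_fixed {
    struct ofp_header header;   /* type 14 = OFPT_FLOW_MOD */
    uint64_t cookie, cookie_mask;
    uint8_t  table_id, command;
    uint16_t idle_timeout, hard_timeout, priority;
    uint32_t buffer_id, out_port, out_group;
    uint16_t flags;
    uint8_t  pad[2];
};

static void build_reactive_install(struct ofp_flow_mod_fixed *fm,
                                   uint32_t xid, uint32_t buffer_id) {
    memset(fm, 0, sizeof *fm);
    fm->header.version = 0x04;              /* OpenFlow 1.3 */
    fm->header.type    = 14;                /* OFPT_FLOW_MOD */
    /* header.length must also cover the match and instructions,
       so it is filled in after those TLVs are appended. */
    fm->header.xid   = htonl(xid);
    fm->table_id     = 0;                   /* first table */
    fm->command      = 0;                   /* OFPFC_ADD */
    fm->idle_timeout = htons(60);           /* expire after 60 idle seconds */
    fm->priority     = htons(100);
    fm->buffer_id    = htonl(buffer_id);    /* release the buffered packet */
    fm->flags        = htons(1 << 0);       /* OFPFF_SEND_FLOW_REM */
}
```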
Key Observations from This Flow:
The first-packet penalty: The initial packet incurs controller round-trip delay. Subsequent matching packets are switched locally at wire speed.
Buffer management: The switch may buffer the packet locally (buffer_id) rather than sending the entire packet to the controller. The controller references this buffer in PACKET_OUT, avoiding double-transit of packet data.
Reactive vs. proactive: This example shows reactive flow installation triggered by traffic. Proactive approaches pre-install flows before traffic arrives, eliminating first-packet delay but requiring traffic prediction.
Timeout management: The 60-second idle_timeout means the flow expires if unused for 60 seconds, automatically cleaning up stale entries.
Match granularity: The controller chose IP-level matching. It could have used more specific (TCP ports) or broader (MAC addresses, wildcards) matching based on policy.
If the switch has no buffering or the buffer is full, buffer_id in PACKET_IN is set to OFP_NO_BUFFER (0xffffffff). In this case, the full packet is included in the PACKET_IN data field, and the controller must include the full packet in PACKET_OUT as well.
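A small sketch of that branch on the controller side (OFP_NO_BUFFER as defined by the spec; the helper and its parameters are illustrative):

```c
#include <stdint.h>
#include <string.h>

#define OFP_NO_BUFFER 0xffffffffu

/* Decide how much packet data a PACKET_OUT must carry. If the switch
   buffered the packet, we reference the buffer and send no payload;
   otherwise we must echo the full packet back. Returns the number of
   payload bytes appended after the actions. */
static size_t packet_out_payload(uint32_t buffer_id,
                                 const uint8_t *pkt, size_t pkt_len,
                                 uint8_t *out, size_t out_cap) {
    if (buffer_id != OFP_NO_BUFFER)
        return 0;                      /* switch replays from its buffer */
    if (pkt_len > out_cap)
        return 0;                      /* caller must grow its buffer */
    memcpy(out, pkt, pkt_len);         /* unbuffered: include full packet */
    return pkt_len;
}
```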
OpenFlow is the foundational protocol that enabled the Software-Defined Networking revolution. By standardizing the interface between network control logic and forwarding hardware, it broke the vertical integration that had locked innovation for decades.
What's Next:
Now that you understand the OpenFlow protocol at the message level, we'll dive deeper into flow tables—the heart of OpenFlow switches. You'll learn how flow entries are structured, how matching works, and how actions transform packets. This knowledge is essential for designing efficient SDN forwarding rules.
You now possess a comprehensive understanding of the OpenFlow protocol—its genesis, architecture, message types, security mechanisms, and version evolution. This knowledge forms the essential foundation for understanding all other OpenFlow concepts. Next, we explore flow tables in detail.