Network policies in OpenFlow are expressed through match-action rules—a deceptively simple paradigm that hides remarkable expressive power. Each rule says: "If a packet matches these criteria, perform these operations."
This model has deep roots in computer science. Match-action generalizes the production rules of expert systems, the pattern-action pairs of AWK, and the guard-command structures of formal methods. In networking, it provides a uniform abstraction for everything from basic forwarding to complex traffic engineering.
Yet the simplicity is deceptive. OpenFlow defines dozens of action types. Actions can be immediate or deferred. Multiple actions compose into action sets with specific ordering semantics. Instructions wrap actions with additional control flow. And the interaction between match-action rules and multi-table pipelines creates a computational framework nearly as powerful as general-purpose programming.
This page explores the match-action paradigm in exhaustive depth: the complete action vocabulary, instruction types and their semantics, action composition rules, and practical examples showing how these primitives combine to implement real network behaviors. By the end, you'll think in match-action—seeing network policies as compositions of these fundamental building blocks.
By completing this page, you will understand: the complete taxonomy of OpenFlow actions, the distinction between immediate and deferred action execution, OpenFlow instruction types and their pipeline semantics, action set composition and ordering rules, special actions for controller communication and groups, and practical patterns for common network policies.
Conceptual Foundation
The match-action paradigm reduces all packet processing to a universal pattern:
If packet matches [criteria], then [transform and/or forward]
This simple structure is applied recursively across flow entries and tables to build arbitrarily complex policies. The power comes from composition: priorities arbitrate among overlapping rules, multiple tables chain decisions into pipelines, and a rich action vocabulary transforms packets along the way.
The Processing Model
When a packet enters an OpenFlow switch, it is looked up in the first flow table: the highest-priority matching entry wins, its instructions execute (applying some actions immediately, deferring others into the action set, possibly directing the packet to another table), and when the pipeline ends, the accumulated action set runs.
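As a concrete illustration, here is a minimal sketch of a single match-action entry in Ryu. Like the later examples, it assumes `datapath`, `ofproto`, and `parser` come from a connected OpenFlow 1.3 switch; the ports and priority are illustrative, not from the text.

```python
# "If a packet matches [TCP/80 arriving on port 1], then [forward out port 2]."
match = parser.OFPMatch(in_port=1, eth_type=0x0800, ip_proto=6, tcp_dst=80)

actions = [parser.OFPActionOutput(2)]
instructions = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]

# priority decides which entry wins when several entries match the same packet
flow_mod = parser.OFPFlowMod(datapath=datapath, priority=200,
                             match=match, instructions=instructions)
datapath.send_msg(flow_mod)
```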
Match vs. Action: Separation of Concerns
The match-action split is not merely syntactic—it represents fundamental separation:
Match: Pure observation. Matching examines packet headers but never modifies them. Matches are side-effect free. The same packet matched against the same entry always produces the same result.
Action: State mutation. Actions modify packet headers, update counters, change queuing, emit copies. Actions have observable effects.
This separation enables independent optimization of the two halves: classification can be compiled into TCAMs or hash tables, action execution can be pipelined in dedicated engines, and policies remain easy to reason about because matching never alters the packet.
Match-action rules are declarative—you describe WHAT should happen, not HOW. The switch figures out the efficient execution strategy. This abstraction enables hardware optimization: TCAMs for parallel matching, action engines for pipelining, buffers for atomic updates. Write policies at the intent level; let the hardware map them efficiently.
OpenFlow defines a comprehensive set of actions that cover all aspects of packet handling. We'll examine each category:
Output Actions
These actions determine where packets go:
| Action | Parameters | Description |
|---|---|---|
| output | port, max_len | Send packet out specified port. max_len for CONTROLLER port specifies bytes to send. |
| output:CONTROLLER | max_len | Send to controller as PACKET_IN. Use for reactive flow installation or exception handling. |
| output:ALL | — | Flood to all ports except ingress. Standard broadcast behavior. |
| output:FLOOD | — | Flood according to spanning tree (exclude blocked ports). |
| output:IN_PORT | — | Send back out the ingress port (hairpin). |
| output:TABLE | — | Resubmit to table 0 (used in PACKET_OUT context). |
| output:NORMAL | — | Submit to switch's normal L2/L3 processing. |
| output:LOCAL | — | Send to switch's local control plane (management interface). |
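As a brief sketch of how these reserved ports appear in practice (same Ryu assumptions as the other examples; the 128-byte truncation is illustrative):

```python
# Table-miss style entry: send unmatched packets to the controller,
# truncated to the first 128 bytes of each packet
actions_to_controller = [
    parser.OFPActionOutput(ofproto.OFPP_CONTROLLER, max_len=128)
]

# Broadcast handling: flood using the switch's flooding rules
# (excludes the ingress port and any blocked ports)
actions_flood = [parser.OFPActionOutput(ofproto.OFPP_FLOOD)]

# Hairpin: send the packet back out the port it arrived on
actions_hairpin = [parser.OFPActionOutput(ofproto.OFPP_IN_PORT)]
```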
Group Action
The group action enables multipath processing:
| Action | Parameters | Description |
|---|---|---|
| group | group_id | Execute the action buckets of the specified group table entry. Enables multicast, load balancing, fast failover. |
Groups are powerful because they allow a single action to trigger multiple output behaviors based on group type (ALL, SELECT, FAST_FAILOVER, INDIRECT).
Set-Field Actions
These actions modify packet headers:
| Action | Effect | Use Case |
|---|---|---|
| set_field:eth_src | Modify Ethernet source MAC | MAC rewriting for NAT, routing |
| set_field:eth_dst | Modify Ethernet destination MAC | L2 rewriting for next-hop delivery |
| set_field:ipv4_src | Modify IPv4 source address | Source NAT |
| set_field:ipv4_dst | Modify IPv4 destination address | Destination NAT, load balancing VIP rewriting |
| set_field:tcp_src | Modify TCP source port | Port translation (PAT) |
| set_field:tcp_dst | Modify TCP destination port | Service port remapping |
| set_field:ip_dscp | Modify DSCP value | QoS marking |
| set_field:vlan_vid | Modify VLAN ID | VLAN translation |
| set_field:mpls_label | Modify MPLS label | Label swapping |
OpenFlow 1.2+ uses the general 'set_field' action for all header modifications. Earlier versions had dedicated actions (set_vlan_vid, set_nw_src, etc.). The set_field approach is more extensible—any OXM match field can potentially be set, though hardware may limit which fields are actually modifiable.
Tag Manipulation Actions
These actions add or remove encapsulation headers:
| Action | Effect | Description |
|---|---|---|
| push_vlan | Add VLAN header | Push 802.1Q header with specified ethertype (0x8100 or 0x88a8) |
| pop_vlan | Remove VLAN header | Strip outermost VLAN tag |
| push_mpls | Add MPLS header | Push MPLS label stack entry |
| pop_mpls | Remove MPLS header | Pop outermost MPLS label |
| push_pbb | Add PBB header | Push Provider Backbone Bridge encapsulation |
| pop_pbb | Remove PBB header | Pop PBB encapsulation |
TTL Actions
These actions manage Time-To-Live fields for routing:
| Action | Effect | Use Case |
|---|---|---|
| dec_nw_ttl | Decrement IP TTL by 1 | L3 routing (each hop decrements TTL) |
| dec_mpls_ttl | Decrement MPLS TTL by 1 | MPLS forwarding |
| set_nw_ttl | Set IP TTL to specific value | Initialize TTL for encapsulated traffic |
| set_mpls_ttl | Set MPLS TTL to specific value | Initialize MPLS label stack |
| copy_ttl_in | Copy TTL from outer to inner header | Decapsulation (tunnel egress) |
| copy_ttl_out | Copy TTL from inner to outer header | Encapsulation (tunnel ingress) |
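A sketch of how these TTL and tag actions combine at an MPLS tunnel ingress and egress (label value, subnet, and ports are illustrative; same Ryu assumptions as above):

```python
# Tunnel ingress: encapsulate matching IPv4 traffic into MPLS
match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=("10.1.0.0", "255.255.0.0"))

actions = [
    parser.OFPActionPushMpls(0x8847),         # add an MPLS label stack entry
    parser.OFPActionSetField(mpls_label=16),  # label chosen by the control plane
    parser.OFPActionCopyTtlOut(),             # carry the IP TTL into the MPLS header
    parser.OFPActionOutput(4)                 # towards the MPLS core
]

# Tunnel egress (on the far-end switch): restore the TTL, then pop the label
actions_egress = [
    parser.OFPActionCopyTtlIn(),              # copy MPLS TTL back into the IP header
    parser.OFPActionPopMpls(0x0800),          # remove the label, ethertype reverts to IPv4
    parser.OFPActionOutput(1)
]
```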
QoS and Metering Actions
These actions affect packet scheduling and rate limiting:
| Action | Parameters | Description |
|---|---|---|
| set_queue | queue_id | Set the queue ID for output port queuing. Maps to DSCP-based or other QoS configurations. |
| meter | meter_id | Apply metering (rate limiting) defined in meter table. Can drop or remark exceeding traffic. |
Special Actions
| Action | Effect |
|---|---|
| (no action / empty list) | Drop the packet. Absence of output action means discard. |
| copy_field | Copy value from one OXM field to another (OF 1.5+). |
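As a sketch, a drop rule is simply a flow entry whose instructions produce no output (the address and priority are illustrative; same Ryu assumptions as above):

```python
# Block a host: matching packets are discarded because nothing outputs them
match = parser.OFPMatch(eth_type=0x0800, ipv4_src="192.0.2.66")
drop = parser.OFPFlowMod(datapath=datapath, priority=300,
                         match=match, instructions=[])
datapath.send_msg(drop)
```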
Actions don't appear alone in flow entries—they're wrapped in instructions that govern execution behavior. Instructions control WHEN and HOW actions execute.
Instruction Types
OpenFlow 1.1+ defines these instruction types:
| Instruction | Purpose | Execution Time |
|---|---|---|
| Apply-Actions | Execute contained actions immediately, in order | Now |
| Write-Actions | Add actions to action set (executed at pipeline exit) | Deferred |
| Clear-Actions | Remove all actions from action set | Now |
| Write-Metadata | Write metadata register for subsequent tables | Now |
| Goto-Table | Continue processing at specified table | Pipeline control |
| Meter | Apply rate limiting from meter table | Now |
Instruction Ordering
When a flow entry contains multiple instructions, they execute in a defined order: Meter first, then Apply-Actions, Clear-Actions, Write-Actions, Write-Metadata, and finally Goto-Table.
This ordering is fixed—you cannot reorder instructions within an entry.
Use Apply-Actions when the result must be visible immediately—e.g., modifying a header field that subsequent tables will match on. Use Write-Actions when you want to accumulate actions across tables that execute atomically at the end. A common pattern: early tables Apply-Actions to set metadata/modify headers; later tables Write-Actions to define final output behavior.
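A sketch of that pattern across two tables (table numbers and ports are illustrative; same Ryu assumptions as the other examples):

```python
# Table 0: apply a header rewrite immediately so table 1 can match on it,
# and start accumulating the eventual output in the action set
match_t0 = parser.OFPMatch(in_port=1, eth_type=0x0800)
instructions_t0 = [
    parser.OFPInstructionActions(
        ofproto.OFPIT_APPLY_ACTIONS,
        [parser.OFPActionSetField(ip_dscp=10)]   # visible to later tables
    ),
    parser.OFPInstructionActions(
        ofproto.OFPIT_WRITE_ACTIONS,
        [parser.OFPActionOutput(2)]              # deferred until pipeline exit
    ),
    parser.OFPInstructionGotoTable(1)
]

# Table 1: match on the rewritten field and override the deferred output
match_t1 = parser.OFPMatch(eth_type=0x0800, ip_dscp=10)
instructions_t1 = [
    parser.OFPInstructionActions(
        ofproto.OFPIT_WRITE_ACTIONS,
        [parser.OFPActionOutput(3)]              # replaces the earlier output in the action set
    )
    # no Goto-Table: the pipeline ends and the action set executes
]
```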
The Action Set
The action set is a per-packet data structure that accumulates actions across the pipeline. Key properties:
Singleton by type: The action set contains at most one action of each type. If you Write-Actions with set_field:ipv4_dst twice, the second replaces the first.
Defined execution order: When the packet exits the pipeline, actions execute in this order regardless of when they were added: copy TTL inwards, pop, push-MPLS, push-PBB, push-VLAN, copy TTL outwards, decrement TTL, set-field, set-queue, group, and finally output.
Atomic execution: All actions execute as an indivisible unit. Either all apply or none (on error).
Metadata Propagation
The metadata register is a 64-bit value passed between tables. Tables can read metadata in match fields and write it via Write-Metadata instruction.
Common uses include carrying classification results forward (tenant ID, trust level, traffic class), tagging packets for policy decisions in later tables, and avoiding repeated classification work across the pipeline.
Write-Metadata includes a mask, enabling partial updates:
```python
# Table 0: Classify traffic source
# Bit 0 = trusted (1) or untrusted (0)
# Bit 1 = internal (1) or external (0)

# Entry: Match trusted internal port, set metadata
match = parser.OFPMatch(in_port=1)  # Trusted internal port
instructions = [
    parser.OFPInstructionWriteMetadata(
        metadata=0b11,       # Set bits 0 and 1
        metadata_mask=0b11   # Mask: only modify bits 0-1
    ),
    parser.OFPInstructionGotoTable(1)
]

# Entry: Match external port, set metadata
match2 = parser.OFPMatch(in_port=10)  # External-facing port
instructions2 = [
    parser.OFPInstructionWriteMetadata(
        metadata=0b00,       # Clear bits 0 and 1
        metadata_mask=0b11
    ),
    parser.OFPInstructionGotoTable(1)
]

# Table 1: Match on metadata for policy
# Allow trusted internal to access server
match_trusted = parser.OFPMatch(
    eth_type=0x0800,         # Prerequisite for matching ipv4_dst
    metadata=(0b11, 0b11),   # (value, mask) - both bits set
    ipv4_dst="10.0.0.5"
)
instructions_allow = [
    parser.OFPInstructionActions(
        ofproto.OFPIT_APPLY_ACTIONS,
        [parser.OFPActionOutput(3)]
    )
]

# Block external to server
match_untrusted = parser.OFPMatch(
    eth_type=0x0800,
    metadata=(0b00, 0b11),   # Not trusted, not internal
    ipv4_dst="10.0.0.5"
)
instructions_block = []  # No instructions = drop
```

Individual actions are building blocks. Real network functions require composing multiple actions into coherent behaviors. Let's examine common composition patterns.
Pattern 1: L3 Routing
When routing across L3 boundaries, the packet needs MAC addresses rewritten to the next-hop:
```python
# Routing packet to 10.0.2.0/24 via gateway on port 3
# Gateway MAC: 00:00:00:00:00:03
# Router MAC (source): 00:00:00:00:00:01

match = parser.OFPMatch(
    eth_type=0x0800,
    ipv4_dst=("10.0.2.0", "255.255.255.0")  # Destination subnet
)

actions = [
    # Rewrite MACs for next hop
    parser.OFPActionSetField(eth_src="00:00:00:00:00:01"),  # Router's MAC
    parser.OFPActionSetField(eth_dst="00:00:00:00:00:03"),  # Next-hop MAC
    # Decrement TTL (critical for routing!)
    parser.OFPActionDecNwTtl(),
    # Output to next-hop port
    parser.OFPActionOutput(3)
]

instructions = [
    parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)
]
```

Pattern 2: NAT (Network Address Translation)
NAT requires rewriting IP addresses and potentially ports:
```python
# SNAT: Outbound traffic from 192.168.1.0/24 to internet
# Rewrite source IP to public NAT address 203.0.113.5

match_outbound = parser.OFPMatch(
    eth_type=0x0800,
    ipv4_src=("192.168.1.0", "255.255.255.0"),
    in_port=1  # Internal port
)

actions_snat = [
    parser.OFPActionSetField(ipv4_src="203.0.113.5"),  # NAT public IP
    parser.OFPActionOutput(2)  # External port
]

# DNAT: Inbound traffic to VIP 203.0.113.10:80
# Rewrite to internal server 192.168.1.100:8080

match_inbound = parser.OFPMatch(
    eth_type=0x0800,
    ip_proto=6,  # TCP
    ipv4_dst="203.0.113.10",
    tcp_dst=80
)

actions_dnat = [
    parser.OFPActionSetField(ipv4_dst="192.168.1.100"),
    parser.OFPActionSetField(tcp_dst=8080),
    parser.OFPActionOutput(1)  # Internal port
]
```

Pattern 3: VLAN Manipulation
VLAN tagging at network edges:
```python
# Access port: Tag untagged traffic with VLAN 100
match_untagged = parser.OFPMatch(
    in_port=1,
    vlan_vid=0x0000  # No VLAN tag present
)

actions_tag = [
    parser.OFPActionPushVlan(0x8100),           # Push 802.1Q header
    parser.OFPActionSetField(vlan_vid=0x1064),  # 0x1000 | 100 (VLAN 100)
    parser.OFPActionOutput(ofproto.OFPP_NORMAL)
]

# Trunk port: Already tagged traffic, just forward
match_tagged = parser.OFPMatch(
    in_port=1,
    vlan_vid=(0x1000, 0x1000)  # Any tag present (bit 12 set)
)

actions_forward = [
    parser.OFPActionOutput(ofproto.OFPP_NORMAL)
]

# VLAN translation: Change VLAN 100 to VLAN 200
match_vlan100 = parser.OFPMatch(
    in_port=2,
    vlan_vid=0x1064  # VLAN 100
)

actions_translate = [
    parser.OFPActionSetField(vlan_vid=0x10C8),  # VLAN 200
    parser.OFPActionOutput(3)
]
```

OpenFlow encodes VLAN presence in bit 12 of vlan_vid. Value 0x0000 means no tag. Values 0x1000-0x1FFF indicate tag present with VID in lower 12 bits. So VLAN 100 is represented as 0x1064 (0x1000 | 100). This encoding can be confusing—always use the OFPVID_PRESENT constant.
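For example, in Ryu the constant keeps the encoding explicit (a small sketch under the same assumptions as the other examples):

```python
# VLAN 100, written with the OFPVID_PRESENT bit instead of a magic 0x1064
match_vlan100 = parser.OFPMatch(vlan_vid=(ofproto.OFPVID_PRESENT | 100))

# "Any VLAN tag present": mask on just the present bit
match_any_tag = parser.OFPMatch(
    vlan_vid=(ofproto.OFPVID_PRESENT, ofproto.OFPVID_PRESENT))

# "Untagged only"
match_untagged = parser.OFPMatch(vlan_vid=ofproto.OFPVID_NONE)
```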
Pattern 4: QoS Marking and Queuing
Differentiate traffic treatment by ToS/DSCP and queue assignment:
```python
# Mark VoIP traffic (UDP ports 5060, 10000-20000) with EF DSCP
match_voip = parser.OFPMatch(
    eth_type=0x0800,
    ip_proto=17,             # UDP
    udp_dst=(5060, 0xFFFF)   # SIP signaling
)

actions_voip = [
    parser.OFPActionSetField(ip_dscp=46),  # EF (Expedited Forwarding)
    parser.OFPActionSetQueue(0),           # Priority queue
    parser.OFPActionOutput(3)
]

# Mark best-effort traffic with BE DSCP
match_default = parser.OFPMatch(
    eth_type=0x0800
)

actions_default = [
    parser.OFPActionSetField(ip_dscp=0),   # BE (Best Effort)
    parser.OFPActionSetQueue(3),           # Default queue
    parser.OFPActionOutput(3)
]
```

Pattern 5: Mirroring/SPAN
Copy packets to monitoring port while also forwarding normally:
```python
# Mirror all traffic from suspicious host to IDS
match_suspect = parser.OFPMatch(
    eth_src="00:00:00:AA:BB:CC"  # Suspicious host
)

actions_mirror = [
    # Send copy to IDS on port 99
    parser.OFPActionOutput(99),
    # Also forward normally through pipeline
    parser.OFPActionOutput(ofproto.OFPP_NORMAL)
]

# Note: Multiple output actions create multiple packet copies
# This effectively implements port mirroring/SPAN

# Alternative using group for more control
# ALL group sends to multiple buckets
group_id = 1
buckets = [
    parser.OFPBucket(actions=[parser.OFPActionOutput(99)]),                  # IDS
    parser.OFPBucket(actions=[parser.OFPActionOutput(ofproto.OFPP_NORMAL)])  # Normal
]

group_mod = parser.OFPGroupMod(
    datapath, ofproto.OFPGC_ADD, ofproto.OFPGT_ALL, group_id, buckets
)

# Then use group action in flow entry
actions_via_group = [parser.OFPActionGroup(group_id)]
```

The group action is OpenFlow's mechanism for implementing behaviors that can't be expressed as a single output port: multicast, load balancing, and fast failover. Understanding groups is essential for advanced SDN.
Group Table Structure
Each group table entry contains a group identifier, a group type, counters, and an ordered list of action buckets, where each bucket holds a set of actions to execute.
Group Types
| Type | Bucket Selection | Use Case |
|---|---|---|
| ALL | Execute all buckets | Multicast, broadcast, mirroring |
| SELECT | Execute one bucket (load balance/ECMP) | Load balancing across equal-cost paths |
| INDIRECT | Single bucket indirection | Decouple flows from output (next-hop abstraction) |
| FAST_FAILOVER | First live bucket | Sub-50ms failover on link failure |
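The INDIRECT type has no dedicated section below, so here is a brief sketch: a single-bucket group that many flow entries point at, so changing a next-hop means rewriting one group instead of every flow. The group ID, MAC, and port are illustrative, under the same Ryu assumptions as the other examples.

```python
# Next-hop abstraction: one INDIRECT group holds the rewrite + output
bucket = parser.OFPBucket(actions=[
    parser.OFPActionSetField(eth_dst="00:00:00:00:00:0a"),
    parser.OFPActionDecNwTtl(),
    parser.OFPActionOutput(7)
])

group_mod = parser.OFPGroupMod(
    datapath, ofproto.OFPGC_ADD, ofproto.OFPGT_INDIRECT, 300, [bucket])
datapath.send_msg(group_mod)

# Many routes can share this next-hop; updating it is a single group modification
actions = [parser.OFPActionGroup(300)]
```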
ALL Group: Multicast/Broadcast
The ALL group executes every bucket, creating packet copies for each:
```python
# Multicast group: Send to multiple ports simultaneously
# Used for 224.0.0.0/4 multicast or explicit broadcast

def create_multicast_group(datapath, group_id, ports):
    ofproto = datapath.ofproto
    parser = datapath.ofproto_parser

    buckets = []
    for port in ports:
        actions = [parser.OFPActionOutput(port)]
        bucket = parser.OFPBucket(actions=actions)
        buckets.append(bucket)

    group_mod = parser.OFPGroupMod(
        datapath,
        ofproto.OFPGC_ADD,   # Command: ADD
        ofproto.OFPGT_ALL,   # Type: ALL
        group_id,
        buckets
    )
    datapath.send_msg(group_mod)
    return group_id

# Usage: Create multicast group for VLAN 100 broadcast
multicast_group = create_multicast_group(datapath, 100, [2, 3, 4, 5])

# Flow entry using the group
match = parser.OFPMatch(
    vlan_vid=0x1064,              # VLAN 100
    eth_dst="ff:ff:ff:ff:ff:ff"   # Broadcast
)

actions = [parser.OFPActionGroup(multicast_group)]
instructions = [parser.OFPInstructionActions(
    ofproto.OFPIT_APPLY_ACTIONS, actions
)]
```

SELECT Group: Load Balancing (ECMP)
The SELECT group executes exactly one bucket, chosen by a selection algorithm (typically hash-based):
```python
# ECMP load balancing across 4 next-hops
# Selection based on hash of packet headers

def create_ecmp_group(datapath, group_id, next_hops):
    """
    next_hops: list of (port, dst_mac) tuples
    """
    ofproto = datapath.ofproto
    parser = datapath.ofproto_parser

    buckets = []
    for port, dst_mac in next_hops:
        actions = [
            parser.OFPActionSetField(eth_dst=dst_mac),
            parser.OFPActionDecNwTtl(),
            parser.OFPActionOutput(port)
        ]
        bucket = parser.OFPBucket(
            weight=1,                      # Equal weight distribution
            watch_port=port,               # Monitor this port for liveness
            watch_group=ofproto.OFPG_ANY,
            actions=actions
        )
        buckets.append(bucket)

    group_mod = parser.OFPGroupMod(
        datapath,
        ofproto.OFPGC_ADD,
        ofproto.OFPGT_SELECT,   # Type: SELECT
        group_id,
        buckets
    )
    datapath.send_msg(group_mod)

# Example: ECMP to 10.0.0.0/8 via 4 paths
next_hops = [
    (2, "00:00:00:00:00:02"),  # Path 1
    (3, "00:00:00:00:00:03"),  # Path 2
    (4, "00:00:00:00:00:04"),  # Path 3
    (5, "00:00:00:00:00:05"),  # Path 4
]
create_ecmp_group(datapath, group_id=200, next_hops=next_hops)
```

FAST_FAILOVER Group: Sub-50ms Recovery
The FAST_FAILOVER group executes the first bucket whose watch_port or watch_group is live. This enables dataplane-only failover without controller involvement:
```python
# Fast failover: Primary path on port 2, backup on port 3
# Switch monitors link state; no controller RTT on failure

def create_failover_group(datapath, group_id, primary_port, backup_port):
    ofproto = datapath.ofproto
    parser = datapath.ofproto_parser

    # Primary bucket: Used when primary_port is UP
    primary_bucket = parser.OFPBucket(
        weight=0,                      # Not used for FAST_FAILOVER
        watch_port=primary_port,       # Monitor this port
        watch_group=ofproto.OFPG_ANY,
        actions=[parser.OFPActionOutput(primary_port)]
    )

    # Backup bucket: Used when primary_port is DOWN
    backup_bucket = parser.OFPBucket(
        weight=0,
        watch_port=backup_port,
        watch_group=ofproto.OFPG_ANY,
        actions=[parser.OFPActionOutput(backup_port)]
    )

    # Order matters! First live bucket is selected
    buckets = [primary_bucket, backup_bucket]

    group_mod = parser.OFPGroupMod(
        datapath,
        ofproto.OFPGC_ADD,
        ofproto.OFPGT_FF,   # Type: FAST_FAILOVER
        group_id,
        buckets
    )
    datapath.send_msg(group_mod)

# When port 2 goes down, traffic automatically switches to port 3
# Recovery time is hardware link detection (~10-50ms), not controller RTT
```

FAST_FAILOVER achieves sub-50ms recovery because the switch monitors link state directly in hardware. When a link fails, the switch immediately selects the next live bucket—no controller communication required. This is critical for meeting telephony (99.999% uptime) and carrier-grade requirements.
The meter table (OpenFlow 1.3+) enables rate limiting directly in the dataplane. Meters can measure packet rates, byte rates, or both, applying configured actions when thresholds are exceeded.
Meter Structure
Each meter entry contains a meter identifier, one or more meter bands, and counters; each band specifies a rate, a burst size, and the action taken when that band applies.
Meter Bands
Bands define rate thresholds and actions when exceeded:
| Band Type | Action | Description |
|---|---|---|
| DROP | Discard packet | Simple rate limiting—excess packets are dropped |
| DSCP_REMARK | Lower DSCP precedence | Remark for downstream QoS handling instead of drop |
| EXPERIMENTER | Vendor-specific | Custom rate limiting behavior |
```python
# Create a meter for rate limiting to 10 Mbps
def create_rate_limit_meter(datapath, meter_id, rate_kbps):
    ofproto = datapath.ofproto
    parser = datapath.ofproto_parser

    # Band: Drop packets exceeding rate
    bands = [
        parser.OFPMeterBandDrop(
            rate=rate_kbps,        # Rate in kb/s
            burst_size=rate_kbps   # Burst tolerance
        )
    ]

    meter_mod = parser.OFPMeterMod(
        datapath,
        command=ofproto.OFPMC_ADD,
        flags=ofproto.OFPMF_KBPS,   # Rate in kb/s (not packets)
        meter_id=meter_id,
        bands=bands
    )
    datapath.send_msg(meter_mod)

# Create 10 Mbps meter
create_rate_limit_meter(datapath, meter_id=1, rate_kbps=10000)

# Apply meter to traffic from specific host
match = parser.OFPMatch(
    eth_type=0x0800,
    ipv4_src="192.168.1.100"
)

instructions = [
    parser.OFPInstructionMeter(1),  # Apply meter 1
    parser.OFPInstructionActions(
        ofproto.OFPIT_APPLY_ACTIONS,
        [parser.OFPActionOutput(2)]
    )
]
```

Multi-Band Meters
Meters can have multiple bands for hierarchical rate limiting:
```python
# Three-color marker style meter:
# - Up to 5 Mbps: Green (pass unchanged)
# - 5-10 Mbps: Yellow (remark to AF12)
# - Above 10 Mbps: Red (drop)

def create_three_color_meter(datapath, meter_id):
    ofproto = datapath.ofproto
    parser = datapath.ofproto_parser

    bands = [
        # Yellow band: 5-10 Mbps, remark DSCP
        parser.OFPMeterBandDscpRemark(
            rate=5000,        # 5 Mbps threshold
            burst_size=5000,
            prec_level=1      # Decrease DSCP by 1 precedence level
        ),
        # Red band: Above 10 Mbps, drop
        parser.OFPMeterBandDrop(
            rate=10000,       # 10 Mbps threshold
            burst_size=10000
        )
    ]

    meter_mod = parser.OFPMeterMod(
        datapath,
        command=ofproto.OFPMC_ADD,
        flags=ofproto.OFPMF_KBPS,
        meter_id=meter_id,
        bands=bands
    )
    datapath.send_msg(meter_mod)

# Traffic below 5 Mbps: Passes unchanged
# Traffic 5-10 Mbps: DSCP decremented (yellow)
# Traffic above 10 Mbps: Dropped (red)
```

Hardware meters may have accuracy limitations. Token bucket implementations introduce burst tolerance. Time granularity affects rate measurement precision. For critical rate limiting, verify behavior under load and consider combining with queue-based traffic shaping.
Let's bring together all match-action concepts in a complete, realistic example: a multi-tenant datacenter access switch implementing VLAN isolation, rate limiting, and monitoring.
Scenario: a 24-port access switch hosts two tenants. Tenant A uses ports 1-3 on VLAN 100 and Tenant B uses ports 4-6 on VLAN 200; port 24 is the uplink trunk and port 23 connects an IDS. Each tenant is rate limited to 100 Mbps, tenant ingress traffic is mirrored to the IDS, and anything that does not match a tenant rule is dropped.
```python
class MultiTenantSwitch:
    """
    Complete multi-tenant datacenter access switch policy.
    Implements: VLAN isolation, rate limiting, monitoring, security.
    """

    TENANT_A_PORTS = [1, 2, 3]
    TENANT_A_VLAN = 100
    TENANT_B_PORTS = [4, 5, 6]
    TENANT_B_VLAN = 200
    TRUNK_PORT = 24
    IDS_PORT = 23
    RATE_LIMIT_KBPS = 100000  # 100 Mbps

    def __init__(self, datapath):
        self.datapath = datapath
        self.ofproto = datapath.ofproto
        self.parser = datapath.ofproto_parser

    def install_policy(self):
        """Install complete tenant isolation policy."""
        # Step 1: Create rate limiting meters
        self._create_meters()

        # Step 2: Create monitoring group (all traffic to IDS)
        self._create_monitoring_group()

        # Step 3: Install tenant access rules
        self._install_tenant_a_rules()
        self._install_tenant_b_rules()

        # Step 4: Install trunk rules
        self._install_trunk_rules()

        # Step 5: Install security catch-all (drop inter-tenant)
        self._install_security_rules()

    def _create_meters(self):
        """Create per-tenant rate limiting meters."""
        for meter_id in [100, 200]:  # Meter IDs match VLAN IDs
            bands = [self.parser.OFPMeterBandDrop(
                rate=self.RATE_LIMIT_KBPS,
                burst_size=self.RATE_LIMIT_KBPS
            )]
            self.datapath.send_msg(self.parser.OFPMeterMod(
                self.datapath,
                self.ofproto.OFPMC_ADD,
                self.ofproto.OFPMF_KBPS,
                meter_id,
                bands
            ))

    def _create_monitoring_group(self):
        """ALL group for traffic mirroring to IDS."""
        buckets = [
            self.parser.OFPBucket(actions=[
                self.parser.OFPActionOutput(self.IDS_PORT)
            ]),
            self.parser.OFPBucket(actions=[
                self.parser.OFPActionOutput(self.TRUNK_PORT)
            ])
        ]
        self.datapath.send_msg(self.parser.OFPGroupMod(
            self.datapath,
            self.ofproto.OFPGC_ADD,
            self.ofproto.OFPGT_ALL,
            1,
            buckets
        ))

    def _install_tenant_a_rules(self):
        """Install rules for Tenant A access ports."""
        for port in self.TENANT_A_PORTS:
            # Ingress: Tag untagged, apply meter, forward via group
            match = self.parser.OFPMatch(
                in_port=port,
                vlan_vid=0x0000  # Untagged
            )
            instructions = [
                self.parser.OFPInstructionMeter(100),  # Rate limit
                self.parser.OFPInstructionActions(
                    self.ofproto.OFPIT_APPLY_ACTIONS,
                    [
                        self.parser.OFPActionPushVlan(0x8100),
                        self.parser.OFPActionSetField(vlan_vid=0x1064),  # VLAN 100
                        self.parser.OFPActionGroup(1)  # Monitor + forward
                    ]
                )
            ]
            self._install_flow(match, instructions, priority=100)

            # Egress: Strip VLAN for access port delivery
            match_egress = self.parser.OFPMatch(
                in_port=self.TRUNK_PORT,
                vlan_vid=0x1064,  # VLAN 100
                eth_dst=self._get_port_mac(port)  # Destination on this port
            )
            actions_egress = [
                self.parser.OFPActionPopVlan(),
                self.parser.OFPActionOutput(port)
            ]
            instructions_egress = [
                self.parser.OFPInstructionActions(
                    self.ofproto.OFPIT_APPLY_ACTIONS,
                    actions_egress
                )
            ]
            self._install_flow(match_egress, instructions_egress, priority=100)

    def _install_tenant_b_rules(self):
        """Install rules for Tenant B access ports."""
        for port in self.TENANT_B_PORTS:
            match = self.parser.OFPMatch(
                in_port=port,
                vlan_vid=0x0000
            )
            instructions = [
                self.parser.OFPInstructionMeter(200),  # Rate limit
                self.parser.OFPInstructionActions(
                    self.ofproto.OFPIT_APPLY_ACTIONS,
                    [
                        self.parser.OFPActionPushVlan(0x8100),
                        self.parser.OFPActionSetField(vlan_vid=0x10C8),  # VLAN 200
                        self.parser.OFPActionGroup(1)  # Monitor + forward
                    ]
                )
            ]
            self._install_flow(match, instructions, priority=100)

    def _install_trunk_rules(self):
        """Install trunk port rules - allow tagged traffic."""
        for vlan in [0x1064, 0x10C8]:  # VLAN 100, 200
            match = self.parser.OFPMatch(
                in_port=self.TRUNK_PORT,
                vlan_vid=vlan
            )
            instructions = [
                self.parser.OFPInstructionActions(
                    self.ofproto.OFPIT_APPLY_ACTIONS,
                    [self.parser.OFPActionOutput(self.ofproto.OFPP_NORMAL)]
                )
            ]
            self._install_flow(match, instructions, priority=90)

    def _install_security_rules(self):
        """Catch-all: Drop any traffic that bypasses tenant rules."""
        # This catches any cross-tenant traffic attempts
        match = self.parser.OFPMatch()  # Wildcard
        instructions = []  # No instructions = drop
        self._install_flow(match, instructions, priority=0)

    def _install_flow(self, match, instructions, priority):
        """Helper to install a flow entry."""
        mod = self.parser.OFPFlowMod(
            datapath=self.datapath,
            priority=priority,
            match=match,
            instructions=instructions
        )
        self.datapath.send_msg(mod)

    def _get_port_mac(self, port):
        """Return MAC for port (simplified - would query actual MACs)."""
        return f"00:00:00:00:00:{port:02x}"
```

Match-action rules are the fundamental building blocks of OpenFlow policies. By mastering actions, instructions, and composition patterns, you can express virtually any network behavior.
What's Next:
With flow tables and match-action rules understood, we'll explore controller communication—how the SDN controller and OpenFlow switches interact to implement reactive and proactive forwarding. You'll learn about asynchronous message handling, flow installation strategies, and controller scalability patterns.
You now possess comprehensive knowledge of OpenFlow match-action rules—the complete action vocabulary, instruction semantics, and composition patterns. This knowledge enables you to express any network policy in OpenFlow. Next, we examine controller-switch communication patterns.