Understanding individual VNFs is essential, but VNFs don't operate in isolation. They require a sophisticated infrastructure to provide compute, storage, and networking resources, along with management and orchestration systems to deploy, monitor, and maintain them. The ETSI NFV Architectural Framework defines these components and their interactions, providing a standardized blueprint for NFV deployments.
This architecture is not merely theoretical—it's the foundation upon which major telecommunications operators and cloud providers have built production NFV infrastructure serving millions of users. Understanding this architecture is essential for anyone designing, deploying, or operating NFV systems.
By the end of this page, you will understand the complete ETSI NFV architectural framework, including NFVI (NFV Infrastructure) and its layered structure, MANO (Management and Orchestration) and its three key components (NFVO, VNFM, VIM), reference points that define interfaces between components, and how these elements interact to enable VNF deployment and operation.
The ETSI NFV architecture defines a comprehensive framework for virtualizing network functions. At the highest level, it consists of three main domains:
1. Virtual Network Functions (VNFs): The software implementations of network functions that we explored in the previous page.
2. NFV Infrastructure (NFVI): The physical and virtual resources that host VNFs—compute, storage, and networking.
3. NFV Management and Orchestration (MANO): The systems that manage VNF lifecycle and orchestrate resources.
These three domains interact through well-defined interfaces called reference points, enabling multi-vendor interoperability and modular system design.
The Reference Architecture Elements:
Let's understand each major element before diving into details:
OSS/BSS (Operations Support Systems / Business Support Systems): The operator's existing service management and business systems, which request network services from the NFV stack and consume fault, performance, and billing data from it.
NFV Orchestrator (NFVO): The top-level MANO component that orchestrates end-to-end network services and coordinates resources across one or more VIMs.
VNF Manager (VNFM): The MANO component responsible for the lifecycle of individual VNF instances—instantiation, scaling, healing, and termination.
Virtualized Infrastructure Manager (VIM): The MANO component that controls and manages NFVI compute, storage, and network resources, typically within one NFVI PoP.
NFVI (NFV Infrastructure): The hardware and virtualization layers that provide the compute, storage, and network resources on which VNFs run.
The ETSI NFV framework is specified across multiple document groups: GS NFV (base specifications), GS NFV-IFA (interface and architecture), GS NFV-SOL (solutions and protocols), and GS NFV-SEC (security). Production implementations must navigate these specifications carefully, as real-world requirements often require combining elements from multiple specification groups.
The NFV Infrastructure is the foundation upon which all VNFs run. It encompasses all hardware and software components that create the environment for deploying and executing VNFs. NFVI is conceptually divided into three layers:
Layer 1: Hardware Resources
The physical infrastructure providing raw computing power:
Layer 2: Virtualization Layer
The software layer that abstracts physical resources into virtual resources:
Layer 3: Virtual Resources
The resources presented to VNFs:
| Layer | Components | Responsibilities | Key Technologies |
|---|---|---|---|
| Hardware Resources | Servers, storage, network equipment | Provide raw compute, storage, network capacity | x86 servers, NVMe SSDs, 100GbE NICs |
| Virtualization Layer | Hypervisor, vSwitch, virtual storage | Abstract hardware into virtual resources | KVM, OVS-DPDK, Ceph |
| Virtual Resources | VMs, containers, virtual NICs | Provide execution environment for VNFs | libvirt, Kubernetes pods, VirtIO |
NFVI PoP (Point of Presence):
NFVI resources are organized into Points of Presence (PoPs)—physical locations housing NFVI hardware. An operator might have:
Each PoP contains:
NFVI Compute Node Architecture:
```yaml
# Example NFVI Compute Node Specification
# Typical configuration for carrier-grade NFV deployment

compute_node_spec:
  hardware:
    # Server Platform
    platform: "Dell PowerEdge R750"   # Or equivalent: HP DL380, Lenovo SR650
    processors:
      count: 2
      model: "Intel Xeon Gold 6348 (Ice Lake)"
      cores_per_socket: 28
      threads_per_core: 2
      base_frequency: "2.6 GHz"
      turbo_frequency: "3.5 GHz"
      features:
        - AVX-512       # Vector processing for crypto
        - AES-NI        # Hardware crypto acceleration
        - Intel VT-x    # Virtualization extensions
        - Intel VT-d    # I/O virtualization (SR-IOV)
    memory:
      total: "512 GB"
      type: "DDR4-3200 ECC RDIMM"
      channels: 16        # 8 per socket
      numa_nodes: 2
      huge_pages:
        enabled: true
        size: "1GB"
        count: 480        # Reserve for VNFs
    storage:
      boot_drives:
        - type: "M.2 NVMe SSD"
          capacity: "480 GB"
          raid: "RAID-1"
      vnf_storage:
        - type: "NVMe SSD"
          capacity: "3.84 TB"
          count: 4
          raid: "RAID-10"
    network:
      management:
        - type: "1GbE"
          count: 2
          purpose: "IPMI, management"
      data:
        - type: "100GbE"
          model: "Intel E810"
          count: 2
          features:
            - "SR-IOV (128 VFs per port)"
            - "DPDK support"
            - "Hardware timestamping"

  software:
    os: "Ubuntu 22.04 LTS"
    kernel: "5.15.0-generic"    # Or RHEL 8.x / CentOS Stream
    hypervisor:
      type: "KVM/QEMU"
      version: "QEMU 6.2"
      libvirt: "8.0"
    acceleration:
      dpdk:
        enabled: true
        version: "21.11 LTS"
        driver: "vfio-pci"
      ovs_dpdk:
        enabled: true
        version: "2.17"
        pmd_cores: [2, 3, 4, 5]   # Poll Mode Driver cores
      sr_iov:
        enabled: true
        vf_count: 64              # Per physical function
    tuning:
      cpu_governor: "performance"
      numa_balancing: "disabled"        # Manual NUMA placement
      transparent_hugepages: "never"    # Explicit huge pages only
      isolcpus: "4-55"                  # Isolate cores for VNF use
      nohz_full: "4-55"                 # Disable timer ticks on VNF cores
```
Default OS configurations are optimized for general-purpose computing, not NFV workloads. Proper NFVI tuning—CPU isolation, NUMA awareness, huge pages, DPDK—can improve VNF performance by 10-100x. Carrier-grade NFV deployments require systematic performance engineering at the NFVI layer.
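The tuning parameters above can be checked on a running node. Below is a minimal verification sketch, assuming standard Linux procfs/sysfs paths; it spot-checks the settings from the specification (1 GB huge pages reserved, isolated cores, transparent huge pages disabled, performance governor):

```python
# Verify a compute node against the NFV tuning described above.
# Paths are standard Linux procfs/sysfs locations; values mirror the example spec.
from pathlib import Path

def read(path: str) -> str:
    p = Path(path)
    return p.read_text().strip() if p.exists() else ""

def check_nfvi_tuning() -> dict:
    cmdline = read("/proc/cmdline")
    nr_1g = read("/sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages")
    return {
        # 1 GB huge pages reserved for VNF memory backing
        "hugepages_1g_reserved": nr_1g.isdigit() and int(nr_1g) > 0,
        # Cores isolated from the general scheduler and freed of timer ticks
        "isolcpus_set": "isolcpus=" in cmdline,
        "nohz_full_set": "nohz_full=" in cmdline,
        # Transparent huge pages disabled (explicit huge pages only)
        "thp_disabled": "[never]" in read("/sys/kernel/mm/transparent_hugepage/enabled"),
        # Performance governor on CPU 0 (spot check)
        "performance_governor": read(
            "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor") == "performance",
    }

if __name__ == "__main__":
    for check, ok in check_nfvi_tuning().items():
        print(f"{check}: {'OK' if ok else 'MISSING'}")
```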
The Virtualized Infrastructure Manager (VIM) is responsible for controlling and managing the NFVI compute, storage, and networking resources within an NFVI PoP. It's the interface between the upper management layers and the actual virtual infrastructure.
VIM Responsibilities:
OpenStack as the Dominant VIM:
While the ETSI framework is VIM-agnostic, OpenStack has become the de facto standard VIM for NFV deployments. Its modular architecture aligns well with NFV requirements:
| Component | NFV Role | Key Features for NFV |
|---|---|---|
| Nova | Compute management | CPU pinning, NUMA topology, huge pages, PCI passthrough |
| Neutron | Network management | OVS-DPDK, SR-IOV, multiple provider networks, trunk ports |
| Cinder | Block storage | Volume types, encryption, high-performance backends |
| Glance | Image management | VNF image storage, image properties, multi-format support |
| Keystone | Identity/authentication | Service accounts, project isolation |
| Heat | Template orchestration | HOT templates for VNF deployment |
| Tacker | NFV orchestration | VNFD/NSD processing, VNF lifecycle (optional VNFM/NFVO) |
| Barbican | Secrets management | Certificate storage, key management |
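As an example of how the Nova features in the table are consumed, here is a hedged sketch that creates an NFV-optimized flavor with openstacksdk. The flavor name, sizes, and cloud entry are assumptions; the extra-specs helper is available in recent openstacksdk releases, and older environments typically set the same properties with `openstack flavor set`.

```python
# Sketch: creating an NFV-optimized Nova flavor via openstacksdk.
from openstack import connection

conn = connection.Connection(cloud="nfv-region-01")  # Assumes a clouds.yaml entry

flavor = conn.compute.create_flavor(
    name="vnf.large.dpdk",
    vcpus=8,
    ram=32768,   # MB
    disk=40,     # GB
)

# Nova extra specs enabling the features listed in the table above
conn.compute.create_flavor_extra_specs(flavor, {
    "hw:cpu_policy": "dedicated",        # Pin vCPUs to host cores
    "hw:cpu_thread_policy": "isolate",   # Avoid sharing SMT siblings
    "hw:mem_page_size": "1GB",           # Back guest RAM with 1 GB huge pages
    "hw:numa_nodes": "1",                # Keep the VM within one NUMA node
})
```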
VIM-VNF Interaction:
The VIM provides resources to VNFs but typically doesn't interact with VNF application logic directly. The interaction model:
Multi-VIM Deployments:
Large operators often deploy multiple VIMs:
The NFVO coordinates across multiple VIMs, presenting a unified view to upper layers while managing the complexity of multi-VIM resource allocation.
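To make multi-VIM resource orchestration concrete, the following toy model (all names hypothetical, not an ETSI-defined interface) shows the kind of registry and capacity-based selection an NFVO applies behind its unified view:

```python
# Illustrative only: an NFVO-style VIM registry with a simple placement policy.
from dataclasses import dataclass

@dataclass
class VimRecord:
    name: str
    region: str
    free_vcpus: int
    free_ram_gb: int

def select_vim(vims: list[VimRecord], vcpus: int, ram_gb: int,
               region: str | None = None) -> VimRecord:
    # Keep only VIMs that can satisfy the request (and the region constraint, if any)
    candidates = [
        v for v in vims
        if v.free_vcpus >= vcpus and v.free_ram_gb >= ram_gb
        and (region is None or v.region == region)
    ]
    if not candidates:
        raise RuntimeError("No VIM can satisfy the requested resources")
    # Simple policy: prefer the VIM with the most headroom
    return max(candidates, key=lambda v: (v.free_vcpus, v.free_ram_gb))

vims = [
    VimRecord("openstack-core-east", "us-east", free_vcpus=512, free_ram_gb=2048),
    VimRecord("openstack-edge-07", "us-east", free_vcpus=32, free_ram_gb=128),
]
print(select_vim(vims, vcpus=16, ram_gb=64, region="us-east").name)
```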
```python
# Conceptual VIM interaction for VNF deployment
# Shows the sequence of VIM API calls during VNF instantiation

from openstack import connection
import time

def deploy_vnf_via_vim(vnf_spec):
    """
    Deploy a VNF by interacting with VIM (OpenStack) APIs
    This represents the VNFM's interaction with VIM
    """
    # Connect to VIM (OpenStack)
    conn = connection.Connection(
        auth_url="https://vim.example.com:5000/v3",
        project_name="nfv-project",
        username="vnfm-service",
        password="<service-password>",
        region_name="region-01"
    )

    # Step 1: Verify/upload VNF image (Glance)
    print("Step 1: Checking VNF image availability...")
    image = conn.image.find_image(vnf_spec['image_name'])
    if not image:
        print(f"  Uploading image: {vnf_spec['image_name']}")
        image = conn.image.create_image(
            name=vnf_spec['image_name'],
            disk_format='qcow2',
            container_format='bare',
            data=open(vnf_spec['image_path'], 'rb')
        )

    # Step 2: Create networks (Neutron)
    print("Step 2: Creating virtual networks...")
    networks = {}
    for net_spec in vnf_spec['networks']:
        network = conn.network.create_network(
            name=net_spec['name'],
            provider_network_type=net_spec.get('type', 'vxlan'),
            provider_segmentation_id=net_spec.get('segment_id')
        )
        subnet = conn.network.create_subnet(
            name=f"{net_spec['name']}-subnet",
            network_id=network.id,
            cidr=net_spec['cidr'],
            ip_version=4,
            enable_dhcp=True
        )
        networks[net_spec['name']] = {'network': network, 'subnet': subnet}

    # Step 3: Create security groups (Neutron)
    print("Step 3: Creating security groups...")
    sg = conn.network.create_security_group(
        name=f"{vnf_spec['name']}-sg",
        description="VNF security group"
    )
    for rule in vnf_spec['security_rules']:
        conn.network.create_security_group_rule(
            security_group_id=sg.id,
            direction=rule['direction'],
            protocol=rule['protocol'],
            port_range_min=rule.get('port_min'),
            port_range_max=rule.get('port_max'),
            remote_ip_prefix=rule.get('source', '0.0.0.0/0')
        )

    # Step 4: Create ports with required features (Neutron)
    print("Step 4: Creating network ports...")
    ports = []
    for port_spec in vnf_spec['ports']:
        port_config = {
            'name': port_spec['name'],
            'network_id': networks[port_spec['network']]['network'].id,
            'security_groups': [sg.id] if port_spec.get('security') else []
        }
        # SR-IOV configuration for data plane ports
        if port_spec.get('sriov'):
            port_config['binding:vnic_type'] = 'direct'
        port = conn.network.create_port(**port_config)
        ports.append(port)

    # Step 5: Create server with NFV-optimized configuration (Nova)
    print("Step 5: Creating VNF instance...")
    server = conn.compute.create_server(
        name=vnf_spec['name'],
        image_id=image.id,
        flavor_id=vnf_spec['flavor_id'],  # NFV-optimized flavor
        networks=[{'port': p.id} for p in ports],
        availability_zone=vnf_spec.get('availability_zone', 'nova'),
        user_data=vnf_spec.get('cloud_init'),
        # Scheduler hints for HA
        scheduler_hints={
            'different_host': vnf_spec.get('anti_affinity_group', [])
        }
    )

    # Step 6: Wait for VNF to become active
    print("Step 6: Waiting for VNF to become active...")
    timeout = 300  # 5 minutes
    start_time = time.time()
    while time.time() - start_time < timeout:
        server = conn.compute.get_server(server.id)
        if server.status == 'ACTIVE':
            print(f"  VNF {vnf_spec['name']} is ACTIVE")
            break
        elif server.status == 'ERROR':
            raise Exception(f"VNF deployment failed: {server.fault}")
        time.sleep(5)
    else:
        raise Exception("VNF deployment timed out")

    # Return deployment result
    return {
        'vnf_id': server.id,
        'status': server.status,
        'addresses': server.addresses,
        'ports': [p.id for p in ports],
        'networks': [n['network'].id for n in networks.values()]
    }
```
For cloud-native VNFs (CNFs), Kubernetes increasingly serves as the VIM. Kubernetes native resource management (pods, services, volumes) replaces OpenStack APIs, while Container Network Interface (CNI) plugins provide networking. Many operators now run hybrid environments: OpenStack for VM-based VNFs and Kubernetes for container-based CNFs, orchestrated by a unified NFVO.
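For the Kubernetes-as-VIM case, a hedged sketch using the official Kubernetes Python client follows; the namespace, image, and resource sizes are assumptions, and the guaranteed-QoS resource settings stand in for the CPU-pinning configuration a data-plane CNF typically needs:

```python
# Sketch: deploying a containerized network function (CNF) through the
# Kubernetes API acting as the VIM. Names and sizes are illustrative.
from kubernetes import client, config

def deploy_cnf(name: str, image: str, replicas: int = 2) -> None:
    config.load_kube_config()  # Or load_incluster_config() inside the cluster
    apps = client.AppsV1Api()

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name=name, labels={"app": name}),
        spec=client.V1DeploymentSpec(
            replicas=replicas,
            selector=client.V1LabelSelector(match_labels={"app": name}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": name}),
                spec=client.V1PodSpec(
                    containers=[client.V1Container(
                        name=name,
                        image=image,
                        resources=client.V1ResourceRequirements(
                            # Guaranteed QoS (requests == limits), a common
                            # prerequisite for CPU pinning of data-plane pods
                            requests={"cpu": "4", "memory": "8Gi"},
                            limits={"cpu": "4", "memory": "8Gi"},
                        ),
                    )]
                ),
            ),
        ),
    )
    # Assumes the "cnf-workloads" namespace already exists
    apps.create_namespaced_deployment(namespace="cnf-workloads", body=deployment)

# deploy_cnf("upf", "registry.example.com/upf:1.0")
```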
The VNF Manager (VNFM) is responsible for the lifecycle management of VNF instances. It's the component that translates VNF deployment requests into actual infrastructure actions, manages VNF configuration, and ensures VNFs remain healthy during operation.
VNFM Responsibilities:
Generic vs. Specific VNFM:
The ETSI architecture supports two VNFM approaches:
Generic VNFM:
Specific VNFM:
Practical Reality:
Most production environments use a hybrid approach:
VNFM Implementations:
Several VNFM implementations are available:
VNFM-VNF Communication:
The VNFM communicates with VNFs through several mechanisms:
In large deployments with thousands of VNF instances, VNFM becomes a potential bottleneck. A single VNFM managing thousands of VNFs must handle concurrent lifecycle operations, health monitoring for all instances, and configuration updates. Production deployments require VNFM clustering, careful rate limiting, and efficient database backends.
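One way to bound that load is to cap concurrency on monitoring sweeps. The sketch below (hypothetical health endpoints; aiohttp assumed as the HTTP client) shows the semaphore-based rate limiting pattern applied to VNF health checks:

```python
# Rate-limited health sweep across many VNF instances.
import asyncio
import aiohttp

MAX_CONCURRENT_CHECKS = 50  # Tune to what the VNFs and network can absorb

async def check_vnf(session: aiohttp.ClientSession, sem: asyncio.Semaphore,
                    vnf_id: str, url: str) -> tuple[str, bool]:
    async with sem:  # At most MAX_CONCURRENT_CHECKS requests in flight
        try:
            async with session.get(url, timeout=aiohttp.ClientTimeout(total=5)) as resp:
                return vnf_id, resp.status == 200
        except (aiohttp.ClientError, asyncio.TimeoutError):
            return vnf_id, False

async def health_sweep(vnfs: dict[str, str]) -> dict[str, bool]:
    sem = asyncio.Semaphore(MAX_CONCURRENT_CHECKS)
    async with aiohttp.ClientSession() as session:
        results = await asyncio.gather(
            *(check_vnf(session, sem, vnf_id, url) for vnf_id, url in vnfs.items())
        )
    return dict(results)

# Example: asyncio.run(health_sweep({"vnf-001": "http://10.0.0.11:8080/health"}))
```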
The NFV Orchestrator (NFVO) sits at the top of the MANO architecture, providing two critical functions: network service orchestration (lifecycle management of end-to-end services composed of multiple VNFs) and resource orchestration (coordinating NFVI resources across one or more VIMs).
Network Service Concept:
A Network Service (NS) is an end-to-end service composed of:
For example, a mobile core network service might include a vEPC, vIMS, and vSBC connected via virtual links, with connections to the radio access network.
NFVO Functions:
Network Service Lifecycle Management:
Resource Orchestration:
Catalog Management:
```yaml
# Example Network Service Descriptor (NSD)
# Defines a mobile core network service composed of multiple VNFs

tosca_definitions_version: tosca_simple_yaml_1_3

description: 4G Mobile Core Network Service

metadata:
  template_name: mobile-core-ns
  template_version: "1.0.0"
  template_author: NetworkOperator

topology_template:
  node_templates:
    # VNF 1: Virtualized Evolved Packet Core
    vEPC:
      type: tosca.nodes.nfv.VNF
      properties:
        descriptor_id: "vEPC-v2.1"
        provider: "Vendor-A"
        product_name: "Virtual EPC"
        software_version: "2.1.0"
        flavour_id: "large"
      requirements:
        - virtual_link: internal_network

    # VNF 2: Virtualized IMS Core
    vIMS:
      type: tosca.nodes.nfv.VNF
      properties:
        descriptor_id: "vIMS-v3.0"
        provider: "Vendor-B"
        product_name: "Cloud IMS"
        software_version: "3.0.2"
        flavour_id: "medium"
      requirements:
        - virtual_link: internal_network
        - virtual_link: sip_network

    # VNF 3: Session Border Controller
    vSBC:
      type: tosca.nodes.nfv.VNF
      properties:
        descriptor_id: "vSBC-v1.5"
        provider: "Vendor-C"
        product_name: "Virtual SBC"
        software_version: "1.5.1"
        flavour_id: "medium"
      requirements:
        - virtual_link: sip_network
        - virtual_link: external_network

    # Internal VL: Connects EPC and IMS
    internal_network:
      type: tosca.nodes.nfv.NsVirtualLink
      properties:
        connectivity_type:
          layer_protocols: [ipv4]
      capabilities:
        virtual_linkable:
          properties:
            bitrate_requirement: 10 Gbps

    # SIP VL: IMS to SBC communication
    sip_network:
      type: tosca.nodes.nfv.NsVirtualLink
      properties:
        connectivity_type:
          layer_protocols: [ipv4]
      capabilities:
        virtual_linkable:
          properties:
            bitrate_requirement: 1 Gbps

    # External VL: Connection to external SIP peers
    external_network:
      type: tosca.nodes.nfv.NsVirtualLink
      properties:
        connectivity_type:
          layer_protocols: [ipv4]
      requirements:
        - dependency: ext_connection_point

  # Policies for NS-level behavior
  policies:
    - ns_scaling:
        type: tosca.policies.nfv.NsScalingAspects
        properties:
          aspects:
            capacity_scaling:
              name: "Core capacity scaling"
              description: "Scale EPC and IMS together"
              scaling_level: 3
              vnf_to_level_mapping:
                vEPC: [1, 2, 3]
                vIMS: [1, 2, 3]
    - deployment_location:
        type: tosca.policies.Placement
        targets: [vEPC, vIMS, vSBC]
        properties:
          region: "us-east"
          availability_zone: "zone-1"

  # Connection to external networks
  substitution_mappings:
    node_type: tosca.nodes.nfv.NS
    requirements:
      ran_connection: [vEPC, s1_port]
      internet_connection: [vSBC, external_port]
```
Service Function Chaining (SFC):
A critical NFVO capability is Service Function Chaining—the ability to define traffic paths through a sequence of VNFs:
Traffic → Firewall → DPI → NAT → Load Balancer → Destination
SFC requires:
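As a toy illustration of the chain above, the following sketch (all names hypothetical) models an SFC as an ordered list of hops and derives the port-to-port links an SDN controller would have to stitch:

```python
# Illustrative only: an SFC as an ordered list of VNF forwarding hops.
from dataclasses import dataclass

@dataclass
class ChainHop:
    vnf_name: str
    ingress_port: str   # Port receiving traffic
    egress_port: str    # Port sending traffic onward

service_chain = [
    ChainHop("firewall-01", "fw-in", "fw-out"),
    ChainHop("dpi-01", "dpi-in", "dpi-out"),
    ChainHop("nat-01", "nat-in", "nat-out"),
    ChainHop("lb-01", "lb-in", "lb-out"),
]

def forwarding_path(chain: list[ChainHop]) -> list[tuple[str, str]]:
    """Return the (egress, next ingress) pairs the controller must stitch."""
    return [
        (chain[i].egress_port, chain[i + 1].ingress_port)
        for i in range(len(chain) - 1)
    ]

for egress, ingress in forwarding_path(service_chain):
    print(f"steer {egress} -> {ingress}")
```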
NFVO Implementations:
The NFVO integrates with OSS/BSS, multiple VNFMs, and multiple VIMs—making it one of the most integration-intensive components in the NFV architecture. Real-world deployments often spend more time on integration than on configuration, as each interface (Or-Vnfm, Or-Vi, Os-Ma-Nfvo) has variations between implementations despite theoretical standardization.
The ETSI NFV architecture defines reference points—standardized interfaces between architectural components. These reference points enable multi-vendor interoperability, allowing operators to mix components from different vendors.
Major Reference Points:
| Reference Point | Connects | Primary Functions | Typical Protocol |
|---|---|---|---|
| Os-Ma-Nfvo | OSS/BSS ↔ NFVO | NS lifecycle requests, resource info | REST (SOL005) |
| Or-Vnfm | NFVO ↔ VNFM | VNF lifecycle coordination | REST (SOL003) |
| Or-Vi | NFVO ↔ VIM | Resource orchestration across VIMs | VIM APIs (e.g., OpenStack REST) |
| Vi-Vnfm | VIM ↔ VNFM | VNF resource requests | OpenStack API |
| Ve-Vnfm-em | VNFM ↔ VNF (EM) | VNF configuration and monitoring | NETCONF, REST |
| Ve-Vnfm-vnf | VNFM ↔ VNF | Direct VNF interface (indicators) | VNF-specific |
| Nf-Vi | NFVI ↔ VIM | NFVI resource management and monitoring | Hypervisor/controller APIs |
| Vn-Nf | VNF ↔ NFVI | VNF execution environment (virtual compute, storage, network) | VirtIO, SR-IOV |
ETSI SOL Specifications:
The practical implementation of reference points is defined in ETSI SOL (Solutions) specifications:
SOL001: VNFD and NSD based on TOSCA
SOL002: Ve-Vnfm Reference Point Interface
SOL003: Or-Vnfm Reference Point Interface
SOL004: VNF Package Specification
SOL005: Os-Ma-Nfvo Reference Point Interface
SOL007: Network Service Descriptor (NSD) file structure
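To illustrate what a reference point looks like on the wire, here is a simplified sketch of a SOL003-style exchange on Or-Vnfm in which the NFVO creates and then instantiates a VNF instance. URLs, identifiers, and payload fields are assumptions and vary by SOL003 version and VNFM implementation:

```python
# Sketch of SOL003-style VNF lifecycle management calls (Or-Vnfm).
import requests

VNFM = "https://vnfm.example.com"
TOKEN = "<oauth2-token>"  # SOL003 APIs are typically secured with OAuth 2.0
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

# Step 1: Create the VNF instance resource
create_resp = requests.post(
    f"{VNFM}/vnflcm/v1/vnf_instances",
    headers=HEADERS,
    json={
        "vnfdId": "vEPC-v2.1",
        "vnfInstanceName": "vepc-east-01",
        "vnfInstanceDescription": "EPC for mobile-core-ns",
    },
    timeout=30,
)
create_resp.raise_for_status()
vnf_instance_id = create_resp.json()["id"]

# Step 2: Trigger instantiation; progress is reported asynchronously through
# a VNF LCM operation occurrence and notifications the NFVO subscribes to.
inst_resp = requests.post(
    f"{VNFM}/vnflcm/v1/vnf_instances/{vnf_instance_id}/instantiate",
    headers=HEADERS,
    json={"flavourId": "large"},
    timeout=30,
)
inst_resp.raise_for_status()
```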
While ETSI specifications aim for interoperability, practical experience shows that vendor implementations often have subtle incompatibilities. Extensions, optional features, and interpretation differences mean that component substitution rarely works 'out of the box.' Thorough integration testing is essential when combining components from different vendors.
Understanding the architectural framework is essential, but real deployments require adapting the reference architecture to specific operational requirements. Several deployment patterns have emerged:
Pattern 1: Centralized MANO
All MANO components (NFVO, VNFM, VIM) deployed centrally, managing distributed NFVI:
Pattern 2: Distributed MANO
MANO components distributed across locations with hierarchy:
- Central NFVO with global view
- Regional/PoP VNFMs and VIMs
- Local autonomy with central coordination

Advantages: Resilience, reduced WAN dependency, scalability
Challenges: Consistency, synchronization complexity
Use Case: Large operators with multiple PoPs
Pattern 3: Hybrid Cloud NFV
NFV extended to public cloud:
- Private NFVI for sensitive VNFs
- Public cloud for burst capacity or specific workloads
- Unified orchestration across environments

Advantages: Flexibility, burst capacity, reduced CapEx
Challenges: Security, connectivity, consistent performance
Use Case: Operators exploring cloud strategies
High Availability Considerations:
Production NFV deployments require HA at multiple levels:
MANO HA:
NFVI HA:
VNF HA:
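One widely used VNF-level HA mechanism is anti-affinity placement of redundant instances. Below is a hedged openstacksdk sketch; the image, flavor, and network IDs are placeholders, and the server-group policy argument differs between Nova API microversions:

```python
# Sketch: anti-affinity placement of active/standby VNF instances via Nova.
from openstack import connection

conn = connection.Connection(cloud="nfv-region-01")  # Assumes a clouds.yaml entry

# Create the anti-affinity group once per redundant VNF pair/cluster
group = conn.compute.create_server_group(
    name="vepc-ha-group",
    policy="anti-affinity",   # Newer microversions; older APIs use policies=[...]
)

# Schedule each member with the group hint so Nova places the active and
# standby units on different compute nodes.
for role in ("active", "standby"):
    conn.compute.create_server(
        name=f"vepc-{role}",
        image_id="<vnf-image-id>",
        flavor_id="<nfv-flavor-id>",
        networks=[{"uuid": "<mgmt-net-id>"}],
        scheduler_hints={"group": group.id},
    )
```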
Disaster Recovery:
Many organizations start with centralized MANO and evolve to distributed architectures as they gain experience. The complexity of distributed MANO—synchronization, conflict resolution, partial failure handling—is substantial. Master centralized operations before introducing distribution, and introduce distribution incrementally based on demonstrated needs.
We've comprehensively explored the ETSI NFV architectural framework. Let's consolidate the key concepts:
What's Next:
With the NFV architecture understood, we'll explore the relationship between NFV and SDN—two distinct but complementary technologies that are often deployed together. Understanding how SDN and NFV interact is crucial for designing modern, software-defined network infrastructure.
You now understand the complete ETSI NFV architectural framework—NFVI, MANO components, reference points, and deployment patterns. This knowledge prepares you to design NFV infrastructure, evaluate MANO solutions, and understand how the pieces fit together in production deployments.