Serverless computing changed how we deploy applications—no servers to manage, automatic scaling, pay-per-execution pricing. But serverless had a geographic limitation: functions ran in cloud regions, which meant latency for globally distributed users.
Edge functions extend the serverless model to the network edge. Instead of code executing in a handful of data center regions, your functions run in hundreds of locations worldwide—often within 50 kilometers of most users. The same developer experience (write code, push, done) now delivers sub-50ms response times globally.
This page explores the leading edge function platforms, their underlying architectures, and the patterns for building effective edge-native applications. We'll go deep on Cloudflare Workers, AWS Lambda@Edge, and emerging alternatives—understanding not just how to use them, but why they work the way they do.
By the end of this page, you will understand the technical architecture of major edge function platforms, their execution models and isolation mechanisms, platform-specific constraints and capabilities, comparison criteria for platform selection, and practical patterns for edge function development. You'll be equipped to choose the right platform for your use case and implement edge functions effectively.
Edge functions are serverless compute units that execute at network edge locations, responding to events (HTTP requests, WebSocket connections, scheduled triggers) with ultra-low latency. They combine the operational simplicity of serverless with the performance benefits of geographic distribution.
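The programming model shared by most of these platforms can be sketched in a few lines. This is a minimal illustration of the Service Worker-style fetch handler used by Cloudflare Workers, Vercel Edge, and Deno Deploy; the route and header names are invented for the example, and it runs as plain JavaScript in Node 18+ (which provides the same `Request`/`Response` globals).

```javascript
// Minimal edge function sketch: a fetch handler receives a standard
// Request and returns a standard Response. No server, no framework.
function handleFetch(request) {
  const url = new URL(request.url);

  // Health checks can be answered entirely at the edge
  if (url.pathname === '/health') {
    return new Response('ok', { status: 200 });
  }

  // Echo back the requested path, tagged with an illustrative header
  return new Response(`You requested ${url.pathname}`, {
    status: 200,
    headers: { 'X-Handled-At': 'edge' },
  });
}

// On a real platform this would be registered roughly as:
//   export default { fetch: handleFetch };
```

The same handler, deployed unchanged, runs in every PoP of the provider's network; the platform routes each user to the nearest one.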
Defining Characteristics of Edge Functions:

- Geographic distribution: code runs in hundreds of edge PoPs rather than a handful of cloud regions
- Event-driven execution: triggered by HTTP requests, WebSocket connections, or scheduled events
- Serverless operations: no servers to manage, automatic scaling, pay-per-execution pricing
- Constrained runtimes: tight CPU, memory, and execution-time limits in exchange for near-instant cold starts
Edge Functions vs. Regional Serverless:
Understanding the trade-offs between edge functions and traditional regional serverless (like standard AWS Lambda or Cloud Functions) is essential for architectural decisions.
| Dimension | Edge Functions | Regional Serverless |
|---|---|---|
| Locations | 100-300+ edge PoPs | 20-30 cloud regions |
| Latency (P50) | 5-20ms globally | 50-200ms for distant users |
| Cold Start | 0-5ms (V8 isolates) | 100-3000ms (container spin-up) |
| Max Execution | 10-30 seconds | 15 minutes (Lambda), 9 min (Cloud Functions) |
| Max Memory | 128MB-1GB typical | Up to 10GB |
| Runtime Support | JavaScript/WebAssembly primary | Many languages natively |
| State Access | Edge KV/Durable Objects | Any cloud service |
| Cost Model | Per-request + CPU time | Per-request + duration + memory |
Edge functions excel for request/response transformations, personalization, A/B testing, authentication, and API gateways. Regional serverless is better for long-running processes, heavy computation, complex state management, and workloads requiring diverse cloud service integrations. Many architectures use both: edge functions for the hot path, regional serverless for background processing.
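The cost-model row in the table above hides an important difference: edge platforms typically bill CPU time only (time spent waiting on an origin fetch is free), while regional serverless bills wall-clock duration multiplied by allocated memory. The sketch below makes that concrete with illustrative rates; the numbers are hypothetical, not any provider's actual price sheet.

```javascript
// Hedged sketch of the two billing models. All rates are ILLUSTRATIVE.
// Edge model: per-request fee + per-CPU-millisecond fee.
function edgeCostUSD(requests, avgCpuMs, ratePerMReq = 0.30, ratePerMCpuMs = 0.02) {
  return (requests / 1e6) * ratePerMReq +
         ((requests * avgCpuMs) / 1e6) * ratePerMCpuMs;
}

// Regional model: per-request fee + GB-seconds (duration x memory).
function regionalCostUSD(requests, avgDurationMs, memoryGB,
                         ratePerMReq = 0.20, ratePerGBSecond = 0.0000166667) {
  const gbSeconds = requests * (avgDurationMs / 1000) * memoryGB;
  return (requests / 1e6) * ratePerMReq + gbSeconds * ratePerGBSecond;
}

// 10M requests/month: 5ms of CPU at the edge vs. 50ms wall time at 128MB
// regionally (the extra 45ms being I/O wait, which the edge does not bill).
const edge = edgeCostUSD(10e6, 5);          // → 4.00
const regional = regionalCostUSD(10e6, 50, 0.125);
```

The takeaway is structural, not the specific totals: I/O-bound hot-path work is cheap at the edge because you pay for compute, not waiting, whereas long-running CPU-heavy work quickly hits edge CPU limits and belongs in regional serverless.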
Cloudflare Workers is the pioneering and most mature edge function platform, running on Cloudflare's global network spanning 300+ cities in 100+ countries. Understanding its architecture reveals the innovations that enable sub-millisecond cold starts and global scale.
V8 Isolate Architecture:
The key innovation of Workers is its use of V8 isolates rather than containers or VMs for function isolation. V8 is Chrome's JavaScript engine, and isolates are lightweight execution contexts within V8:
Request Lifecycle in Cloudflare Workers:
When a request reaches the nearest Cloudflare PoP, the edge terminates TLS and checks the cache. On a cache miss, the request is routed into a V8 isolate—spinning up a fresh one in under 5ms if no warm isolate exists—where the Worker executes, issues any subrequests to origin or KV, and returns a response that can optionally be cached at the edge.
Workers provides several state management primitives: Workers KV for eventually-consistent global key-value storage (reads in <50ms globally), Durable Objects for strongly-consistent, single-threaded state coordination, R2 for S3-compatible object storage at the edge, and D1 for SQLite at the edge. Each serves different consistency and latency requirements.
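A common way to combine these primitives is read-through caching against KV. The sketch below assumes only the KV surface described above (`get`, and `put` with an `expirationTtl` option); the Map-backed mock stands in for a bound namespace like `env.MY_KV` and exists purely so the example is self-contained.

```javascript
// Map-backed stand-in for a Workers KV namespace (illustration only).
function createMockKV() {
  const store = new Map();
  return {
    async get(key) {
      return store.has(key) ? store.get(key) : null;
    },
    async put(key, value, _options = {}) {
      store.set(key, value); // real KV would honor _options.expirationTtl
    },
  };
}

// Read-through: return the cached value if present; otherwise compute,
// cache with a TTL, and return the fresh value.
async function readThrough(kv, key, compute, ttlSeconds = 300) {
  const cached = await kv.get(key);
  if (cached !== null) return { value: cached, hit: true };

  const fresh = await compute();
  await kv.put(key, fresh, { expirationTtl: ttlSeconds });
  return { value: fresh, hit: false };
}
```

Because KV is eventually consistent, this pattern suits data that tolerates brief staleness (feature flags, rendered fragments); coordination that must be strongly consistent belongs in a Durable Object instead.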
Resource Limits and Constraints:
Workers imposes constraints that shape what workloads are suitable:
| Resource | Free Plan | Paid Plan | Enterprise |
|---|---|---|---|
| CPU Time (per request) | 10ms | 30-50ms | Up to 15min (via Cron) |
| Memory | 128MB | 128MB | Variable |
| Script Size | 1MB (compressed) | 5MB | 10MB+ |
| Requests/day | 100,000 | Unlimited | Unlimited |
| Subrequests (fetch) | 50 | 1000 | 1000+ |
| Environment Variables | 64 | 64 | Custom |
| KV Operations | 1000 reads/day | 25M reads/month | Custom |
```javascript
// A production-ready Cloudflare Worker handling personalized content
export default {
  async fetch(request, env, ctx) {
    const url = new URL(request.url);

    // Geographic personalization using Cloudflare's request properties
    const country = request.cf?.country || 'US';
    const city = request.cf?.city || 'Unknown';
    const colo = request.cf?.colo || 'unknown'; // Edge location code (e.g., 'DFW', 'LHR')

    // Check for authentication token
    const authToken = request.headers.get('Authorization');
    if (url.pathname.startsWith('/api/') && !authToken) {
      return new Response('Unauthorized', { status: 401 });
    }

    // Rate limiting using Workers KV
    const clientIP = request.headers.get('CF-Connecting-IP');
    const rateLimitKey = `ratelimit:${clientIP}:${Math.floor(Date.now() / 60000)}`;
    const requestCount = parseInt(await env.RATE_LIMIT_KV.get(rateLimitKey)) || 0;

    if (requestCount > 100) {
      return new Response('Rate limit exceeded', {
        status: 429,
        headers: { 'Retry-After': '60' }
      });
    }

    // Increment counter asynchronously (non-blocking)
    ctx.waitUntil(env.RATE_LIMIT_KV.put(rateLimitKey, String(requestCount + 1), {
      expirationTtl: 120
    }));

    // A/B test assignment based on consistent hashing
    const abGroup = hashToGroup(clientIP, ['control', 'variant-a', 'variant-b']);

    // Fetch from origin with modified headers
    const originRequest = new Request(request);
    originRequest.headers.set('X-AB-Group', abGroup);
    originRequest.headers.set('X-User-Country', country);
    originRequest.headers.set('X-Edge-Location', colo);

    const response = await fetch(originRequest);

    // Clone and modify response
    const modifiedResponse = new Response(response.body, response);
    modifiedResponse.headers.set('X-Served-From', colo);
    modifiedResponse.headers.set('X-AB-Group', abGroup);

    return modifiedResponse;
  }
};

function hashToGroup(input, groups) {
  let hash = 0;
  for (let i = 0; i < input.length; i++) {
    hash = ((hash << 5) - hash) + input.charCodeAt(i);
    hash = hash & hash;
  }
  return groups[Math.abs(hash) % groups.length];
}
```

Lambda@Edge is AWS's edge function offering, tightly integrated with CloudFront (their CDN). Unlike Cloudflare Workers' V8 isolate model, Lambda@Edge uses the same container-based architecture as regional Lambda, extended to edge locations.
Architectural Model:
Lambda@Edge functions are triggered at four points in the CloudFront request lifecycle: viewer request (after CloudFront receives a request from the client), origin request (on a cache miss, before CloudFront forwards the request to the origin), origin response (after CloudFront receives the response from the origin), and viewer response (before CloudFront returns the response to the client).
Lambda@Edge vs. CloudFront Functions:
AWS offers two edge compute options—understanding when to use each is critical:
| Aspect | CloudFront Functions | Lambda@Edge |
|---|---|---|
| Trigger Points | Viewer request/response only | All four trigger points |
| Execution Location | All 400+ CloudFront PoPs | Regional edge caches (~13 locations) |
| Runtime | JavaScript (ES5) | Node.js, Python |
| Max Execution | 1ms | 5-30 seconds (varies by trigger) |
| Max Memory | 2MB | 128-10240MB |
| Max Package Size | 10KB | 50MB (250MB unzipped) |
| Network Access | No | Yes |
| Cost | ~$0.10 per 1M invocations | ~$0.60 per 1M + duration |
| Cold Start | <1ms (always warm) | 100-500ms (container-based) |
| Use Case | Simple transforms, headers | Complex logic, origin modification |
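To make the "simple transforms" row concrete, here is a sketch of a viewer-request CloudFront Function. It is written in the restricted ES5-style runtime the table describes (no async, no network access, roughly 1ms to finish); the header name and behavior are invented for illustration. Note the header format (`{ value: ... }`) differs from Lambda@Edge's array-of-objects format.

```javascript
// CloudFront Function sketch (viewer-request trigger): normalize
// directory URIs and tag the request. Must complete in ~1ms.
function handler(event) {
  var request = event.request;
  var uri = request.uri;

  // Normalize directory requests to index.html
  if (uri.charAt(uri.length - 1) === '/') {
    request.uri = uri + 'index.html';
  }

  // Illustrative header so the origin knows the function ran
  request.headers['x-edge-normalized'] = { value: 'true' };

  return request;
}
```

Anything beyond this scale of logic—calling an origin, reading a body larger than a few KB, using a full Node.js runtime—pushes you to Lambda@Edge.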
Lambda@Edge functions must be deployed in us-east-1 (N. Virginia) and are replicated globally. Updates take 5-15 minutes to propagate. Unlike regional Lambda, you cannot use VPC, environment variables, or Lambda layers. These constraints stem from the replication model and should inform your architecture.
Lambda@Edge Resource Limits by Trigger:
| Limit | Viewer Triggers | Origin Triggers |
|---|---|---|
| Max Execution Time | 5 seconds | 30 seconds |
| Max Memory | 128MB | 10240MB (10GB) |
| Max Response Size | 40KB | 1MB |
| Max Request Body | 40KB (can access) | 1MB (can access) |
```javascript
// Lambda@Edge: Viewer Request for A/B testing and authentication
exports.handler = async (event) => {
  const request = event.Records[0].cf.request;
  const headers = request.headers;

  // Authentication check
  const authHeader = headers['authorization'];
  if (!authHeader || !authHeader[0]) {
    // Check for protected paths
    if (request.uri.startsWith('/api/private/')) {
      return {
        status: '401',
        statusDescription: 'Unauthorized',
        headers: {
          'content-type': [{ value: 'application/json' }],
          'www-authenticate': [{ value: 'Bearer realm="api"' }]
        },
        body: JSON.stringify({ error: 'Authentication required' })
      };
    }
  }

  // A/B test assignment using consistent hashing
  const viewerIp = headers['x-forwarded-for']
    ? headers['x-forwarded-for'][0].value.split(',')[0]
    : 'unknown';
  const experimentGroup = assignGroup(viewerIp, {
    'control': 50,
    'variant-new-ui': 30,
    'variant-fast-checkout': 20
  });

  // Add experiment header for origin processing
  request.headers['x-experiment-group'] = [{ value: experimentGroup }];

  // URL rewriting based on experiment
  if (experimentGroup === 'variant-new-ui' && request.uri === '/') {
    request.uri = '/experiments/new-ui/index.html';
  }

  // Geo-based content selection
  const country = headers['cloudfront-viewer-country'];
  if (country && country[0]) {
    request.headers['x-viewer-country'] = [{ value: country[0].value }];

    // EU visitors get GDPR-compliant version
    const euCountries = ['DE', 'FR', 'IT', 'ES', 'NL', 'BE', 'AT', 'PL'];
    if (euCountries.includes(country[0].value)) {
      request.headers['x-gdpr-mode'] = [{ value: 'true' }];
    }
  }

  return request;
};

// Consistent hashing for A/B assignment
function assignGroup(identifier, weights) {
  // Simple hash function
  let hash = 0;
  for (let i = 0; i < identifier.length; i++) {
    hash = ((hash << 5) - hash) + identifier.charCodeAt(i);
    hash = hash & hash;
  }

  // Normalize to 0-100
  const bucket = Math.abs(hash) % 100;

  // Assign to group based on weights
  let cumulative = 0;
  for (const [group, weight] of Object.entries(weights)) {
    cumulative += weight;
    if (bucket < cumulative) {
      return group;
    }
  }
  return Object.keys(weights)[0]; // Fallback
}
```

Beyond Cloudflare and AWS, several other platforms offer edge function capabilities with distinct architectures and trade-offs:
| Platform | Locations | Architecture | Primary Language | Best For |
|---|---|---|---|---|
| Cloudflare Workers | 300+ | V8 Isolates | JavaScript + Wasm | General-purpose edge compute |
| Lambda@Edge | ~13 regional | Containers | Node.js/Python | AWS integration, complex logic |
| CloudFront Functions | 400+ (viewer only) | JavaScript VM | JavaScript (ES5) | Simple, fast transforms |
| Vercel Edge | 300+ (CF network) | V8 Isolates | JavaScript/TypeScript | Next.js applications |
| Netlify Edge | ~100+ (Deno network) | V8/Deno | TypeScript/JavaScript | JAMstack augmentation |
| Fastly Compute | ~80 high-quality | Wasm (Wasmtime) | Rust/Go/Any→Wasm | Performance-critical workloads |
| Deno Deploy | 35+ | V8/Deno | TypeScript/JavaScript | Deno-native applications |
Choose based on: (1) Existing infrastructure—AWS shops often prefer Lambda@Edge despite limitations; (2) Network size requirements—global consumer apps need 200+ locations; (3) Language requirements—non-JavaScript stacks favor Fastly's Wasm approach; (4) Feature integration—Next.js teams may value Vercel's seamless experience; (5) Latency sensitivity—smaller networks with premium locations may suffice for P99-focused applications.
Edge function platforms achieve multi-tenancy through different isolation mechanisms. Understanding these mechanisms is crucial for reasoning about security, performance, and cold start characteristics.
WebAssembly as an Isolation Mechanism:
WebAssembly (Wasm) offers a third isolation model, used by Fastly and increasingly supported by Workers:
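The Wasm sandbox model can be shown in miniature from JavaScript: a guest module can only touch the capabilities the host explicitly grants at instantiation. The bytes below are a hand-assembled minimal module exporting a single function, `add(a, b)`; this is an illustration of the isolation boundary, not how you would author production Wasm (Fastly, for example, compiles Rust or Go ahead of time).

```javascript
// A complete, hand-assembled Wasm module: one exported function,
// add(a, b) -> a + b, and no imports requested from the host.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                    // local.get 0/1, i32.add, end
]);

// Synchronous instantiation is fine for a tiny module like this;
// edge platforms compile modules ahead of time so no per-request
// compile cost is paid.
const module = new WebAssembly.Module(wasmBytes);
const instance = new WebAssembly.Instance(module, {}); // empty import object: no host capabilities granted
const sum = instance.exports.add(2, 3); // → 5
```

The empty import object is the point: the guest cannot reach the filesystem, network, or host memory unless the host passes those capabilities in, which is what makes Wasm attractive for multi-tenant edge isolation.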
Cold Start Comparison:
| Isolation Model | Cold Start (P50) | Cold Start (P99) | Memory Overhead | Isolation Strength |
|---|---|---|---|---|
| V8 Isolate | 0-5ms | 5-10ms | <5MB | High (V8 guarantees) |
| Container/microVM | 100-500ms | 500-3000ms | 50-500MB | Very High (OS-level) |
| WebAssembly | 1-10ms | 10-30ms | 1-10MB | High (Wasm sandbox) |
| CloudFront Functions | <1ms | 1-2ms | <1MB | Medium (limited VM) |
V8 isolates trade language flexibility for startup speed—JavaScript (and increasingly Wasm) only. Containers trade startup speed for full language support and stronger isolation. CloudFront Functions trade capability for extreme simplicity and speed. Your isolation needs depend on workload sensitivity, language requirements, and latency budgets.
Over the evolution of edge computing, common patterns have emerged for leveraging edge functions effectively: authentication and authorization, rate limiting, geographic personalization, A/B testing, API gateway routing, response aggregation, and edge-rendered content. These patterns represent proven approaches to solving recurring problems.
Start with patterns that offload work from origins (authentication, rate limiting) or reduce latency for personalization. These provide immediate, measurable value. More complex patterns (response aggregation, edge-rendered content) require more sophisticated state management and error handling—add them incrementally as your edge architecture matures.
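Rate limiting is a good first pattern precisely because its core is small. The sketch below isolates the fixed-window logic used with Workers KV in the earlier example: the counter key embeds the current window, so stale windows simply expire. The in-memory Map and injectable clock are illustration-only assumptions; at the edge, the counter would live in KV (with a TTL evicting old windows) or a Durable Object for strict accuracy.

```javascript
// Fixed-window rate limiter sketch. `now` is injectable so the window
// logic can be tested deterministically; the Map stands in for a
// TTL-backed edge store.
function createRateLimiter({ limit, windowMs, now = Date.now }) {
  const counters = new Map();
  return function allow(clientId) {
    const windowId = Math.floor(now() / windowMs);
    const key = clientId + ':' + windowId;   // e.g. "203.0.113.9:29180123"
    const count = (counters.get(key) || 0) + 1;
    counters.set(key, count);
    return count <= limit;                   // true = request allowed
  };
}
```

One caveat worth knowing before relying on this in production: a fixed window permits up to 2x the limit in a burst straddling a window boundary. Sliding-window or token-bucket variants smooth this out at the cost of more state per client.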
Building effective edge functions requires adapting development practices to the constraints and characteristics of edge environments:
```typescript
// Cloudflare Worker implementing best practices

// Global scope: executes once per isolate, not per request
const CONFIG = {
  TIMEOUT_MS: 5000,
  MAX_SUBREQUESTS: 5,
  CACHE_TTL_SECONDS: 300,
};

// Pre-compile regex patterns at module scope
const PATH_PATTERNS = {
  api: /^\/api\/v[1-3]\//,
  static: /\.(js|css|png|jpg|webp|woff2)$/,
};

export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
    const url = new URL(request.url);

    // BEST PRACTICE: Early exit for simple cases
    if (PATH_PATTERNS.static.test(url.pathname)) {
      // Static assets: pass through to origin/cache
      return fetch(request);
    }

    // BEST PRACTICE: Wrap main logic in try-catch with fallback
    try {
      return await handleRequest(request, env, ctx);
    } catch (error) {
      // BEST PRACTICE: Fail open - fall back to origin
      console.error('Edge function error:', error);
      return fetch(request);
    }
  }
};

async function handleRequest(
  request: Request,
  env: Env,
  ctx: ExecutionContext
): Promise<Response> {
  // BEST PRACTICE: Validate inputs early
  const authResult = validateAuth(request, env);
  if (!authResult.valid) {
    return new Response(JSON.stringify({ error: authResult.error }), {
      status: 401,
      headers: { 'Content-Type': 'application/json' }
    });
  }

  // BEST PRACTICE: Check cache before expensive operations
  const cacheKey = request.url;
  const cache = caches.default;
  const cachedResponse = await cache.match(cacheKey);
  if (cachedResponse) {
    return cachedResponse;
  }

  // BEST PRACTICE: Timeout wrapper for origin fetches
  const response = await fetchWithTimeout(request, CONFIG.TIMEOUT_MS);

  // BEST PRACTICE: Clone before caching (response can only be consumed once)
  const responseToCache = response.clone();

  // BEST PRACTICE: Non-blocking cache write
  ctx.waitUntil(
    cache.put(cacheKey, responseToCache)
  );

  return response;
}

// BEST PRACTICE: Reusable timeout wrapper
async function fetchWithTimeout(
  request: Request,
  timeoutMs: number
): Promise<Response> {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const response = await fetch(request, { signal: controller.signal });
    return response;
  } finally {
    clearTimeout(timeoutId);
  }
}

// BEST PRACTICE: Centralized auth validation
function validateAuth(
  request: Request,
  env: Env
): { valid: boolean; error?: string } {
  const authHeader = request.headers.get('Authorization');
  if (!authHeader) {
    return { valid: false, error: 'Missing authorization header' };
  }
  // Token validation logic...
  return { valid: true };
}

interface Env {
  API_KEY: string;
  CACHE_KV: KVNamespace;
}
```

We've explored the landscape of edge function platforms—from the pioneering V8-based architecture of Cloudflare Workers to the container-based approach of Lambda@Edge and the emerging alternatives. Let's consolidate the key insights.
What's Next:
Now that we understand the edge function platforms and patterns, the next page examines edge use cases in depth. We'll explore specific applications—from real-time personalization to IoT processing—and analyze why edge computing provides unique value for each scenario.
You now understand the major edge function platforms, their architectures, constraints, and development patterns. You can evaluate platforms based on your requirements and apply proven patterns for security, personalization, and API gateway use cases. Next, we'll explore where edge computing provides the most compelling value.