Every production system requires tasks that run on a schedule—not in response to user actions or external events, but at specific times or intervals. Report generation, data synchronization, cleanup operations, health checks, and batch processing all fall into this category. In traditional architectures, these tasks often run on dedicated servers or as cron jobs on VMs.
Serverless computing transforms scheduled task execution. Instead of maintaining always-on infrastructure for tasks that might run for minutes per day, cloud schedulers trigger serverless functions exactly when needed. You pay only for the compute time used during execution, not for the hours or days between runs.
This page provides a comprehensive guide to implementing scheduled tasks in serverless environments. We'll cover scheduling mechanisms, cron expression syntax, batch processing patterns, error handling strategies, and operational best practices that ensure your scheduled workloads run reliably at scale.
By the end of this page, you will understand: (1) How cloud schedulers trigger serverless functions, (2) Cron expression syntax for flexible scheduling, (3) Patterns for batch processing within Lambda execution limits, (4) Strategies for long-running scheduled tasks, (5) Error handling and alerting for scheduled workloads, and (6) Operational best practices for production scheduled tasks.
Cloud providers offer managed scheduling services that trigger Lambda functions at specified times or intervals. These schedulers replace traditional cron daemons and task scheduling systems with fully managed, serverless alternatives.
AWS Scheduling Options:
EventBridge Scheduler (Recommended):
EventBridge Rules (Classic):
CloudWatch Events (Legacy):
| Provider | Service | Key Features | Minimum Interval |
|---|---|---|---|
| AWS | EventBridge Scheduler | Timezone, retries, DLQ, one-time | 1 minute |
| AWS | EventBridge Rules | Event patterns + schedules | 1 minute |
| Azure | Timer Trigger | NCRONTAB expressions, bindings | 1 second (not recommended) |
| GCP | Cloud Scheduler | HTTP, Pub/Sub, App Engine targets | 1 minute |
| GCP | Cloud Tasks | Task queuing with scheduled delivery | Variable |
Scheduling Terminology:
Rate Expression: Specifies fixed intervals (every 5 minutes, every 1 hour). Simple but less flexible:
```
rate(5 minutes)
rate(1 hour)
rate(7 days)
```
Cron Expression: Specifies exact times using six fields (minute, hour, day-of-month, month, day-of-week, year). More complex but highly flexible:
```
cron(0 12 * * ? *)      # Every day at 12:00 UTC
cron(0 8 ? * MON-FRI *) # Weekdays at 8:00 UTC
cron(0 0 1 * ? *)       # First day of every month at midnight
```
One-Time Schedule: Triggers exactly once at a specified time, useful for delayed processing or scheduled events:
```
at(2024-12-31T23:59:00)
```
EventBridge Scheduler is AWS's newest and most capable scheduling service. It supports timezone-aware schedules, flexible retry policies, and dead letter queues out of the box. Unless you need tight integration with EventBridge event patterns, choose EventBridge Scheduler for new scheduled tasks.
Cron expressions provide powerful scheduling flexibility, but their syntax can be confusing. AWS uses a six-field format that differs slightly from traditional Unix cron.
AWS Cron Format:
```
cron(minute hour day-of-month month        day-of-week year)
     0-59   0-23 1-31         1-12|JAN-DEC 1-7|SUN-SAT  *|1970-2199
```
Field Descriptions:
| Field | Values | Special Characters | Notes |
|---|---|---|---|
| Minute | 0-59 | , - * / | - |
| Hour | 0-23 | , - * / | UTC by default |
| Day-of-month | 1-31 | , - * / ? L W | ? = no specific value |
| Month | 1-12 or JAN-DEC | , - * / | - |
| Day-of-week | 1-7 or SUN-SAT | , - * / ? L # | 1 = Sunday |
| Year | 1970-2199 or * | , - * / | Typically * |
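The six-field format and the day-of-month/day-of-week rule are easy to get wrong. A small structural check can catch the common mistakes before deployment; this is a sketch, not AWS's full grammar, and `validateAwsCron` is a hypothetical helper:

```typescript
// Minimal structural check for AWS six-field cron expressions.
// Verifies the field count and the rule that exactly one of
// day-of-month / day-of-week must be "?". Not the full AWS grammar.
function validateAwsCron(expression: string): boolean {
  const match = expression.match(/^cron\((.+)\)$/);
  if (!match) return false;

  const fields = match[1].trim().split(/\s+/);
  if (fields.length !== 6) return false; // minute hour dom month dow year

  const [, , dayOfMonth, , dayOfWeek] = fields;
  // Exactly one of day-of-month / day-of-week must be "?"
  if ((dayOfMonth === "?") === (dayOfWeek === "?")) return false;

  return true;
}

console.log(validateAwsCron("cron(0 12 * * ? *)"));   // true
console.log(validateAwsCron("cron(0 12 15 * FRI *)")); // false: both fields specific
```

A check like this fits naturally into a CI step that lints schedule definitions before they reach the deployment pipeline.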
```
# Basic Schedules
cron(0 12 * * ? *)         # Every day at 12:00 PM UTC
cron(0 18 ? * MON-FRI *)   # Weekdays at 6:00 PM UTC
cron(0 8 1 * ? *)          # First of every month at 8:00 AM UTC
cron(0 0 ? * SUN *)        # Every Sunday at midnight UTC

# Complex Schedules
cron(0/15 * * * ? *)       # Every 15 minutes (0, 15, 30, 45)
cron(0 9-17 ? * MON-FRI *) # Every hour 9 AM - 5 PM on weekdays
cron(30 4 1,15 * ? *)      # 4:30 AM on 1st and 15th of each month
cron(0 0 ? * 6#1 *)        # First Friday of every month at midnight
cron(0 23 L * ? *)         # Last day of every month at 11 PM

# Special Characters Explained
# *  Match any value
# ?  No specific value (for day-of-month/day-of-week conflict)
# -  Range (MON-FRI, 1-5)
# ,  List (1,15,28)
# /  Increment (0/15 = every 15 starting at 0)
# L  Last (last day of month, last Friday)
# W  Weekday nearest to specified day
# #  Nth occurrence (6#1 = first Friday)
```

Common Scheduling Patterns:
Interval-Based:

- `cron(0/5 * * * ? *)` — Every 5 minutes
- `cron(0 * * * ? *)` — Every hour on the hour
- `cron(0 0/6 * * ? *)` — Every 6 hours

Business Hours:

- `cron(0 9-17 ? * MON-FRI *)` — Every hour during business hours
- `cron(0,30 9-17 ? * MON-FRI *)` — Every 30 minutes during business hours

Daily Reports:

- `cron(0 6 * * ? *)` — Daily at 6 AM UTC
- `cron(0 23 * * ? *)` — Daily at 11 PM UTC (end of day processing)

Weekly/Monthly:

- `cron(0 10 ? * MON *)` — Every Monday at 10 AM
- `cron(0 0 1 * ? *)` — First of month at midnight
- `cron(0 0 L * ? *)` — Last day of month at midnight

You cannot specify both day-of-month and day-of-week simultaneously in AWS cron. One must be '?'. For example, 'cron(0 12 15 * FRI *)' is invalid—you can't say 'the 15th if it's a Friday'. Use '?' for the field you're not constraining: 'cron(0 12 15 * ? *)' for the 15th, or 'cron(0 12 ? * FRI *)' for Fridays.
| Aspect | EventBridge Scheduler | EventBridge Rules |
|---|---|---|
| Default timezone | UTC | UTC only |
| Custom timezone | Yes (IANA format) | No |
| DST handling | Automatic adjustment | N/A |
| Example | America/New_York, Europe/London | N/A |
Implementing scheduled Lambda functions requires attention to the unique characteristics of time-triggered execution: there's no user request context, execution might overlap between invocations, and long-running tasks must work within Lambda's timeout constraints.
Basic Scheduled Function Structure:
```typescript
import { ScheduledHandler } from "aws-lambda";
import { DynamoDBClient, ScanCommand } from "@aws-sdk/client-dynamodb";
import { SESClient, SendEmailCommand } from "@aws-sdk/client-ses";

const dynamodb = new DynamoDBClient({});
const ses = new SESClient({});

interface DailyReportContext {
  scheduledTime: string;
  executionId: string;
}

export const handler: ScheduledHandler = async (event) => {
  const context: DailyReportContext = {
    scheduledTime: event.time,
    executionId: event.id
  };

  console.log(JSON.stringify({
    level: "INFO",
    message: "Starting daily report generation",
    ...context
  }));

  try {
    // 1. Gather data
    const reportData = await gatherReportData();

    // 2. Generate report
    const report = await generateReport(reportData);

    // 3. Send report
    await sendReport(report);

    console.log(JSON.stringify({
      level: "INFO",
      message: "Daily report completed successfully",
      recordsProcessed: reportData.length,
      ...context
    }));
  } catch (error) {
    console.error(JSON.stringify({
      level: "ERROR",
      message: "Daily report failed",
      error: (error as Error).message,
      stack: (error as Error).stack,
      ...context
    }));
    throw error; // Rethrow to trigger retry/DLQ
  }
};

async function gatherReportData(): Promise<any[]> {
  const yesterday = new Date();
  yesterday.setDate(yesterday.getDate() - 1);
  const dateKey = yesterday.toISOString().split('T')[0];

  const result = await dynamodb.send(new ScanCommand({
    TableName: process.env.ORDERS_TABLE!,
    FilterExpression: "orderDate = :date",
    ExpressionAttributeValues: { ":date": { S: dateKey } }
  }));

  return result.Items || [];
}

async function generateReport(data: any[]): Promise<string> {
  const totalOrders = data.length;
  const totalRevenue = data.reduce(
    (sum, item) => sum + parseFloat(item.total?.N || "0"), 0);

  return `
Daily Sales Report
==================
Date: ${new Date().toISOString().split('T')[0]}
Total Orders: ${totalOrders}
Total Revenue: $${totalRevenue.toFixed(2)}
`.trim();
}

async function sendReport(report: string): Promise<void> {
  await ses.send(new SendEmailCommand({
    Source: process.env.SENDER_EMAIL!,
    Destination: { ToAddresses: [process.env.REPORT_RECIPIENT!] },
    Message: {
      Subject: { Data: "Daily Sales Report" },
      Body: { Text: { Data: report } }
    }
  }));
}
```

Infrastructure as Code (CDK):
```typescript
import * as cdk from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as scheduler from "aws-cdk-lib/aws-scheduler";
import * as iam from "aws-cdk-lib/aws-iam";

export class ScheduledTaskStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Lambda function for the scheduled task
    const reportFunction = new lambda.Function(this, "DailyReportFunction", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "daily-report.handler",
      code: lambda.Code.fromAsset("functions"),
      timeout: cdk.Duration.minutes(5),
      memorySize: 512,
      environment: {
        ORDERS_TABLE: "orders",
        SENDER_EMAIL: "reports@example.com",
        REPORT_RECIPIENT: "team@example.com"
      }
    });

    // Dead letter queue for failed invocations
    const dlq = new cdk.aws_sqs.Queue(this, "ReportDLQ", {
      queueName: "daily-report-dlq",
      retentionPeriod: cdk.Duration.days(14)
    });

    // IAM role the scheduler assumes to invoke the function
    const schedulerRole = new iam.Role(this, "SchedulerRole", {
      assumedBy: new iam.ServicePrincipal("scheduler.amazonaws.com")
    });
    reportFunction.grantInvoke(schedulerRole);

    // EventBridge Scheduler with timezone and retry policy
    new scheduler.CfnSchedule(this, "DailyReportSchedule", {
      name: "daily-sales-report",
      description: "Generate and send daily sales report",
      // Run at 6 AM Eastern time every day
      scheduleExpression: "cron(0 6 * * ? *)",
      scheduleExpressionTimezone: "America/New_York",
      flexibleTimeWindow: {
        mode: "FLEXIBLE",
        maximumWindowInMinutes: 15 // Spread load across 15-minute window
      },
      target: {
        arn: reportFunction.functionArn,
        roleArn: schedulerRole.roleArn,
        retryPolicy: {
          maximumEventAgeInSeconds: 3600, // 1 hour max
          maximumRetryAttempts: 3
        },
        deadLetterConfig: {
          arn: dlq.queueArn
        }
      }
    });
  }
}
```

EventBridge Scheduler's flexible time window spreads invocations across a time range rather than triggering all at once. If many scheduled tasks run at the same time (e.g., hourly at :00), enabling flexible windows reduces thundering herd effects and infrastructure strain.
Many scheduled tasks involve processing large datasets—more data than can be processed within a single Lambda invocation (15-minute timeout). Several patterns enable effective batch processing in serverless architectures.
Pattern 1: Fan-Out with SQS
The scheduler triggers a "coordinator" function that queries for work and distributes it across worker functions via SQS:
```typescript
import { ScheduledHandler } from "aws-lambda";
import { SQSClient, SendMessageBatchCommand } from "@aws-sdk/client-sqs";
import { DynamoDBClient, ScanCommand } from "@aws-sdk/client-dynamodb";

const sqs = new SQSClient({});
const dynamodb = new DynamoDBClient({});
const BATCH_SIZE = 25;     // Items per worker
const SQS_BATCH_SIZE = 10; // SQS max batch size

export const coordinator: ScheduledHandler = async (event) => {
  console.log("Starting batch coordination", { scheduledTime: event.time });

  // 1. Query all items that need processing
  const itemsToProcess = await getItemsToProcess();
  console.log(`Found ${itemsToProcess.length} items to process`);

  if (itemsToProcess.length === 0) {
    console.log("No items to process, exiting");
    return;
  }

  // 2. Split into batches
  const batches = chunkArray(itemsToProcess, BATCH_SIZE);
  console.log(`Created ${batches.length} batches`);

  // 3. Send batches to SQS
  const sqsBatches = chunkArray(batches, SQS_BATCH_SIZE);
  for (const sqsBatch of sqsBatches) {
    await sqs.send(new SendMessageBatchCommand({
      QueueUrl: process.env.WORKER_QUEUE_URL!,
      Entries: sqsBatch.map((batch, index) => ({
        Id: `batch-${Date.now()}-${index}`,
        MessageBody: JSON.stringify({
          batchId: `${event.time}-${index}`,
          items: batch
        })
      }))
    }));
  }

  console.log(`Dispatched ${batches.length} batches to worker queue`);
};

async function getItemsToProcess(): Promise<any[]> {
  const items: any[] = [];
  let lastKey: Record<string, any> | undefined;

  do {
    const result = await dynamodb.send(new ScanCommand({
      TableName: process.env.ITEMS_TABLE!,
      FilterExpression: "#status = :pending",
      ExpressionAttributeNames: { "#status": "status" },
      ExpressionAttributeValues: { ":pending": { S: "pending" } },
      ExclusiveStartKey: lastKey
    }));

    items.push(...(result.Items || []));
    lastKey = result.LastEvaluatedKey;
  } while (lastKey);

  return items;
}

function chunkArray<T>(array: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < array.length; i += size) {
    chunks.push(array.slice(i, i + size));
  }
  return chunks;
}
```

Pattern 2: Step Functions for Complex Orchestration
For batch processing with complex logic, error handling, or human approval steps, Step Functions provides visual workflow orchestration:
```json
{
  "Comment": "Batch processing workflow",
  "StartAt": "GetItemsToProcess",
  "States": {
    "GetItemsToProcess": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789:function:get-items",
      "Next": "CheckItemCount"
    },
    "CheckItemCount": {
      "Type": "Choice",
      "Choices": [
        {
          "Variable": "$.itemCount",
          "NumericEquals": 0,
          "Next": "NoItemsToProcess"
        }
      ],
      "Default": "ProcessItemsInParallel"
    },
    "ProcessItemsInParallel": {
      "Type": "Map",
      "ItemsPath": "$.items",
      "MaxConcurrency": 10,
      "Iterator": {
        "StartAt": "ProcessItem",
        "States": {
          "ProcessItem": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789:function:process-item",
            "Retry": [
              {
                "ErrorEquals": ["RetryableError"],
                "IntervalSeconds": 5,
                "MaxAttempts": 3,
                "BackoffRate": 2
              }
            ],
            "Catch": [
              {
                "ErrorEquals": ["States.ALL"],
                "Next": "MarkItemFailed"
              }
            ],
            "End": true
          },
          "MarkItemFailed": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789:function:mark-failed",
            "End": true
          }
        }
      },
      "Next": "GenerateReport"
    },
    "GenerateReport": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789:function:generate-report",
      "End": true
    },
    "NoItemsToProcess": {
      "Type": "Succeed"
    }
  }
}
```

Use SQS fan-out for simple, homogeneous batch processing where all items are processed the same way. Use Step Functions when you need complex orchestration, conditional logic, error handling with rollback, or visibility into workflow execution state.
Lambda's 15-minute timeout presents challenges for tasks that inherently take longer. Several strategies enable longer execution times while maintaining serverless benefits.
Strategy 1: Continuation Tokens
Process data in chunks, saving progress after each chunk. If the function times out, the next invocation continues from where the previous left off:
```typescript
import { ScheduledHandler, Context } from "aws-lambda";
import { DynamoDBClient, GetItemCommand, PutItemCommand } from "@aws-sdk/client-dynamodb";

const dynamodb = new DynamoDBClient({});
const CHUNK_SIZE = 1000;
const MIN_REMAINING_TIME_MS = 60000; // 1 minute buffer

// getNextChunk, processItem, and triggerContinuation are
// application-specific helpers omitted here for brevity.

interface JobState {
  jobId: string;
  lastProcessedId: string | null;
  processedCount: number;
  status: "in_progress" | "completed";
}

export const handler: ScheduledHandler = async (event, context: Context) => {
  const jobId = `job-${new Date().toISOString().split('T')[0]}`;

  // Load existing state or initialize
  let state = await loadJobState(jobId) || {
    jobId,
    lastProcessedId: null,
    processedCount: 0,
    status: "in_progress" as const
  };

  if (state.status === "completed") {
    console.log(`Job ${jobId} already completed`);
    return;
  }

  console.log(`Resuming job ${jobId} from ${state.lastProcessedId || 'beginning'}`);

  while (context.getRemainingTimeInMillis() > MIN_REMAINING_TIME_MS) {
    // Get next chunk of items
    const items = await getNextChunk(state.lastProcessedId, CHUNK_SIZE);

    if (items.length === 0) {
      // No more items - mark complete
      state.status = "completed";
      await saveJobState(state);
      console.log(`Job ${jobId} completed. Processed ${state.processedCount} items total.`);
      return;
    }

    // Process chunk
    for (const item of items) {
      await processItem(item);
      state.lastProcessedId = item.id;
      state.processedCount++;
    }

    // Save progress after each chunk
    await saveJobState(state);
    console.log(`Processed chunk. Total: ${state.processedCount}`);
  }

  // Running low on time - save state and exit
  await saveJobState(state);
  console.log(`Pausing job ${jobId} at ${state.processedCount} items. Will resume next invocation.`);

  // Re-trigger self for continuation
  await triggerContinuation(jobId);
};

async function loadJobState(jobId: string): Promise<JobState | null> {
  const result = await dynamodb.send(new GetItemCommand({
    TableName: process.env.JOB_STATE_TABLE!,
    Key: { jobId: { S: jobId } }
  }));

  if (!result.Item) return null;

  return {
    jobId: result.Item.jobId.S!,
    lastProcessedId: result.Item.lastProcessedId?.S || null,
    processedCount: parseInt(result.Item.processedCount.N!),
    status: result.Item.status.S as "in_progress" | "completed"
  };
}

async function saveJobState(state: JobState): Promise<void> {
  await dynamodb.send(new PutItemCommand({
    TableName: process.env.JOB_STATE_TABLE!,
    Item: {
      jobId: { S: state.jobId },
      lastProcessedId: state.lastProcessedId
        ? { S: state.lastProcessedId }
        : { NULL: true },
      processedCount: { N: state.processedCount.toString() },
      status: { S: state.status },
      updatedAt: { S: new Date().toISOString() }
    }
  }));
}
```

Strategy 2: AWS Fargate for Truly Long Tasks
For tasks that might run for hours, Fargate provides serverless containers without Lambda's timeout. A scheduled rule launches an ECS Fargate task instead of a Lambda function, so the workload can run until it finishes.
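As a hedged sketch of the wiring (construct and property names are from `aws-cdk-lib/aws-ecs-patterns`; verify them against your CDK version, and the container image below is a placeholder):

```typescript
import * as cdk from "aws-cdk-lib";
import * as appscaling from "aws-cdk-lib/aws-applicationautoscaling";
import * as ecs from "aws-cdk-lib/aws-ecs";
import * as ecsPatterns from "aws-cdk-lib/aws-ecs-patterns";

export class LongRunningTaskStack extends cdk.Stack {
  constructor(scope: cdk.App, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Cluster for the batch task (creates a default VPC if none is supplied)
    const cluster = new ecs.Cluster(this, "BatchCluster");

    // Nightly Fargate task at 2 AM UTC, with no 15-minute ceiling
    new ecsPatterns.ScheduledFargateTask(this, "NightlyExport", {
      cluster,
      schedule: appscaling.Schedule.expression("cron(0 2 * * ? *)"),
      scheduledFargateTaskImageOptions: {
        image: ecs.ContainerImage.fromRegistry("example/export-job"), // placeholder image
        memoryLimitMiB: 2048,
        cpu: 1024
      }
    });
  }
}
```

The trade-off is slower startup (container provisioning takes tens of seconds) and somewhat more configuration surface than a Lambda function.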
Strategy 3: Recursive Invocation
A function invokes itself asynchronously before timing out, passing continuation state. This extends processing indefinitely but adds complexity and cost.
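If you do use recursive invocation, a minimal safeguard is a depth counter carried in the continuation payload. The sketch below shows only the guard logic; the names are illustrative, and the actual asynchronous self-invoke (e.g., Lambda's `InvokeCommand` with `InvocationType: "Event"`) is omitted:

```typescript
// Recursion guard for self-invoking functions: the payload carries a depth
// counter, and the function refuses to chain past a hard ceiling so a bug
// cannot recurse (and bill) forever.
interface ContinuationPayload {
  jobId: string;
  cursor: string | null; // where the next invocation should resume
  depth: number;         // how many times the chain has continued so far
}

const MAX_CHAIN_DEPTH = 50;

function nextInvocation(
  current: ContinuationPayload,
  newCursor: string | null
): ContinuationPayload | null {
  if (newCursor === null) return null; // work finished, stop chaining
  if (current.depth + 1 > MAX_CHAIN_DEPTH) {
    throw new Error(`Job ${current.jobId} exceeded max chain depth`);
  }
  return { jobId: current.jobId, cursor: newCursor, depth: current.depth + 1 };
}
```

A returned payload means "invoke yourself again with this"; `null` means stop; the thrown error surfaces runaway chains to your alerting instead of letting them spin silently.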
| Strategy | Max Duration | Complexity | Best For |
|---|---|---|---|
| Continuation Tokens | Multiple 15-min chunks | Medium | Resumable batch processing |
| Step Functions | 1 year workflow | Medium | Complex multi-step workflows |
| Fargate Tasks | Hours/days | Higher | ML training, data processing |
| Recursive Invocation | Unlimited | High | Avoid—harder to debug/monitor |
Always persist progress to external storage (DynamoDB, S3) rather than relying on function state. Lambda containers are ephemeral—if a container is recycled or a function fails, in-memory state is lost. External progress tracking enables recovery and visibility.
Scheduled tasks run without user interaction, making robust error handling and monitoring critical. Unlike API endpoints where users report issues, failed scheduled tasks may go unnoticed for hours or days without proper observability.
Error Handling Strategies:
Built-in Retries: EventBridge Scheduler supports configurable retry policies (maximum retry attempts and maximum event age), plus a dead letter queue for invocations that exhaust their retries.
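As an illustration of the shape, the retry policy lives on the schedule's target; the ARNs below are hypothetical placeholders:

```typescript
// Target configuration for an EventBridge Scheduler schedule, showing where
// the retry policy and dead letter queue sit. ARNs are placeholders.
const target = {
  arn: "arn:aws:lambda:us-east-1:123456789012:function:daily-report",  // placeholder
  roleArn: "arn:aws:iam::123456789012:role/scheduler-invoke-role",     // placeholder
  retryPolicy: {
    maximumRetryAttempts: 3,        // retry failed invocations up to 3 times
    maximumEventAgeInSeconds: 3600  // give up once the event is older than 1 hour
  },
  deadLetterConfig: {
    arn: "arn:aws:sqs:us-east-1:123456789012:daily-report-dlq" // exhausted retries land here
  }
};

console.log(JSON.stringify(target.retryPolicy));
```

Setting a bounded `maximumEventAgeInSeconds` matters for scheduled work: retrying a "6 AM report" at noon is often worse than skipping it and alerting.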
Graceful Degradation: Design tasks to handle partial failures:
```typescript
import { ScheduledHandler } from "aws-lambda";
import { SNSClient, PublishCommand } from "@aws-sdk/client-sns";

const sns = new SNSClient({});

// getItemsToProcess and processItem are application-specific helpers
// omitted here for brevity.

interface ProcessingResult {
  totalItems: number;
  successCount: number;
  failureCount: number;
  failures: Array<{ id: string; error: string }>;
}

export const handler: ScheduledHandler = async (event) => {
  const result: ProcessingResult = {
    totalItems: 0,
    successCount: 0,
    failureCount: 0,
    failures: []
  };
  const startTime = Date.now();

  console.log(JSON.stringify({
    level: "INFO",
    message: "Starting scheduled task",
    scheduledTime: event.time
  }));

  try {
    const items = await getItemsToProcess();
    result.totalItems = items.length;

    // Process items individually, capturing failures
    for (const item of items) {
      try {
        await processItem(item);
        result.successCount++;
      } catch (error) {
        result.failureCount++;
        result.failures.push({
          id: item.id,
          error: (error as Error).message
        });

        // Log individual failure but continue processing
        console.error(JSON.stringify({
          level: "ERROR",
          message: "Failed to process item",
          itemId: item.id,
          error: (error as Error).message
        }));
      }
    }

    // Log summary
    const duration = Date.now() - startTime;
    console.log(JSON.stringify({
      level: result.failureCount > 0 ? "WARN" : "INFO",
      message: "Scheduled task completed",
      ...result,
      durationMs: duration
    }));

    // Alert if failures exceed threshold
    if (result.failureCount > 0) {
      await sendFailureAlert(result);
    }

    // Throw if too many failures (trigger retry/DLQ)
    const failureRate = result.failureCount / result.totalItems;
    if (failureRate > 0.1) { // >10% failure rate
      throw new Error(`High failure rate: ${(failureRate * 100).toFixed(1)}%`);
    }
  } catch (error) {
    console.error(JSON.stringify({
      level: "ERROR",
      message: "Scheduled task failed",
      error: (error as Error).message,
      stack: (error as Error).stack
    }));
    throw error;
  }
};

async function sendFailureAlert(result: ProcessingResult): Promise<void> {
  const message = `
Scheduled Task Alert
====================
Total Items: ${result.totalItems}
Successes: ${result.successCount}
Failures: ${result.failureCount}

Failed Items:
${result.failures.slice(0, 10).map(f => `- ${f.id}: ${f.error}`).join('\n')}
${result.failures.length > 10 ? `... and ${result.failures.length - 10} more` : ''}
`.trim();

  await sns.send(new PublishCommand({
    TopicArn: process.env.ALERT_TOPIC_ARN!,
    Subject: `[ALERT] Scheduled Task Failures: ${result.failureCount}/${result.totalItems}`,
    Message: message
  }));
}
```

Monitoring and Alerting:
Critical metrics for scheduled tasks:

- Invocations: confirms the schedule actually fired
- Errors and error rate: failed executions
- Duration: watch for runs creeping toward the configured timeout
- Throttles: concurrency limits blocking execution
- Dead letter queue depth: invocations that exhausted their retries

Alert Conditions:

- Any failed invocation of a business-critical task
- No invocation within the expected schedule window
- Duration exceeding a set percentage of the timeout
- DLQ message count greater than zero
The most dangerous failure mode for scheduled tasks is silent failure—the task doesn't run but nothing alerts you. Always monitor for missing invocations, not just failed ones. CloudWatch alarms on 'Invocations < 1' for expected schedules can catch this.
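One way to implement that detection, sketched with illustrative names: record a heartbeat timestamp on every successful run, and have a separate watchdog compare it against the expected interval (with some tolerance for scheduler jitter):

```typescript
// "Dead man's switch" check for silent failures: a watchdog compares the
// task's last recorded heartbeat against its expected interval. The
// tolerance factor allows normal jitter before alerting.
function isScheduleStale(
  lastRunIso: string,          // heartbeat written by the task's last success
  expectedIntervalMs: number,  // how often the task should run
  now: Date = new Date(),
  toleranceFactor = 1.5
): boolean {
  const lastRun = new Date(lastRunIso).getTime();
  return now.getTime() - lastRun > expectedIntervalMs * toleranceFactor;
}
```

An hourly task whose heartbeat is two hours old would be flagged stale; one that last ran 50 minutes ago would not. The watchdog itself should be trivial enough that its own failure is unlikely.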
Scheduled tasks power a wide variety of backend operations. Understanding common patterns helps you design robust solutions for your specific requirements.
Data Aggregation and Reporting:
```typescript
// Aggregate daily metrics into a summary table.
// (DynamoDB client setup and the countEvents/sumEventField query helpers
// are omitted here for brevity.)
export const handler: ScheduledHandler = async (event) => {
  const yesterday = new Date();
  yesterday.setDate(yesterday.getDate() - 1);
  const dateKey = yesterday.toISOString().split('T')[0];

  // Query raw events
  const pageViews = await countEvents("page_view", dateKey);
  const signups = await countEvents("user_signup", dateKey);
  const purchases = await countEvents("purchase", dateKey);
  const revenue = await sumEventField("purchase", "amount", dateKey);

  // Write daily summary
  await dynamodb.send(new PutItemCommand({
    TableName: "daily_metrics",
    Item: {
      date: { S: dateKey },
      pageViews: { N: pageViews.toString() },
      signups: { N: signups.toString() },
      purchases: { N: purchases.toString() },
      revenue: { N: revenue.toFixed(2) },
      calculatedAt: { S: new Date().toISOString() }
    }
  }));

  console.log(`Aggregated metrics for ${dateKey}`);
};
```

Cleanup and Maintenance:
```typescript
// Delete expired user sessions (retention: 30 days).
// (DynamoDB client setup and command imports are omitted here for brevity.)
export const handler: ScheduledHandler = async (event) => {
  const cutoffDate = new Date();
  cutoffDate.setDate(cutoffDate.getDate() - 30);
  const cutoffTimestamp = cutoffDate.toISOString();

  let deletedCount = 0;
  let lastKey: Record<string, any> | undefined;

  do {
    // Find expired sessions
    const result = await dynamodb.send(new ScanCommand({
      TableName: "user_sessions",
      FilterExpression: "lastActivity < :cutoff",
      ExpressionAttributeValues: { ":cutoff": { S: cutoffTimestamp } },
      ExclusiveStartKey: lastKey
    }));

    // Batch delete expired sessions
    if (result.Items && result.Items.length > 0) {
      const deleteRequests = result.Items.map(item => ({
        DeleteRequest: { Key: { sessionId: item.sessionId } }
      }));

      // BatchWrite in chunks of 25
      for (let i = 0; i < deleteRequests.length; i += 25) {
        await dynamodb.send(new BatchWriteItemCommand({
          RequestItems: {
            "user_sessions": deleteRequests.slice(i, i + 25)
          }
        }));
      }

      deletedCount += result.Items.length;
    }

    lastKey = result.LastEvaluatedKey;
  } while (lastKey);

  console.log(`Deleted ${deletedCount} expired sessions`);
};
```

External System Synchronization:
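A common shape for scheduled synchronization is watermark-based: each run pulls only the records changed since the last successful sync, then advances the watermark. The sketch below uses injected dependencies so the logic stands alone; in practice the record source would be an external API and the watermark would be persisted in DynamoDB or S3:

```typescript
// Watermark-based sync sketch. ISO-8601 timestamps compare correctly as
// strings, so the watermark is kept as a string throughout.
interface SyncRecord {
  id: string;
  updatedAt: string; // ISO-8601 timestamp from the source system
}

async function syncSince(
  watermark: string,
  fetchChanged: (since: string) => Promise<SyncRecord[]>, // external source
  apply: (record: SyncRecord) => Promise<void>            // local write
): Promise<string> {
  const changed = await fetchChanged(watermark);

  let newWatermark = watermark;
  for (const record of changed) {
    await apply(record);
    // Advance the watermark only past records we actually applied
    if (record.updatedAt > newWatermark) newWatermark = record.updatedAt;
  }

  return newWatermark; // persist this for the next scheduled run
}
```

Advancing the watermark only after each record is applied means a crash mid-run reprocesses some records rather than skipping any, so the `apply` step should be idempotent.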
Health Checks and Notifications:
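A minimal health-check sketch, with the HTTP probe injected so the logic is testable; in a real handler the probe would wrap an HTTP call and the failures would be published to SNS:

```typescript
// Ping a list of endpoints and collect the ones that are unhealthy.
// A non-2xx status or a thrown error (timeout, DNS failure) counts as failing.
async function checkEndpoints(
  urls: string[],
  probe: (url: string) => Promise<number> // returns the HTTP status code
): Promise<string[]> {
  const failing: string[] = [];
  for (const url of urls) {
    try {
      const status = await probe(url);
      if (status < 200 || status >= 300) failing.push(url);
    } catch {
      failing.push(url); // network error counts as a failure
    }
  }
  return failing;
}
```

Running this on a `rate(5 minutes)` schedule and alerting when the returned list is non-empty gives a simple external uptime monitor for services that lack one.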
| Use Case | Suggested Schedule | Considerations |
|---|---|---|
| Daily reports | cron(0 6 * * ? *) | Run after midnight data closes |
| Hourly aggregation | cron(5 * * * ? *) | Offset from :00 to avoid congestion |
| Cache refresh | rate(15 minutes) | Balance freshness vs. cost |
| Cleanup jobs | cron(0 3 * * ? *) | Run during low-traffic hours |
| Health checks | rate(5 minutes) | More frequent for critical services |
| Weekly summary | cron(0 9 ? * MON *) | Monday morning for weekly review |
Avoid scheduling tasks at common times like :00, :15, :30, or :45 minutes. Many systems trigger at these times, causing congestion. Offset your schedules (e.g., :05, :17, :42) to reduce resource contention and thundering herd effects.
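One way to apply this systematically, as a sketch: derive a stable minute offset from each task's name, so every task gets its own slot without manual coordination (the hash function is illustrative, not cryptographic):

```typescript
// Deterministic per-task minute offset: the same task name always maps to
// the same minute in [0, 60), spreading many "hourly" tasks across the hour
// instead of all firing at :00.
function minuteOffset(taskName: string, modulo = 60): number {
  let hash = 0;
  for (const ch of taskName) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // keep as unsigned 32-bit
  }
  return hash % modulo;
}

// e.g. build the expression as `cron(${minuteOffset("daily-report")} 6 * * ? *)`
```

Because the offset is derived rather than hand-picked, adding a new task never requires checking a spreadsheet of "taken" minutes.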
Scheduled tasks are a fundamental serverless pattern, enabling batch processing, maintenance operations, and time-based workflows without maintaining always-on infrastructure. By understanding scheduling mechanisms, processing patterns, and operational practices, you can build reliable automated systems.
Let's consolidate the key takeaways:

- Managed schedulers (EventBridge Scheduler, Azure Timer Triggers, GCP Cloud Scheduler) replace cron servers with pay-per-execution triggers.
- AWS cron expressions use six fields; day-of-month and day-of-week cannot both be specific, so one must be '?'.
- For datasets too large for one invocation, fan work out via SQS or orchestrate it with Step Functions.
- Work past the 15-minute Lambda timeout with continuation tokens, Step Functions, or Fargate, always persisting progress externally.
- Monitor for silent failures: alert on missing invocations, not just failed ones.
- Offset schedules from common times like :00 and :30 to avoid thundering herds.
What's Next:
With scheduled tasks covered, we'll explore Data Processing Pipelines—another powerful serverless pattern for handling streaming data, ETL workflows, and real-time analytics using serverless components.
You now have comprehensive knowledge of implementing scheduled tasks in serverless architectures. From cron expressions to batch processing patterns to operational monitoring—these patterns enable you to automate backend operations efficiently, paying only for the compute time actually used.