Unit tests verify components. Integration tests verify infrastructure connections. Contract tests verify pairwise compatibility. But none of these tell you whether a user can actually complete their journey through your system. End-to-end (E2E) tests verify that the entire system works together to deliver real user value.
In a microservices architecture, E2E testing is both critically important and notoriously challenging. When dozens of services, databases, message brokers, and external dependencies must coordinate to process a single user request, E2E tests provide the only definitive proof that the system works as a whole—but they also introduce the slowest, flakiest, and most expensive tests in your pyramid.
By the end of this page, you will understand when E2E tests are worth their cost, how to design E2E tests that provide maximum value with minimum maintenance burden, strategies for managing test environments, and how to keep E2E tests fast and reliable enough to run in CI.
E2E tests occupy the top of the testing pyramid—fewest in number, slowest to run, but highest in confidence. They verify what no other test type can: that the complete system delivers the intended user experience.
What E2E Tests Uniquely Verify:
| Bug Type | Unit Test | Integration Test | Contract Test | E2E Test |
|---|---|---|---|---|
| Logic error in function | ✓ | — | — | ✓ |
| Wrong SQL query syntax | — | ✓ | — | ✓ |
| API response missing field | — | — | ✓ | ✓ |
| Service A calls wrong endpoint on B | — | — | — | ✓ |
| Race condition across services | — | — | — | ✓ |
| Load balancer misconfiguration | — | — | — | ✓ |
| Missing environment variable | — | — | — | ✓ |
The E2E Testing Dilemma:
E2E tests provide irreplaceable confidence, but they come with significant costs: they are the slowest tests to run, the most expensive to maintain, and the most prone to flakiness.
The goal isn't to maximize E2E coverage—it's to maximize confidence per E2E test while keeping the total count manageable.
Teams new to testing often invert the pyramid: mostly E2E tests, few unit tests. This 'ice cream cone' pattern leads to slow CI pipelines, flaky builds, and tests that break faster than they can be fixed. A healthy ratio: 70% unit, 20% integration/contract, 10% E2E. Keep E2E tests precious and focused.
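To make the pyramid's shape visible, a small script can compute each layer's share of the suite and flag the inverted 'ice cream cone'. This is a sketch under assumptions: the layer names and counts come from your own repo layout (e.g. counting spec files per directory), and the 70/20/10 split is a guideline, not a hard rule.

```typescript
// Report the shape of the test pyramid from per-layer test counts.
type PyramidReport = { layer: string; count: number; share: number }[];

function pyramidReport(counts: Record<string, number>): PyramidReport {
  const total = Object.values(counts).reduce((a, b) => a + b, 0);
  return Object.entries(counts).map(([layer, count]) => ({
    layer,
    count,
    share: total === 0 ? 0 : Math.round((count / total) * 100),
  }));
}

function isInverted(counts: Record<string, number>): boolean {
  // "Ice cream cone": more E2E tests than unit tests.
  return (counts["e2e"] ?? 0) > (counts["unit"] ?? 0);
}

// A healthy suite: mostly unit, few E2E.
const healthy = { unit: 700, integration: 200, e2e: 100 };
console.log(pyramidReport(healthy)); // shares: unit 70, integration 20, e2e 10
console.log(isInverted(healthy)); // false

// An inverted suite that will produce slow, flaky CI.
console.log(isInverted({ unit: 50, integration: 30, e2e: 200 })); // true
```

Wiring such a check into CI (warn, don't fail) keeps the ratio on the team's radar without blocking merges.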
Effective E2E tests focus on user journeys, not technical verifications. Each test should represent a real scenario that delivers business value when it works—and causes business impact when it doesn't.
Principles for E2E Test Design:
- Test user journeys, not features: 'User can complete checkout' not 'Payment service processes card'
- Focus on critical paths: Test the flows that, if broken, would cause significant business impact
- Keep tests independent: Each test sets up its own data and cleans up after itself
- Use realistic (but controlled) data: Real-world-like data patterns, but deterministic test fixtures
- Assert on business outcomes: 'Order appears in user's order history' not 'OrderCreated event has correct schema'
```typescript
import { test, expect, Page } from "@playwright/test";
// Assumes project test helpers: TestDataFactory, TestMailbox, and the
// TestUser/TestProduct types (see the test architecture section below).

// Well-designed E2E test: Tests a complete user journey
test.describe("Checkout Journey", () => {
  let testUser: TestUser;
  let testProducts: TestProduct[];

  test.beforeAll(async () => {
    // Create isolated test data for this test suite
    testUser = await TestDataFactory.createUser({
      email: `checkout-test-${Date.now()}@test.com`,
      verified: true,
      paymentMethods: [{
        type: "card",
        last4: "4242",
        brand: "visa",
      }],
    });

    testProducts = await TestDataFactory.createProducts([
      { name: "Test Widget", price: 29.99, stock: 100 },
      { name: "Test Gadget", price: 49.99, stock: 50 },
    ]);
  });

  test.afterAll(async () => {
    // Clean up test data
    await TestDataFactory.cleanupUser(testUser.id);
    await TestDataFactory.cleanupProducts(testProducts.map(p => p.id));
  });

  test("complete checkout with saved payment method", async ({ page }) => {
    // ARRANGE: Login
    await loginAs(page, testUser);

    // ACT: Complete the shopping journey
    await page.goto("/products");
    await page.click(`[data-product-id="${testProducts[0].id}"]`);
    await page.click('[data-action="add-to-cart"]');
    await page.click('[data-action="view-cart"]');
    await expect(page.locator('[data-testid="cart-item"]')).toHaveCount(1);

    await page.click('[data-action="proceed-to-checkout"]');

    // Select saved payment method
    await page.click('[data-payment-method="card-4242"]');

    // Confirm order
    await page.click('[data-action="place-order"]');

    // ASSERT: Verify order confirmation
    await expect(page.locator('[data-testid="order-confirmation"]')).toBeVisible();
    await expect(page.locator('[data-testid="order-total"]')).toContainText("29.99");

    // Verify order appears in order history
    await page.goto("/orders");
    const latestOrder = page.locator('[data-testid="order-row"]').first();
    await expect(latestOrder).toContainText("29.99");
    await expect(latestOrder).toContainText("Processing");

    // Verify email was sent (via API or test mailbox)
    const confirmationEmail = await TestMailbox.waitForEmail({
      to: testUser.email,
      subject: /Order Confirmation/,
      timeout: 10000,
    });
    expect(confirmationEmail).toBeDefined();
  });

  test("checkout fails gracefully when payment is declined", async ({ page }) => {
    // Use a test card that will be declined
    const decliningUser = await TestDataFactory.createUser({
      paymentMethods: [{
        type: "card",
        last4: "0002", // Test card that declines
        brand: "visa",
      }],
    });
    await loginAs(page, decliningUser);

    // Add item and proceed to checkout
    await page.goto(`/products/${testProducts[0].id}`);
    await page.click('[data-action="add-to-cart"]');
    await page.click('[data-action="proceed-to-checkout"]');
    await page.click('[data-payment-method="card-0002"]');
    await page.click('[data-action="place-order"]');

    // ASSERT: Appropriate error handling
    await expect(page.locator('[data-testid="payment-error"]')).toBeVisible();
    await expect(page.locator('[data-testid="payment-error"]')).toContainText("declined");

    // Cart should be preserved
    await page.click('[data-action="view-cart"]');
    await expect(page.locator('[data-testid="cart-item"]')).toHaveCount(1);

    // No order should be created
    await page.goto("/orders");
    await expect(page.locator('[data-testid="order-row"]')).toHaveCount(0);

    await TestDataFactory.cleanupUser(decliningUser.id);
  });
});

// Helper functions for readable tests
async function loginAs(page: Page, user: TestUser): Promise<void> {
  await page.goto("/login");
  await page.fill('[data-testid="email-input"]', user.email);
  await page.fill('[data-testid="password-input"]', user.password);
  await page.click('[data-action="login"]');
  await expect(page.locator('[data-testid="user-menu"]')).toBeVisible();
}
```

Use data-testid attributes instead of CSS selectors or text content. CSS classes change for styling reasons. Text changes for UX reasons. Test IDs exist solely for testing and create a stable contract between frontend and test code. This dramatically reduces test maintenance.
E2E tests for microservices require careful architectural decisions about how to structure the test code, manage test data, and interact with the system under test.
Component Layers of E2E Test Architecture:
```typescript
// Layer 1: Test Specifications (What to test)
// These are the actual test files - focused on user intent
// src/e2e/specs/checkout.spec.ts

test("user can complete purchase", async () => {
  const user = await users.createVerified();
  const product = await products.getAvailable();

  await checkoutFlow.addToCart(product);
  await checkoutFlow.proceedToCheckout();
  await checkoutFlow.completePayment(user.paymentMethod);

  await assertions.orderConfirmationDisplayed();
  await assertions.orderInHistory();
});

// Layer 2: Page Objects / Flow Objects (How to interact)
// Encapsulate page interactions - shield tests from UI changes
// src/e2e/flows/checkout-flow.ts

export class CheckoutFlow {
  constructor(private page: Page) {}

  async addToCart(product: Product): Promise<void> {
    await this.page.goto(`/products/${product.id}`);
    await this.page.click('[data-action="add-to-cart"]');
    await expect(
      this.page.locator('[data-testid="cart-added-notification"]')
    ).toBeVisible();
  }

  async proceedToCheckout(): Promise<void> {
    await this.page.click('[data-action="view-cart"]');
    await this.page.click('[data-action="proceed-to-checkout"]');
    await expect(this.page.locator('[data-testid="checkout-form"]')).toBeVisible();
  }

  async completePayment(method: PaymentMethod): Promise<void> {
    await this.page.click(`[data-payment-id="${method.id}"]`);
    await this.page.click('[data-action="place-order"]');
  }
}

// Layer 3: Test Data Factory (What data to use)
// Creates isolated, deterministic test data
// src/e2e/data/test-data-factory.ts

export class TestDataFactory {
  private createdEntities: EntityReference[] = [];

  constructor(private readonly apiClient: AdminApiClient) {}

  async createUser(overrides: Partial<UserData> = {}): Promise<TestUser> {
    const user = await this.apiClient.createUser({
      email: `test-${randomId()}@e2e-tests.local`,
      password: "TestPassword123!",
      verified: true,
      ...overrides,
    });
    this.createdEntities.push({ type: "user", id: user.id });
    return user;
  }

  async createProduct(overrides: Partial<ProductData> = {}): Promise<TestProduct> {
    const product = await this.apiClient.createProduct({
      name: `E2E Test Product ${randomId()}`,
      price: 29.99,
      stock: 100,
      ...overrides,
    });
    this.createdEntities.push({ type: "product", id: product.id });
    return product;
  }

  // Cleanup all entities created during test
  async cleanup(): Promise<void> {
    for (const entity of this.createdEntities.reverse()) {
      await this.apiClient.delete(entity.type, entity.id);
    }
    this.createdEntities = [];
  }
}

// Layer 4: Assertions (What to verify)
// Business-focused assertions with meaningful error messages
// src/e2e/assertions/order-assertions.ts

export class OrderAssertions {
  constructor(private page: Page) {}

  async orderConfirmationDisplayed(): Promise<void> {
    const confirmation = this.page.locator('[data-testid="order-confirmation"]');
    await expect(confirmation).toBeVisible({ timeout: 10000 });
    await expect(confirmation.locator('[data-testid="order-number"]')).not.toBeEmpty();
    await expect(confirmation.locator('[data-testid="estimated-delivery"]')).toBeVisible();
  }

  async orderInHistory(orderId?: string): Promise<void> {
    await this.page.goto("/orders");
    const orderRow = orderId
      ? this.page.locator(`[data-order-id="${orderId}"]`)
      : this.page.locator('[data-testid="order-row"]').first();
    await expect(orderRow).toBeVisible();
    await expect(orderRow.locator('[data-testid="order-status"]')).toHaveText(/Processing|Confirmed/);
  }
}

// Layer 5: API Client (Backdoor operations)
// Direct API access for setup/teardown and assertions
// src/e2e/api/admin-api-client.ts

export class AdminApiClient {
  constructor(
    private baseUrl: string,
    private adminToken: string
  ) {}

  async createUser(data: CreateUserData): Promise<User> {
    const response = await fetch(`${this.baseUrl}/admin/users`, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${this.adminToken}`,
      },
      body: JSON.stringify(data),
    });
    if (!response.ok) {
      throw new Error(`Failed to create user: ${response.status}`);
    }
    return response.json();
  }

  async getOrderById(orderId: string): Promise<Order> {
    const response = await fetch(`${this.baseUrl}/admin/orders/${orderId}`, {
      headers: { "Authorization": `Bearer ${this.adminToken}` },
    });
    return response.json();
  }

  async delete(entityType: string, id: string): Promise<void> {
    await fetch(`${this.baseUrl}/admin/${entityType}/${id}`, {
      method: "DELETE",
      headers: { "Authorization": `Bearer ${this.adminToken}` },
    });
  }
}
```

E2E tests often need to set up data (create users, products) and verify results (check database state) without going through the UI. Use 'backdoor' APIs (admin endpoints, direct database access) for setup and assertion while testing user journeys through the actual UI. This keeps tests focused on user experience while enabling efficient data management.
E2E tests require a complete running system. How you provide that system significantly impacts test reliability, cost, and development velocity. There are several approaches, each with trade-offs.
Environment Strategies:
| Approach | Fidelity | Cost | Isolation | Setup Time |
|---|---|---|---|---|
| Shared staging environment | High | Medium | None (conflicts) | Instant |
| On-demand environment per test run | High | High | Complete | 5-30 minutes |
| Preview environments per PR | High | Medium-High | Per PR | Minutes |
| Local Docker Compose | Medium | Low | Complete | 1-5 minutes |
| Production with feature flags | Highest | Highest | Careful design | N/A |
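Whichever strategy you choose, the test suite itself should stay environment-agnostic. A common approach is to drive the target from an environment variable in the Playwright config — a sketch, assuming the `E2E_BASE_URL` convention used in the CI examples later on this page (the preview URL shown in the usage note is hypothetical):

```typescript
// playwright.config.ts — same tests, many environments.
// E2E_BASE_URL selects the system under test: local Docker Compose by
// default, a per-PR preview environment, or shared staging.
import { defineConfig } from "@playwright/test";

const baseURL = process.env.E2E_BASE_URL ?? "http://localhost"; // local Compose default

export default defineConfig({
  use: {
    baseURL,                    // page.goto("/orders") resolves against this
    trace: "retain-on-failure", // keep traces only for failed tests
  },
  // Remote environments see more network variability, so retry in CI only.
  retries: process.env.CI ? 2 : 0,
});
```

Usage might look like `E2E_BASE_URL=https://pr-1234.preview.example.com npx playwright test` for a preview environment, with no flag needed for local runs.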
```yaml
# Docker Compose for local E2E testing
# docker-compose.e2e.yml
version: '3.8'

services:
  # API Gateway - entry point
  gateway:
    image: traefik:v2.10
    ports:
      - "80:80"
    volumes:
      - ./traefik.yml:/etc/traefik/traefik.yml
    depends_on:
      - order-service
      - user-service
      - product-service

  # Microservices
  order-service:
    build:
      context: ./services/order
      dockerfile: Dockerfile
    environment:
      - DATABASE_URL=postgresql://postgres:postgres@order-db:5432/orders
      - KAFKA_BROKERS=kafka:9092
      - USER_SERVICE_URL=http://user-service:3000
      - PRODUCT_SERVICE_URL=http://product-service:3000
    depends_on:
      - order-db
      - kafka

  user-service:
    build:
      context: ./services/user
      dockerfile: Dockerfile
    environment:
      - DATABASE_URL=postgresql://postgres:postgres@user-db:5432/users
      - REDIS_URL=redis://redis:6379
    depends_on:
      - user-db
      - redis

  product-service:
    build:
      context: ./services/product
      dockerfile: Dockerfile
    environment:
      - DATABASE_URL=postgresql://postgres:postgres@product-db:5432/products
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    depends_on:
      - product-db
      - elasticsearch

  # Databases - each service has its own
  order-db:
    image: postgres:15
    environment:
      POSTGRES_DB: orders
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres

  user-db:
    image: postgres:15
    environment:
      POSTGRES_DB: users
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres

  product-db:
    image: postgres:15
    environment:
      POSTGRES_DB: products
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres

  # Infrastructure
  kafka:
    image: confluentinc/cp-kafka:7.4.0
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      CLUSTER_ID: 'ciWo7IWazngRchmPES6q5A=='
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'

  redis:
    image: redis:7-alpine

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.9.0
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
```

```bash
#!/bin/bash
# Health check and wait script
# scripts/wait-for-services.sh
echo "Waiting for services to be healthy..."

services=("http://localhost/health" "http://order-service:3000/health" "http://user-service:3000/health")

for service in "${services[@]}"; do
  until curl -sf "$service" > /dev/null 2>&1; do
    echo "Waiting for $service..."
    sleep 2
  done
  echo "$service is ready"
done

echo "All services are healthy!"
```

Shared staging environments seem economical but cause constant pain: tests interfere with each other, manual testing conflicts with automation, leftover data causes mysterious failures, and debugging becomes impossible when multiple test runs overlap. Invest in isolated environments—the productivity gains far outweigh the infrastructure costs.
Microservices often process requests asynchronously. A user action might trigger immediate UI feedback, but the underlying processing happens via event-driven workflows. E2E tests must handle this asynchrony without resorting to fragile sleep statements.
Common Async Patterns in Microservices:
```typescript
// BAD: Using arbitrary sleeps
test("order is processed", async ({ page }) => {
  await page.click('[data-action="place-order"]');
  await page.waitForTimeout(10000); // ❌ Fragile, slow, unreliable
  await expect(page.locator('[data-testid="order-status"]')).toHaveText("Confirmed");
});

// GOOD: Polling with smart waiting
test("order is processed", async ({ page }) => {
  await page.click('[data-action="place-order"]');

  // Wait for initial acknowledgment (synchronous)
  await expect(page.locator('[data-testid="order-received"]')).toBeVisible();

  // Extract order ID for polling
  const orderId = await page.locator('[data-testid="order-id"]').textContent();

  // Poll for final status via API (faster than UI polling)
  await waitForCondition(async () => {
    const order = await adminApi.getOrder(orderId!);
    return order.status === "CONFIRMED";
  }, {
    timeout: 30000,
    interval: 1000,
    message: `Order ${orderId} did not reach CONFIRMED status`,
  });

  // Verify UI reflects final state
  await page.reload();
  await expect(page.locator('[data-testid="order-status"]')).toHaveText("Confirmed");
});

// Reusable polling utility
async function waitForCondition(
  condition: () => Promise<boolean>,
  options: {
    timeout: number;
    interval: number;
    message?: string;
  }
): Promise<void> {
  const startTime = Date.now();

  while (Date.now() - startTime < options.timeout) {
    try {
      if (await condition()) {
        return;
      }
    } catch (error) {
      // Condition threw - keep polling
    }
    await new Promise(resolve => setTimeout(resolve, options.interval));
  }

  throw new Error(options.message || "Condition not met within timeout");
}

// Testing webhook/callback flows
test("payment webhook updates order status", async ({ page }) => {
  // Setup: Create order via UI
  await createOrderViaUI(page);
  const orderId = await getCreatedOrderId(page);

  // Action: Simulate webhook from payment provider
  await simulatePaymentWebhook({
    orderId,
    event: "payment.succeeded",
    amount: 2999,
  });

  // Assert: Order status updated
  await waitForCondition(async () => {
    const order = await adminApi.getOrder(orderId);
    return order.status === "PAID";
  }, { timeout: 10000, interval: 500 });

  // Verify user-facing update
  await page.goto(`/orders/${orderId}`);
  await expect(page.locator('[data-testid="payment-status"]')).toHaveText("Paid");
});

// Testing email/notification flows
test("user receives confirmation email", async ({ page }) => {
  const user = await testData.createUserWithEmail("confirm-test@mailtest.local");
  await loginAs(page, user);
  await completeCheckoutFlow(page);

  // Wait for email via test mailbox API
  const email = await testMailbox.waitForEmail({
    to: user.email,
    subjectContains: "Order Confirmation",
    timeout: 30000,
  });

  expect(email).toBeDefined();
  expect(email.body).toContain("Thank you for your order");
  expect(email.body).toContain("Order #");
});

// Test mailbox helper
class TestMailbox {
  constructor(private apiUrl: string) {}

  async waitForEmail(criteria: {
    to: string;
    subjectContains?: string;
    timeout: number;
  }): Promise<Email | undefined> {
    // Poll until a matching email arrives, then return it
    let match: Email | undefined;
    await waitForCondition(
      async () => {
        const emails = await this.getEmails(criteria.to);
        match = emails.find(e =>
          !criteria.subjectContains || e.subject.includes(criteria.subjectContains)
        );
        return match !== undefined;
      },
      { timeout: criteria.timeout, interval: 1000 }
    );
    return match;
  }

  private async getEmails(address: string): Promise<Email[]> {
    const response = await fetch(`${this.apiUrl}/messages?to=${address}`);
    return response.json();
  }
}
```

When waiting for backend processing to complete, poll via API rather than refreshing the UI. API calls are faster (no rendering), more reliable (simpler response format), and provide better error information. Use the UI only for final verification that the UI correctly reflects the backend state.
Flaky tests—tests that pass and fail randomly without code changes—are the nemesis of E2E testing. They erode trust in the test suite, waste developer time investigating false failures, and often lead teams to disable tests entirely. Proactively addressing flakiness sources is essential for sustainable E2E testing.
Common Causes of E2E Test Flakiness:
| Cause | Symptom | Solution |
|---|---|---|
| Race conditions | Tests fail on slow machines | Use explicit waits, never fixed sleeps |
| Shared state | Tests fail when run together | Isolate test data, clean up properly |
| Network variability | Timeouts, connection errors | Increase timeouts, add retries |
| Animation timing | Elements not clickable | Disable animations or wait for idle |
| Date/time sensitivity | Tests fail at certain times | Mock time or use relative dates |
| Third-party services | External service unavailable | Use mocks/stubs for external services |
| Browser state | Cookies/storage from other tests | Clear browser state between tests |
```typescript
// Pattern 1: Explicit waits for elements to be ready
async function clickWhenReady(page: Page, selector: string): Promise<void> {
  const element = page.locator(selector);

  // Wait for element to be visible
  await element.waitFor({ state: "visible" });

  // Wait for element to be enabled
  await expect(element).toBeEnabled();

  // Wait for any animations to complete
  await page.waitForTimeout(100); // Small buffer for animations

  // Scroll into view if needed
  await element.scrollIntoViewIfNeeded();

  // Click with force if standard click fails (covered element)
  try {
    await element.click({ timeout: 5000 });
  } catch (error) {
    await element.click({ force: true });
  }
}

// Pattern 2: Retry wrapper for flaky operations
async function withRetry<T>(
  operation: () => Promise<T>,
  options: { maxAttempts: number; delay: number } = { maxAttempts: 3, delay: 1000 }
): Promise<T> {
  let lastError: Error | undefined;

  for (let attempt = 1; attempt <= options.maxAttempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error as Error;
      console.log(`Attempt ${attempt} failed: ${lastError.message}`);
      if (attempt < options.maxAttempts) {
        await new Promise(resolve => setTimeout(resolve, options.delay));
      }
    }
  }

  throw lastError;
}

// Pattern 3: Disable animations in test environment
// In your app's CSS or global test setup:
//
//   /* test-environment.css */
//   *, *::before, *::after {
//     animation-duration: 0.01ms !important;
//     animation-iteration-count: 1 !important;
//     transition-duration: 0.01ms !important;
//   }
//
// Or in the Playwright config:
//   use: {
//     // Launch browser with reduced motion
//     contextOptions: {
//       reducedMotion: "reduce",
//     },
//   }

// Pattern 4: Isolate test data with unique identifiers
function createTestData(): TestOrder {
  const uniqueId = `${Date.now()}-${Math.random().toString(36).substr(2, 9)}`;
  return {
    customerEmail: `e2e-test-${uniqueId}@test.local`,
    externalReference: `E2E-${uniqueId}`,
    // Data that can be uniquely identified and cleaned up
  };
}

// Pattern 5: Clean browser state between tests
test.beforeEach(async ({ page, context }) => {
  // Clear cookies
  await context.clearCookies();

  // Clear localStorage and sessionStorage
  await page.evaluate(() => {
    localStorage.clear();
    sessionStorage.clear();
  });

  // Clear IndexedDB if used
  await page.evaluate(async () => {
    const databases = await indexedDB.databases();
    for (const db of databases) {
      indexedDB.deleteDatabase(db.name!);
    }
  });
});

// Pattern 6: Mock external services
// (Note: use beforeEach, not beforeAll - the `page` fixture is per-test)
test.beforeEach(async ({ page }) => {
  // Intercept calls to external payment provider
  await page.route("**/api.stripe.com/**", async (route) => {
    if (route.request().url().includes("payment_intents")) {
      await route.fulfill({
        status: 200,
        contentType: "application/json",
        body: JSON.stringify({
          id: "pi_mock_123",
          status: "succeeded",
          amount: 2999,
        }),
      });
    }
  });
});

// Pattern 7: Use test IDs instead of fragile selectors
// Instead of:
await page.click(".btn.btn-primary.checkout"); // ❌ Fragile
await page.click("text=Complete Purchase");    // ❌ Changes with UX copy

// Use:
await page.click('[data-testid="checkout-submit"]'); // ✅ Stable
```

When you identify a flaky test, don't delete it—quarantine it. Move it to a separate suite that runs but doesn't block CI. This preserves the test's value while preventing it from disrupting the team. Fix quarantined tests in dedicated cleanup sessions, then move them back to the main suite.
E2E tests in CI/CD pipelines face unique challenges: they're slow, resource-intensive, and can create deployment bottlenecks. Strategic execution patterns keep E2E tests valuable without blocking developer velocity.
```yaml
# GitHub Actions: Tiered E2E testing strategy
name: E2E Tests

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
  # Scheduled full E2E suite
  schedule:
    - cron: '0 */4 * * *'  # Every 4 hours

jobs:
  # Fast smoke tests on every PR
  smoke-tests:
    if: github.event_name == 'pull_request'
    runs-on: ubuntu-latest
    timeout-minutes: 15
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Start services (lightweight)
        run: |
          docker compose -f docker-compose.e2e-light.yml up -d
          npm run wait-for-services

      - name: Run smoke tests
        run: npx playwright test --project=chromium --grep @smoke
        env:
          E2E_BASE_URL: http://localhost

      - name: Upload failure artifacts
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: smoke-test-failures
          path: |
            test-results/
            playwright-report/

  # Full E2E suite on main branch
  full-e2e:
    if: github.event_name == 'push' && github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    timeout-minutes: 45
    strategy:
      fail-fast: false
      matrix:
        shard: [1, 2, 3, 4]  # Parallel test shards
    steps:
      - uses: actions/checkout@v4

      - name: Setup Node
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Install Playwright browsers
        run: npx playwright install --with-deps

      - name: Start full environment
        run: |
          docker compose -f docker-compose.e2e.yml up -d
          npm run wait-for-services
          npm run db:migrate:e2e
          npm run db:seed:e2e

      - name: Run E2E tests (shard)
        run: npx playwright test --shard=${{ matrix.shard }}/4
        env:
          E2E_BASE_URL: http://localhost
          CI: true

      - name: Upload test results
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: e2e-results-shard-${{ matrix.shard }}
          path: |
            test-results/
            playwright-report/

  # Merge sharded reports
  merge-reports:
    needs: full-e2e
    if: always()
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Download all artifacts
        uses: actions/download-artifact@v4
        with:
          pattern: e2e-results-shard-*
          merge-multiple: true

      - name: Merge Playwright reports
        run: npx playwright merge-reports --reporter=html ./e2e-results-*

      - name: Publish report
        uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./playwright-report
          destination_dir: e2e-report

  # Nightly comprehensive tests (all browsers, edge cases)
  nightly:
    if: github.event_name == 'schedule'
    runs-on: ubuntu-latest
    timeout-minutes: 120
    steps:
      - uses: actions/checkout@v4

      - name: Setup environment
        run: |
          npm ci
          npx playwright install --with-deps
          docker compose -f docker-compose.e2e.yml up -d
          npm run wait-for-services

      - name: Run all tests on all browsers
        run: |
          npx playwright test --project=chromium
          npx playwright test --project=firefox
          npx playwright test --project=webkit
          npx playwright test --project=mobile-chrome

      - name: Notify on failure
        if: failure()
        uses: slackapi/slack-github-action@v1
        with:
          payload: |
            {
              "text": "Nightly E2E tests failed!",
              "blocks": [{
                "type": "section",
                "text": {
                  "type": "mrkdwn",
                  "text": ":x: Nightly E2E tests failed. <${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}|View run>"
                }
              }]
            }
```

E2E tests are the final safety net for microservices architectures—verifying that the complete system delivers user value when everything runs together. Their power comes with significant cost, making strategic design and execution essential.
What's Next:
E2E tests require realistic environments to run against, and managing those environments is an art in itself. The next page explores Test Environments—how to design, provision, and manage environments that support the full spectrum of microservices testing needs, from local development to production verification.
You now understand how to design and implement effective E2E tests for microservices: journey-focused test design, layered test architecture, environment management, async handling, flakiness reduction, and CI/CD integration.