Every client-server interaction is, at its core, a conversation. The client speaks (request), pauses (waiting), and the server responds (response). This simple dialogue—repeated billions of times per second across the internet—is the request-response pattern, the fundamental interaction model of distributed computing.
While the concept seems trivially simple, mastering the request-response pattern requires understanding its mechanics at every layer: from the semantics of individual requests to the protocols that carry them, from the blocking nature of synchronous calls to the complexity of asynchronous alternatives, from the ideal case to the failure modes that plague real systems.
This page will equip you with a deep, rigorous understanding of the request-response pattern—knowledge that will inform every API you design, every client you build, and every distributed system you architect.
By the end of this page, you will understand the anatomy of requests and responses, the difference between synchronous and asynchronous patterns, HTTP as the dominant request-response protocol, failure modes and resilience patterns, and advanced variations including streaming and bidirectional communication.
Before exploring patterns and protocols, we must understand what constitutes a request and a response at a fundamental level. What information do they carry? How are they structured? What semantics do they convey?
The Request: What the Client Needs
A request encapsulates everything the server needs to understand and fulfill the client's intent. Regardless of protocol (HTTP, gRPC, custom TCP), requests share common elements:
```http
# Complete HTTP Request Anatomy

POST /api/v2/orders HTTP/1.1                          # Method + Path + Protocol Version
Host: api.example.com                                 # Target Host
Authorization: Bearer eyJhbGciOiJIUzI1NiIs...         # Authentication
Content-Type: application/json                        # Payload Format
Accept: application/json                              # Preferred Response Format
X-Request-ID: a1b2c3d4-5e6f-7890-abcd-ef1234567890    # Correlation ID
X-Idempotency-Key: order-12345                        # Idempotency Key
Cache-Control: no-cache                               # Cache Directive

{                                                     # Request Body
  "customer_id": "cust_789",
  "items": [
    { "product_id": "prod_123", "quantity": 2 },
    { "product_id": "prod_456", "quantity": 1 }
  ],
  "shipping_address": {
    "street": "123 Main St",
    "city": "San Francisco",
    "zip": "94102"
  },
  "payment_method_id": "pm_abc123"
}
```

The Response: What the Server Returns
A response communicates the outcome of processing the request. It must convey both the result and sufficient metadata for the client to interpret and use that result effectively:
```http
# Complete HTTP Response Anatomy

HTTP/1.1 201 Created                                  # Protocol + Status Code + Reason
Date: Wed, 15 Jan 2025 10:30:00 GMT                   # Response Timestamp
Content-Type: application/json; charset=utf-8         # Payload Format
Content-Length: 512                                   # Payload Size
Location: /api/v2/orders/ord_999                      # Created Resource URI
X-Request-ID: a1b2c3d4-5e6f-7890-abcd-ef1234567890    # Correlation ID (echoed)
X-RateLimit-Limit: 1000                               # Rate Limit Info
X-RateLimit-Remaining: 999
Cache-Control: private, max-age=0                     # Caching Directive
ETag: "33a64df551425fcc55e4d42a148795d9f25f89d4"      # Version Identifier

{                                                     # Response Body
  "id": "ord_999",
  "status": "pending",
  "customer_id": "cust_789",
  "items": [
    { "product_id": "prod_123", "quantity": 2, "unit_price": 29.99 },
    { "product_id": "prod_456", "quantity": 1, "unit_price": 49.99 }
  ],
  "subtotal": 109.97,
  "tax": 9.90,
  "total": 119.87,
  "created_at": "2025-01-15T10:30:00Z",
  "_links": {
    "self": { "href": "/api/v2/orders/ord_999" },
    "customer": { "href": "/api/v2/customers/cust_789" },
    "cancel": { "href": "/api/v2/orders/ord_999/cancel", "method": "POST" }
  }
}
```

The structure of requests and responses forms a contract between client and server. Changing this contract (adding required fields, changing field types, altering status code semantics) can break clients. API versioning, backward compatibility, and schema evolution strategies all exist to manage this contract over time.
The simplest and most intuitive form of request-response is synchronous: the client sends a request, blocks (waits), and resumes only when the response arrives. This mirrors face-to-face conversation—you ask a question and wait for the answer before continuing.
The Mechanics of Synchronous Communication:
This pattern is intuitive because it maps to how we naturally think about interactions. 'Call this API and use the result' is straightforward to code and reason about.
```typescript
// Synchronous Request-Response in Practice
// The code 'blocks' at each await until the response arrives

async function createOrderSynchronously(orderData: OrderInput): Promise<Order> {
  // Step 1: Validate customer exists (blocks until response)
  const customer = await fetch(`/api/customers/${orderData.customerId}`)
    .then(res => {
      if (!res.ok) throw new Error('Customer not found');
      return res.json();
    });

  // Step 2: Check inventory for all items (blocks until response)
  const inventoryCheck = await fetch('/api/inventory/check', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ items: orderData.items })
  }).then(res => res.json());

  if (!inventoryCheck.available) {
    throw new Error('Insufficient inventory');
  }

  // Step 3: Process payment (blocks until response)
  const payment = await fetch('/api/payments', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      customerId: orderData.customerId,
      amount: calculateTotal(orderData.items),
      paymentMethodId: orderData.paymentMethodId
    })
  }).then(res => res.json());

  // Step 4: Create the order (blocks until response)
  const order = await fetch('/api/orders', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ ...orderData, paymentId: payment.id })
  }).then(res => res.json());

  return order;
}

// Total time = validate + inventory + payment + order
// Each step must complete before the next begins
```

Benefits of Synchronous Request-Response:

- Simple to write and reason about: the code reads top to bottom, mirroring the actual flow
- Errors surface immediately at the call site, where context is available to handle them
- The result is in hand as soon as the call returns; no correlation IDs, callbacks, or status checks needed
Drawbacks of Synchronous Request-Response:

- The client is blocked while waiting: threads, connections, and memory are held idle
- Latency accumulates across every call in a chain
- A slow or failed downstream service stalls or fails the entire chain
- Client and server must both be available at the same moment (temporal coupling)
In synchronous chains, performance degrades multiplicatively. If you have 5 services, each with 99.9% availability, your chain has only 99.5% availability (0.999^5). If each service adds 50ms latency, your total is 250ms. Synchronous is simple but scales poorly.
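These figures are easy to verify with a quick sketch (assuming independent failures and strictly sequential calls):

```typescript
// Chained availability: every service in a synchronous chain must succeed,
// so per-service availabilities multiply (assuming independent failures).
function chainAvailability(perService: number, services: number): number {
  return Math.pow(perService, services);
}

// Chained latency: sequential calls simply add up.
function chainLatencyMs(perCallMs: number, services: number): number {
  return perCallMs * services;
}

console.log(chainAvailability(0.999, 5).toFixed(4)); // "0.9950"
console.log(chainLatencyMs(50, 5));                  // 250
```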
To overcome synchronous limitations, systems employ asynchronous communication—patterns where the client doesn't block waiting for a response. Instead, work is decoupled: the client initiates an operation and continues, receiving results through callbacks, polling, or separate channels.
Pattern 1: Fire-and-Forget
The client sends a request and doesn't wait for any response. This is appropriate for notifications, logging, and non-critical updates where acknowledgment isn't required.
```typescript
// Fire-and-Forget Pattern
// Client sends and immediately continues; no response expected

async function logUserActivity(activity: ActivityData): Promise<void> {
  // Send to analytics service; don't await the result
  fetch('/api/analytics/events', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(activity),
    // Enable async/background sending
    keepalive: true
  }).catch(err => {
    // Log locally but don't fail the user's operation
    console.error('Analytics failed:', err);
  });
  // Immediately return; user operation continues regardless
}

// Usage: User clicks button -> log activity -> proceed immediately
onClick={() => {
  logUserActivity({ type: 'button_click', button: 'submit' });
  submitForm(); // Doesn't wait for analytics
}}
```

Pattern 2: Callback / Webhook
The client provides a callback URL where the server will send results when processing completes. The client is free to do other work in the meantime.
This pattern is common for:

- Long-running jobs (video transcoding, report generation)
- Payment provider confirmations
- Third-party integrations that notify your system when events occur
```typescript
// Callback/Webhook Pattern
// Client provides a callback URL; server notifies when complete

// Client initiates a long-running operation
async function startVideoTranscode(videoId: string): Promise<{ jobId: string }> {
  const response = await fetch('/api/transcode', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      videoId,
      // Callback URL where server will POST results
      callbackUrl: 'https://myapp.com/webhooks/transcode-complete',
      // Include metadata to correlate the callback
      callbackMetadata: { videoId, requestedAt: new Date().toISOString() }
    })
  });

  // Server returns immediately with job ID
  return response.json(); // { jobId: "job_12345" }
}

// Webhook handler (separate endpoint, called later by transcode service)
app.post('/webhooks/transcode-complete', async (req, res) => {
  const { jobId, status, outputUrl, metadata } = req.body;

  if (status === 'success') {
    await updateVideoRecord(metadata.videoId, { transcodedUrl: outputUrl });
    await notifyUser(metadata.videoId, 'Your video is ready!');
  } else {
    await handleTranscodeFailure(metadata.videoId, req.body.error);
  }

  res.status(200).send('OK'); // Acknowledge receipt
});
```

Pattern 3: Polling
The client periodically checks the server for status updates. Simpler to implement than callbacks (no need for the client to expose endpoints) but less efficient (wasted requests when no update is available).
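A minimal polling loop might look like the sketch below. The `/api/jobs/:id` endpoint, the response shape, and the delay values are hypothetical; the exponential backoff reduces wasted requests while a job is still pending:

```typescript
// Polling Pattern (sketch; endpoint and response shape are hypothetical)
// Client checks job status repeatedly, backing off between attempts.

function nextDelayMs(attempt: number, baseMs = 1000, maxMs = 30000): number {
  // Exponential backoff: 1s, 2s, 4s, ... capped at maxMs
  return Math.min(baseMs * Math.pow(2, attempt), maxMs);
}

async function pollJob(jobId: string, maxAttempts = 10): Promise<unknown> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const res = await fetch(`/api/jobs/${jobId}`);
    const job = await res.json();

    if (job.status === 'completed') return job.result;
    if (job.status === 'failed') throw new Error(job.error);

    // Still pending: wait before asking again
    await new Promise(r => setTimeout(r, nextDelayMs(attempt)));
  }
  throw new Error(`Job ${jobId} did not complete in time`);
}
```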
Pattern 4: Message Queues
The client publishes a message to a queue; consumers process messages asynchronously. Results may be returned via response queues, callbacks, or status APIs. This pattern provides durability, load leveling, and temporal decoupling.
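The decoupling can be illustrated with an in-process sketch. A real system would use a broker such as RabbitMQ, SQS, or Kafka, but the producer-side behavior is the same: publish returns immediately, regardless of when consumption happens:

```typescript
// Message Queue Pattern (in-process sketch; real systems use a broker)
// Producer and consumer are temporally decoupled: the producer returns
// as soon as the message is enqueued, not when it is processed.

type Handler<T> = (msg: T) => Promise<void>;

class SimpleQueue<T> {
  private messages: T[] = [];
  private handler: Handler<T> | null = null;
  private draining = false;

  publish(msg: T): void {
    this.messages.push(msg);
    void this.drain(); // processing happens later; the caller doesn't wait
  }

  subscribe(handler: Handler<T>): void {
    this.handler = handler;
    void this.drain();
  }

  private async drain(): Promise<void> {
    if (this.draining) return;
    this.draining = true;
    while (this.handler && this.messages.length > 0) {
      const msg = this.messages.shift()!;
      await this.handler(msg); // load leveling: one message at a time
    }
    this.draining = false;
  }
}
```

If no consumer is attached yet, published messages simply wait in the queue; that waiting is the temporal decoupling the pattern provides.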
Use synchronous when users expect immediate feedback (login, search, form validation). Use asynchronous when processing is slow, non-blocking improves UX, or reliability matters more than instant response (file uploads, payments, order processing).
While many protocols implement request-response (gRPC, GraphQL over HTTP, proprietary protocols), HTTP (Hypertext Transfer Protocol) is overwhelmingly dominant. Understanding HTTP is essential for any system design involving web clients, APIs, or microservices.
HTTP Methods: Semantic Operations
HTTP defines methods that convey the intended operation. Proper use of methods enables caching, safe retries, and clear API semantics:
| Method | Semantics | Safe? | Idempotent? | Request Body? | Typical Use |
|---|---|---|---|---|---|
| GET | Retrieve a resource | Yes | Yes | No | Fetching data, queries |
| POST | Create or trigger action | No | No | Yes | Creating resources, form submission |
| PUT | Replace a resource | No | Yes | Yes | Full resource update |
| PATCH | Partial update | No | No* | Yes | Partial resource update |
| DELETE | Remove a resource | No | Yes | Rarely | Deleting resources |
| HEAD | GET without body | Yes | Yes | No | Checking if resource exists |
| OPTIONS | Query supported methods | Yes | Yes | No | CORS preflight, API discovery |
Safe methods don't modify state; clients can call them freely without side effects. Idempotent methods produce the same result regardless of how many times they're called; clients can safely retry on failures.
*PATCH idempotency depends on implementation—some PATCH operations are idempotent, others aren't.
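The distinction is easiest to see with two hypothetical patch operations: a "set" patch converges to the same state no matter how many times it is applied, while an "increment" patch does not:

```typescript
// Idempotent vs. non-idempotent PATCH semantics (illustrative sketch)

interface Account { balance: number; nickname: string; }

// Idempotent: applying the patch once or N times yields the same state
function applySetNickname(acc: Account, nickname: string): Account {
  return { ...acc, nickname };
}

// NOT idempotent: every application moves the state again
function applyIncrementBalance(acc: Account, amount: number): Account {
  return { ...acc, balance: acc.balance + amount };
}

const start: Account = { balance: 100, nickname: 'old' };

const setOnce = applySetNickname(start, 'new');
const setTwice = applySetNickname(setOnce, 'new');   // same state as setOnce

const incOnce = applyIncrementBalance(start, 10);    // balance: 110
const incTwice = applyIncrementBalance(incOnce, 10); // balance: 120 — a retry changed the outcome
```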
HTTP Status Codes: Communicating Outcomes
Status codes are the primary mechanism for conveying request outcomes. They're organized into five classes:
| Class | Range | Meaning | Client Action | Examples |
|---|---|---|---|---|
| 1xx | 100-199 | Informational | Continue processing | 100 Continue, 101 Switching Protocols |
| 2xx | 200-299 | Success | Process response | 200 OK, 201 Created, 204 No Content |
| 3xx | 300-399 | Redirection | Follow redirect | 301 Moved Permanently, 304 Not Modified |
| 4xx | 400-499 | Client Error | Fix request and retry | 400 Bad Request, 401 Unauthorized, 404 Not Found |
| 5xx | 500-599 | Server Error | Retry later or escalate | 500 Internal Error, 502 Bad Gateway, 503 Service Unavailable |
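A retry decision combines both tables: the status class must suggest a transient problem, and the operation must be safe to execute again. One possible helper, as a sketch:

```typescript
// Should a failed request be retried automatically? (sketch)
// Combines method idempotency with status-code class.

const IDEMPOTENT_METHODS = new Set(['GET', 'HEAD', 'PUT', 'DELETE', 'OPTIONS']);

function isSafeToRetry(method: string, status: number, hasIdempotencyKey = false): boolean {
  // 4xx: the request itself is wrong; resending the same request won't help
  // (429 Too Many Requests is the exception: retry after backing off)
  if (status >= 400 && status < 500) return status === 429;

  // 5xx: transient server trouble, but only retry if a second
  // execution can't duplicate the operation's effect
  if (status >= 500) {
    return IDEMPOTENT_METHODS.has(method.toUpperCase()) || hasIdempotencyKey;
  }

  return false; // 2xx/3xx: nothing to retry
}
```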
HTTP Headers: The Metadata Layer
Headers carry metadata that governs behavior beyond the simple request/response cycle:
Authentication & Authorization: Authorization: Bearer <token> authenticates requests.
Content Negotiation: Accept: application/json (what client wants), Content-Type: application/json (what's being sent).
Caching: Cache-Control, ETag, If-None-Match, If-Modified-Since enable sophisticated caching.
CORS: Origin, Access-Control-Allow-Origin enable cross-origin requests in browsers.
Observability: X-Request-ID, X-Correlation-ID enable distributed tracing.
Rate Limiting: X-RateLimit-* headers communicate rate limit status.
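Several of these headers work together in conditional requests. Below is a sketch of an ETag-aware client cache; the fetch function is injectable purely for illustration, and the caching policy is deliberately minimal:

```typescript
// Conditional GET with ETag / If-None-Match (client-side sketch)
// The fetch function is injectable for illustration/testing.

type FetchLike = (url: string, init?: any) => Promise<any>;

interface CachedEntry { etag: string; body: unknown; }
const etagCache = new Map<string, CachedEntry>();

async function cachedGet(url: string, doFetch: FetchLike = fetch): Promise<unknown> {
  const cached = etagCache.get(url);
  const headers: Record<string, string> = {};
  if (cached) headers['If-None-Match'] = cached.etag; // "send the body only if it changed"

  const res = await doFetch(url, { headers });

  if (res.status === 304 && cached) {
    return cached.body; // 304 Not Modified: reuse the cached representation
  }

  const body = await res.json();
  const etag = res.headers.get('ETag');
  if (etag) etagCache.set(url, { etag, body });
  return body;
}
```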
HTTP has evolved significantly. HTTP/1.1 uses text-based headers and handles one request at a time per connection (persistent connections are the default, but responses are serialized, causing head-of-line blocking). HTTP/2 introduces binary framing, multiplexing (multiple concurrent requests over one connection), header compression, and server push. HTTP/3 replaces TCP with QUIC for reduced latency and better handling of packet loss. The request-response semantics remain, but performance improves dramatically.
In distributed systems, failure is not an exception—it's the norm. Understanding how request-response interactions fail is essential for building resilient systems. Failures occur at every layer: network, client, server, and application.
Common Failure Modes:
| Failure Type | What Happened | Did Server Process? | Safe to Retry? |
|---|---|---|---|
| Connection refused | Server not listening or unreachable | No | Yes, immediately or after delay |
| DNS resolution failed | Cannot resolve server hostname | No | Yes, after DNS cache expires |
| Connection timeout | TCP handshake didn't complete | Unlikely | Yes |
| Request timeout | Request sent but response not received | Unknown | Depends on idempotency |
| Read timeout | Response partially received | Yes, partially or fully | Depends on idempotency |
| 4xx Client Error | Request rejected by server | No side effects (usually) | Fix request first |
| 5xx Server Error | Server encountered an error | Unknown | Depends on idempotency |
The Ambiguity Problem:
The most dangerous failure mode is ambiguous failure—the client doesn't know whether the server processed the request. Consider: a client sends a request to create an order, and the connection times out before any response arrives. Did the server receive and process the request, or did it never arrive at all?
If the client retries and the operation isn't idempotent, they might create duplicate orders. If they don't retry, they might fail to create the order. This uncertainty is fundamental to distributed systems.
This is a manifestation of the Two Generals Problem: in a system with unreliable communication, there's no protocol that guarantees both parties agree on state. You can only use patterns (idempotency, exactly-once delivery, reconciliation) to manage the uncertainty, never eliminate it.
Resilience Patterns for Request-Response:

- Timeouts: bound how long any request is allowed to wait
- Retries with exponential backoff and jitter: recover from transient failures without stampeding the server
- Idempotency keys: make retries safe for operations that aren't naturally idempotent
- Circuit breakers: stop calling a failing dependency and fail fast until it recovers
- Fallbacks and graceful degradation: return cached or partial results when a dependency is down
While request-response is foundational, many use cases require alternatives: continuous data streams, server-initiated messages, or bidirectional communication. Modern protocols support these patterns while building on request-response foundations.
Server-Sent Events (SSE):
A one-way channel from server to client over HTTP. The client makes a request, and the server holds the connection open, sending events as they occur. Ideal for real-time updates (stock prices, notifications, activity feeds).
```typescript
// Server-Sent Events: Server pushes data to client

// Server implementation (Node.js/Express)
app.get('/events/notifications', (req, res) => {
  // Set headers for SSE
  res.setHeader('Content-Type', 'text/event-stream');
  res.setHeader('Cache-Control', 'no-cache');
  res.setHeader('Connection', 'keep-alive');

  // Send events as they occur
  const sendNotification = (notification: Notification) => {
    res.write(`event: notification\n`);
    res.write(`data: ${JSON.stringify(notification)}\n\n`);
  };

  // Subscribe to notification stream
  const unsubscribe = notificationService.subscribe(
    req.user.id,
    sendNotification
  );

  // Clean up on disconnect
  req.on('close', () => {
    unsubscribe();
    res.end();
  });
});

// Client implementation (Browser)
const eventSource = new EventSource('/events/notifications');

eventSource.addEventListener('notification', (event) => {
  const notification = JSON.parse(event.data);
  displayNotification(notification);
});

eventSource.addEventListener('error', (event) => {
  console.error('SSE connection error:', event);
  // Browser automatically reconnects
});
```

WebSocket:
A full-duplex, bidirectional protocol over a single TCP connection. After an HTTP handshake upgrades the connection, both client and server can send messages at any time. Ideal for chat, gaming, collaborative editing, and any scenario requiring low-latency bidirectional communication.
gRPC Streaming:
gRPC (built on HTTP/2) supports four interaction patterns: unary (one request, one response), server streaming (one request, a stream of responses), client streaming (a stream of requests, one response), and bidirectional streaming (both sides stream independently over the same connection).

The table below compares the communication patterns discussed in this section:
| Pattern | Direction | Connection | Use Cases |
|---|---|---|---|
| HTTP Request-Response | Client → Server → Client | Short-lived | CRUD operations, APIs |
| HTTP Long Polling | Client → Server → Client | Held open | Real-time updates (legacy) |
| Server-Sent Events | Server → Client | Persistent one-way | Notifications, feeds, updates |
| WebSocket | Bidirectional | Persistent full-duplex | Chat, gaming, collaboration |
| gRPC Streaming | Configurable | Persistent (HTTP/2) | Microservices, real-time data |
Even streaming and push patterns typically begin with a request-response interaction: the client requests to establish a connection, and the server responds by upgrading the protocol or beginning the stream. Understanding request-response deeply makes these advanced patterns easy to grasp.
Designing effective request-response interactions requires adhering to principles that promote reliability, maintainability, and user experience. These principles apply regardless of the specific protocol or technology stack.
Principle 1: Make Operations Idempotent Where Possible
Client retries are inevitable in distributed systems. Design operations so that executing them multiple times produces the same result as executing once. This requires:

- Idempotency keys on mutating requests, generated by the client and deduplicated by the server
- Server-side storage of processed keys (with a retention window) so a retried request returns the original result
- Preferring naturally idempotent semantics (PUT "set the value to X") over imperative ones (POST "add X") where the domain allows
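On the client side, the key discipline is generating the idempotency key once, before the first attempt, and reusing it on every retry so the server can deduplicate. A sketch (the header name follows the request-anatomy example earlier on this page; the send function is injectable for illustration):

```typescript
// Retry with a stable idempotency key (sketch)
// The key is generated ONCE, so every retry of this logical operation
// carries the same key and the server can deduplicate.

import { randomUUID } from 'node:crypto';

type SendFn = (body: unknown, headers: Record<string, string>) => Promise<{ ok: boolean }>;

async function createOrderWithRetry(
  order: unknown,
  send: SendFn,
  maxAttempts = 3
): Promise<void> {
  const idempotencyKey = randomUUID(); // stable across all retries

  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const res = await send(order, {
        'Content-Type': 'application/json',
        'X-Idempotency-Key': idempotencyKey,
      });
      if (res.ok) return;
    } catch {
      // network error: fall through and retry
    }
    await new Promise(r => setTimeout(r, 100 * 2 ** attempt)); // backoff
  }
  throw new Error('Order creation failed after retries');
}
```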
Principle 2: Embrace Timeouts at Every Layer
No request should wait forever. Configure timeouts at:

- The client (per-request deadlines so callers and UIs never hang)
- The connection (TCP connect and TLS handshake timeouts)
- The server (handler deadlines so slow work is shed under load)
- Downstream dependencies (database queries, calls to other services)
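At the client layer, a deadline can be imposed on any promise, and AbortController can actually cancel an in-flight fetch rather than merely abandoning it. A sketch:

```typescript
// Client-side timeouts: no request should wait forever (sketch)

class TimeoutError extends Error {}

// Bound any promise with a deadline
function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new TimeoutError(`timed out after ${ms}ms`)), ms);
  });
  // Whichever settles first wins; clearing the timer prevents a stray rejection
  return Promise.race([work, deadline]).finally(() => clearTimeout(timer));
}

// With fetch, AbortController cancels the request itself
async function fetchWithTimeout(url: string, ms: number) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    return await fetch(url, { signal: controller.signal });
  } finally {
    clearTimeout(timer);
  }
}
```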
Principle 3: Minimize Request Chattiness
Every request incurs overhead: network latency, serialization, authentication, logging. Reduce chattiness by:

- Batching operations (e.g., POST /api/orders/batch vs. multiple POST /api/orders calls)
- Designing coarser-grained endpoints that return related data together
- Caching responses that change infrequently

Principle 4: Use Appropriate HTTP Semantics
Semantically correct API design enables:

- Intermediary caching (GET responses can be cached; POST responses generally cannot)
- Safe automatic retries (idempotent methods can be retried by clients and proxies)
- Meaningful monitoring (status code classes map directly onto dashboards and alerts)
- Predictable behavior for browsers, proxies, and tooling
Developers sometimes work around HTTP semantics (using POST for all operations, ignoring status codes, putting everything in headers). This creates systems that don't interoperate well with browsers, proxies, caches, and monitoring tools. Embrace HTTP's design; it encodes decades of distributed systems wisdom.
We've explored the request-response pattern in depth—from its basic mechanics to failure modes to advanced variations. Let's consolidate the key takeaways:

- Requests and responses form a contract; their structure and semantics matter at every layer
- Synchronous calls are simple but couple availability and accumulate latency; asynchronous patterns (fire-and-forget, webhooks, polling, queues) trade simplicity for resilience and scale
- HTTP's methods, status codes, and headers encode decades of distributed systems wisdom—use them as designed
- Ambiguous failures are unavoidable; timeouts, retries, and idempotency manage the uncertainty rather than eliminate it
- Streaming and bidirectional patterns (SSE, WebSocket, gRPC streaming) build on request-response foundations
What's Next:
Now that we understand the fundamental interaction pattern, we'll examine the diversity of clients in client-server systems. From web browsers to mobile apps to API consumers, each client type brings unique characteristics, constraints, and design considerations.
You now have a comprehensive understanding of the request-response pattern: its anatomy, synchronous and asynchronous variations, HTTP as the dominant protocol, failure modes, streaming alternatives, and design principles. This knowledge will inform every API you design and every distributed system you architect.