If one-to-many messaging is the engine of pub-sub, then topics and subscriptions are the road network—the infrastructure that determines where events flow, how they're organized, and who receives them.
Poorly designed topic structures lead to chaos: events end up in wrong places, consumers process irrelevant messages, and the system becomes impossible to reason about. Well-designed topic hierarchies create clarity: events have obvious homes, consumers receive precisely what they need, and the entire system is self-documenting.
This page explores both sides of this fundamental design challenge. We'll examine how to think about topic design, the different subscription models available, and the semantic guarantees that govern message delivery.
By the end of this page, you will understand topic naming conventions and hierarchies, the difference between exclusive and shared subscriptions, how to configure acknowledgment and dead-letter handling, and the lifecycle management practices that keep pub-sub systems maintainable.
What is a Topic?

A topic is a named channel or category to which publishers send messages. Topics are the primary organization mechanism in pub-sub systems, providing logical grouping for related events.
Depending on the implementation, a topic might be:

- A durable, partitioned log (Kafka)
- An exchange that routes messages to bound queues (RabbitMQ topic exchanges)
- A simple named channel identified by a string (e.g., `orders`, `user-events.signups`, `analytics.pageviews`)

Regardless of implementation, topics serve the same conceptual purpose: grouping events by domain or type to enable organized fan-out to interested consumers.

Topic vs Queue: A Clarification
Topics and queues serve different purposes, though some systems conflate them:
| Concept | Purpose | Delivery Model |
|---|---|---|
| Topic | Organize events by category | One-to-many (fan-out) |
| Queue | Buffer work for processing | One-to-one (competing consumers) |
In Kafka, this distinction is subtle because topics ARE logs, and consumer groups create queue-like behavior within topics. In Google Pub/Sub, topics fan out, and each subscription acts as a queue. Understanding your platform's model prevents architectural confusion.
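The topic-versus-queue distinction can be made concrete with a toy in-memory model (purely illustrative, not a real broker client): every subscription receives a copy of each message, while consumers sharing one subscription compete for them.

```typescript
// Hypothetical in-memory model contrasting topic fan-out with
// queue-style competing consumers. Names are illustrative.
type Handler = (msg: string) => void;

class MiniTopic {
  private subscriptions = new Map<string, Handler[]>();
  private cursor = new Map<string, number>(); // round-robin position per subscription

  subscribe(subscription: string, handler: Handler): void {
    const handlers = this.subscriptions.get(subscription) ?? [];
    handlers.push(handler);
    this.subscriptions.set(subscription, handlers);
  }

  publish(msg: string): void {
    // Fan-out: EVERY subscription receives a copy of the message...
    for (const [name, handlers] of this.subscriptions) {
      // ...but WITHIN a subscription, consumers compete (one gets it).
      const i = this.cursor.get(name) ?? 0;
      handlers[i % handlers.length](msg);
      this.cursor.set(name, i + 1);
    }
  }
}

// Usage: 'audit' sees all four events; the two consumers sharing
// 'billing' split the stream between them (two each).
const topic = new MiniTopic();
const audit: string[] = [];
const billingA: string[] = [];
const billingB: string[] = [];
topic.subscribe('audit', (m) => audit.push(m));
topic.subscribe('billing', (m) => billingA.push(m));
topic.subscribe('billing', (m) => billingB.push(m));
['e1', 'e2', 'e3', 'e4'].forEach((m) => topic.publish(m));
```

This mirrors the Google Pub/Sub model described above: each subscription acts as its own queue over the topic's stream.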
Topic names are the primary interface developers interact with. Good naming conventions make systems self-documenting; poor names create confusion and coupling. Establishing naming standards early prevents costly migrations later.
Guidelines for naming topics:

- Use descriptive, event-oriented names (`orders.created`, `orders.shipped`, `inventory.updated`)
- Be specific enough to be meaningful (`user.signup.completed` vs just `user`)
- Prefix by environment when clusters are shared (`prod.orders.created`, `staging.orders.created`)
- Encode versions only for breaking schema changes (`orders.v2.created`)
- Spell out full words (`payment-processed`), not abbreviations (`pmt-proc`)
- Group related events so wildcards work (`analytics.*` captures all analytics events)

| Pattern | Example | Use Case |
|---|---|---|
| {domain}.{entity}.{action} | orders.order.created | Standard event sourcing pattern |
| {env}.{domain}.{event} | prod.payments.payment-completed | Multi-environment clusters |
| {team}.{service}.{event} | platform.auth.session-started | Team ownership model |
| {aggregate}.{version}.{event} | customer.v2.address-changed | Versioned event streams (use sparingly) |
| {region}.{domain}.{event} | us-east.inventory.stock-updated | Geo-distributed systems |
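A small helper can enforce a convention like `{domain}.{entity}.{action}` from the table above at publish time. The segment rules here (lowercase, hyphenated full words) are an assumption matching the guidelines, not a platform requirement.

```typescript
// Sketch: build a topic name from the {domain}.{entity}.{action} pattern,
// rejecting segments that violate the naming guidelines.
const SEGMENT = /^[a-z][a-z0-9-]*$/; // lowercase words, optional hyphens/digits

function topicName(domain: string, entity: string, action: string): string {
  for (const segment of [domain, entity, action]) {
    if (!SEGMENT.test(segment)) {
      throw new Error(`Invalid topic segment: "${segment}"`);
    }
  }
  return `${domain}.${entity}.${action}`;
}

topicName('orders', 'order', 'created');   // "orders.order.created"
// topicName('Orders', 'pmt', 'proc!');    // throws: violates the convention
```

Centralizing name construction like this keeps every service on the same convention and turns naming drift into a build-time error rather than a migration later.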
Anti-patterns to avoid:

- Generic names (`events`, `messages`, `data`): impossible to find or understand.
- Technology in names (`kafka-orders`, `pubsub-events`): couples topics to a specific implementation.
- Consumer names in topics (`orders-for-analytics`): creates implicit coupling to specific consumers.
Many pub-sub systems support hierarchical topic structures with wildcard subscriptions, enabling flexible routing patterns without enumerating every topic explicitly.
Hierarchical Structure
Topics organized in a tree-like hierarchy:
ecommerce/
├── orders/
│ ├── created
│ ├── updated
│ ├── shipped
│ └── cancelled
├── inventory/
│ ├── stock-updated
│ └── reorder-triggered
└── payments/
├── initiated
├── completed
└── failed
Wildcard Subscriptions (where supported)
Wildcards let subscribers match multiple topics:
| Wildcard | Meaning | Example | Matches |
|---|---|---|---|
| `*` (single-level) | Any single segment | `orders.*` | `orders.created`, `orders.shipped` |
| `#` or `>` (multi-level) | Any remaining path | `ecommerce.#` | `ecommerce.orders.created`, `ecommerce.payments.failed` |
This enables powerful patterns like:
- `#` (all events)
- `*.created` (all creation events)
- `orders.*` (all order events)

RabbitMQ supports wildcards natively with topic exchanges. Kafka does not support wildcard subscriptions; consumers specify exact topic names (though Kafka Streams supports pattern matching). Google Pub/Sub requires explicit subscriptions per topic. Design your hierarchy based on your platform's capabilities.
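The wildcard semantics in the table above can be captured in a few lines. This sketch assumes dot-separated segments (RabbitMQ-style routing keys), with `*` matching exactly one segment and `#` matching any remaining path.

```typescript
// Minimal matcher for single-level (*) and multi-level (#) wildcards
// over dot-separated topic names.
function matchesTopic(pattern: string, topic: string): boolean {
  const p = pattern.split('.');
  const t = topic.split('.');
  for (let i = 0; i < p.length; i++) {
    if (p[i] === '#') return true;        // '#' swallows everything that remains
    if (i >= t.length) return false;      // topic ran out of segments
    if (p[i] !== '*' && p[i] !== t[i]) return false;
  }
  return p.length === t.length;           // no unmatched trailing segments
}

matchesTopic('orders.*', 'orders.created');               // true
matchesTopic('orders.*', 'orders.order.created');         // false: '*' is one segment
matchesTopic('ecommerce.#', 'ecommerce.payments.failed'); // true
```

Real brokers implement their own (sometimes subtly different) matching rules, so treat this as a mental model rather than a spec.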
A subscription is the binding between a topic and a consumer (or consumer group). It defines who receives messages from a topic and how they receive them.
Subscriptions are where pub-sub systems provide control over delivery semantics, acknowledgment behavior, and failure handling.
Subscriptions are typically named for their consumer and topic (e.g., `portfolio-service-trades-subscription`), which makes ownership obvious at a glance.
```typescript
// Example: Google Pub/Sub subscription configuration
import { PubSub } from '@google-cloud/pubsub';

const pubsub = new PubSub();

async function createSubscription(): Promise<void> {
  const [subscription] = await pubsub.topic('trades').createSubscription(
    'portfolio-service-trades-sub',
    {
      // Acknowledgment deadline: 30 seconds to process
      ackDeadlineSeconds: 30,

      // Retry policy for failed deliveries
      retryPolicy: {
        minimumBackoff: { seconds: 10 },
        maximumBackoff: { seconds: 600 }, // 10 minutes max
      },

      // Dead letter queue for messages that fail permanently
      deadLetterPolicy: {
        deadLetterTopic: 'projects/my-project/topics/trades-dlq',
        maxDeliveryAttempts: 5,
      },

      // Message retention (24 hours if not acked, independent of topic)
      messageRetentionDuration: { seconds: 86400 },

      // Filter to only receive BUY orders
      filter: 'attributes.side = "BUY"',

      // Enable message ordering (requires ordering key on publish)
      enableMessageOrdering: true,
    }
  );

  console.log(`Subscription ${subscription.name} created`);
}
```

How messages are distributed to consumers within a subscription determines whether you get pub-sub (fan-out) or queue (competing consumer) behavior. Most systems support both, controlled by the subscription type.
Exclusive Subscription: Each subscription receives all messages, and only one consumer can be active per subscription at a time.
Behavior:

- Only one consumer may attach to the subscription; a second consumer using the same subscription name is rejected.
- That single consumer receives the topic's complete message stream, in order.
- If the consumer disconnects, delivery pauses until it (or a replacement) reattaches.

Use Cases:

- Stateful processing that needs the complete stream (e.g., maintaining portfolio positions).
- Workloads where strict ordering matters more than horizontal scaling.
Example Architecture:
Topic: `trades`
- Subscription `portfolio-service-trades` → Portfolio Service (exclusive)
- Subscription `risk-service-trades` → Risk Engine (exclusive)
```java
// Apache Pulsar: Exclusive subscription
// Only ONE consumer can attach at a time
Consumer<TradeEvent> consumer = pulsarClient.newConsumer(Schema.JSON(TradeEvent.class))
    .topic("trades")
    .subscriptionName("portfolio-service-trades")
    .subscriptionType(SubscriptionType.Exclusive) // Key setting
    .subscribe();

// Second consumer with same subscription name will fail
// Use for stateful processing that needs complete stream
```

Kafka's Approach: Consumer Groups
Kafka combines subscription and consumer group concepts. A consumer group name IS the subscription. Within the group:

- Each partition is assigned to exactly one consumer at a time, so per-partition order is preserved.
- Adding consumers (up to the partition count) rebalances partitions across the group.
- Separate consumer groups each receive their own full copy of the stream.
This hybrid model provides both ordered processing (per partition) and horizontal scaling (across partitions).
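The group-rebalance idea above can be sketched as a simple assignment function. This is a simplified round-robin strategy for illustration; Kafka's actual assignors (range, round-robin, sticky) have more nuance.

```typescript
// Hypothetical sketch: divide N partitions among a consumer group
// round-robin, so each partition has exactly one owner in the group.
function assignPartitions(
  partitions: number,
  consumers: string[]
): Map<string, number[]> {
  const assignment = new Map<string, number[]>(
    consumers.map((c): [string, number[]] => [c, []])
  );
  for (let p = 0; p < partitions; p++) {
    const owner = consumers[p % consumers.length];
    assignment.get(owner)!.push(p);
  }
  return assignment;
}

// Six partitions across two consumers: each owns three, and every
// partition is processed (in order) by exactly one group member.
const assignment = assignPartitions(6, ['consumer-a', 'consumer-b']);
// consumer-a → [0, 2, 4], consumer-b → [1, 3, 5]
```

Because ownership is per partition, scaling the group beyond the partition count buys nothing: extra consumers sit idle.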
Acknowledgment is the mechanism by which consumers communicate to the broker that messages have been successfully processed. Proper acknowledgment handling is critical for reliability—misconfiguration leads to message loss or infinite redelivery loops.
| Mode | Description | Risk | Use Case |
|---|---|---|---|
| Auto-Ack on Receive | Broker considers delivered = acknowledged | Message loss if consumer crashes during processing | Logs, metrics where loss acceptable |
| Manual Ack After Process | Consumer explicitly acks after successful processing | Redelivery if slow; must handle duplicates | Standard reliable processing |
| Batch Ack | Consumer acks multiple messages at once | Larger window of potential redelivery | High-throughput when individual ack too expensive |
| Negative Ack (Nack) | Consumer signals failure; broker redelivers | Must avoid infinite retry loops | Transient failures that may succeed on retry |
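When a message is nacked or its deadline expires, brokers typically space redeliveries with exponential backoff between a configured minimum and maximum. A sketch of that schedule (real brokers add jitter and use their own exact formulas):

```typescript
// Sketch: exponential backoff between a minimum and maximum delay,
// matching a retry policy of minimumBackoff=10s, maximumBackoff=600s.
function backoffSeconds(attempt: number, minSec = 10, maxSec = 600): number {
  return Math.min(minSec * 2 ** (attempt - 1), maxSec);
}

// Attempts 1..5 → 10, 20, 40, 80, 160 seconds; by attempt 8 the delay
// is capped at the 600-second maximum.
```

This is why "nack and hope" is not a strategy: without a cap on attempts (or a dead-letter policy), a permanently failing message just keeps cycling at the maximum backoff forever.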
The Ack Lifecycle

A message moves through a predictable lifecycle: the broker delivers it and the ack deadline starts counting down; the consumer then either acks (the message is removed from the subscription), nacks or lets the deadline expire (the message is redelivered), or exhausts its delivery attempts (the message is routed to a dead-letter topic, if one is configured).
Critical Considerations:

- Ack only after all side effects (database writes, downstream publishes) have completed.
- Assume redelivery: process idempotently, because any message can arrive more than once.
- Extend the ack deadline for long-running work instead of letting it expire mid-processing.
- Cap retries (or rely on a dead-letter policy) so a failing message cannot nack forever.
```typescript
// Pattern: Reliable acknowledgment after processing
async function processWithReliableAck(message: Message): Promise<void> {
  const event = deserialize(message.data);

  try {
    // 1. Validate message (nack immediately if invalid)
    if (!isValid(event)) {
      await message.nack(); // Send to dead-letter after max retries
      return;
    }

    // 2. Extend deadline if processing will be long
    if (estimateProcessingTime(event) > 25_000) {
      await message.modifyAckDeadline(60); // Extend to 60 seconds
    }

    // 3. Process with idempotency check
    const alreadyProcessed = await checkIdempotencyKey(event.eventId);
    if (!alreadyProcessed) {
      await processEvent(event);
      await recordIdempotencyKey(event.eventId);
    }

    // 4. ACK only after ALL side effects complete
    await message.ack();
  } catch (error) {
    if (isRetryable(error)) {
      // Transient failure - nack for redelivery
      await message.nack();
    } else {
      // Permanent failure - ack to prevent infinite retry
      // (in production, send to dead-letter queue first)
      logger.error(`Permanent failure for ${event.eventId}`, error);
      await message.ack(); // Or let DLQ policy handle it
    }
  }
}
```

Never acknowledge a message before processing completes. If your consumer crashes after ack but before the database write, the message is lost forever. This is the most common cause of data loss in messaging systems.
A Dead Letter Queue (DLQ) is a specialized destination for messages that cannot be processed successfully after all retry attempts. DLQs prevent poison messages from blocking the pipeline indefinitely while preserving them for investigation.
Why DLQs Are Essential:
Without a DLQ, a malformed or unprocessable message creates an infinite retry loop:

1. The message is delivered; processing fails; the consumer nacks.
2. The broker redelivers; processing fails again; the consumer nacks again.
3. The cycle repeats indefinitely, wasting resources and (in ordered streams) blocking everything behind it.

With a DLQ:

1. The message fails processing up to the configured maximum delivery attempts.
2. The broker routes it to the dead-letter topic instead of redelivering.
3. The main pipeline keeps flowing while operators investigate the quarantined message.
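The contrast can be simulated in a few lines. This toy delivery loop (names are illustrative) retries a failing handler up to a maximum, then quarantines the message instead of looping forever:

```typescript
// Toy simulation: retry up to maxDeliveryAttempts, then route the
// message to a dead-letter list instead of redelivering forever.
function deliver(
  message: string,
  handler: (msg: string) => void,
  maxDeliveryAttempts: number,
  deadLetters: string[]
): void {
  for (let attempt = 1; attempt <= maxDeliveryAttempts; attempt++) {
    try {
      handler(message);
      return; // acked: processing succeeded
    } catch {
      // nacked: fall through and redeliver
    }
  }
  deadLetters.push(message); // retries exhausted: quarantine, don't block
}

const dlq: string[] = [];
deliver('poison-message', () => { throw new Error('unparseable'); }, 5, dlq);
// dlq now contains 'poison-message'; the pipeline moves on.
```

Real brokers implement this loop for you via `maxDeliveryAttempts`-style policies; the point is that the bound exists at all.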
```typescript
// Example: DLQ message structure with full context
interface DeadLetterMessage<T> {
  originalMessage: T;
  originalTopic: string;
  subscription: string;

  // Failure context
  errorMessage: string;
  errorStack: string;
  failureTimestamp: string;
  attemptCount: number;

  // Original metadata
  originalEventId: string;
  originalTimestamp: string;
  originalHeaders: Record<string, string>;
}

// Publishing to DLQ with context
async function sendToDeadLetter<T>(
  message: Message,
  error: Error,
  attemptCount: number
): Promise<void> {
  const dlqMessage: DeadLetterMessage<T> = {
    originalMessage: JSON.parse(message.data.toString()),
    originalTopic: message.attributes.topic,
    subscription: 'order-processor-sub',
    errorMessage: error.message,
    errorStack: error.stack || '',
    failureTimestamp: new Date().toISOString(),
    attemptCount,
    originalEventId: message.id,
    originalTimestamp: message.publishTime.toISOString(),
    originalHeaders: message.attributes,
  };

  await deadLetterTopic.publish(
    Buffer.from(JSON.stringify(dlqMessage)),
    { originalTopic: message.attributes.topic }
  );

  // Alert operations team
  await alerting.notify('dlq-arrival', {
    topic: message.attributes.topic,
    error: error.message,
    messageId: message.id,
  });
}
```

Subscriptions have lifecycles that must be managed carefully. Creating, updating, and deleting subscriptions requires coordination to avoid message loss or processing gaps.
Manage subscriptions through infrastructure-as-code (Terraform, Pulumi). This ensures subscriptions exist before deployment, tracks configuration in version control, and prevents drift between environments. Never create subscriptions manually in production.
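As a sketch of that practice, a Pulumi (TypeScript) definition for the subscription configured earlier might look like the following. Resource names and field values are illustrative, and exact property shapes should be checked against the current `@pulumi/gcp` provider documentation; this is a config sketch, not a verified deployment.

```typescript
// Hypothetical Pulumi definition: the subscription, its DLQ wiring, and
// retry policy live in version control instead of being clicked together.
import * as gcp from "@pulumi/gcp";

const dlqTopic = new gcp.pubsub.Topic("trades-dlq", { name: "trades-dlq" });

const subscription = new gcp.pubsub.Subscription("portfolio-service-trades-sub", {
  name: "portfolio-service-trades-sub",
  topic: "trades",
  ackDeadlineSeconds: 30,
  retryPolicy: {
    minimumBackoff: "10s",
    maximumBackoff: "600s",
  },
  deadLetterPolicy: {
    deadLetterTopic: dlqTopic.id,
    maxDeliveryAttempts: 5,
  },
});
```

Declaring the DLQ topic alongside the subscription also means `pulumi preview` (or `terraform plan`) catches drift before a deploy depends on infrastructure that only exists in one environment.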
We've explored the organizational primitives that structure every pub-sub system. Let's consolidate the key concepts:

- Topics group related events; good names and hierarchies make the system self-documenting.
- Subscriptions bind consumers to topics and determine fan-out vs. competing-consumer behavior.
- Acknowledgment modes govern reliability; ack only after processing completes, and handle duplicates idempotently.
- Dead-letter queues quarantine unprocessable messages without blocking the pipeline.
- Subscription lifecycles belong in infrastructure-as-code, not manual production changes.
What's Next:
With a solid understanding of topics and subscriptions, we'll explore fan-out patterns—the architectural approaches that leverage pub-sub's one-to-many capabilities for building reactive, event-driven systems at scale.
You now understand the building blocks of pub-sub organization. Topics provide logical grouping for events; subscriptions define delivery to consumers; acknowledgments ensure reliability; and dead-letter queues handle failures gracefully. These primitives combine to create the flexible, scalable messaging infrastructure that powers event-driven architectures.