Understanding circuit breaker concepts is essential, but building production systems requires mature, battle-tested implementations. Two libraries have defined the landscape of circuit breakers in the Java ecosystem (and influenced implementations in other languages): Netflix Hystrix and Resilience4j.
Hystrix, born from Netflix's legendary battle for high availability in a microservices architecture, pioneered many patterns we now consider standard. Resilience4j emerged as the modern successor, designed for Java 8+ with functional programming support and lighter-weight architecture.
This page examines both implementations in depth—not just as API references, but as case studies in resilience engineering. Understanding why these libraries are designed as they are deepens your understanding of resilience patterns themselves.
By the end of this page, you will understand the architectural differences between Hystrix and Resilience4j, how to implement circuit breakers with both libraries, the trade-offs that led to Resilience4j's design, and practical guidance for choosing and migrating between them. You'll also see implementations in other languages and ecosystems.
Netflix Hystrix was open-sourced in 2012, born from Netflix's experience building a large-scale microservices architecture on Amazon Web Services. At the time, Netflix was running one of the most complex distributed systems in the world, serving billions of streaming hours across hundreds of microservices.
The Netflix Context
Netflix's systems in the early 2010s faced unprecedented challenges: hundreds of interdependent microservices, massive and unpredictable traffic, and the constant reality that any dependency could fail or degrade at any moment.
Hystrix emerged as Netflix's answer to these challenges, implementing patterns they had refined through years of production incidents.
Core Architecture
Hystrix is built around the concept of wrapping calls to external systems in HystrixCommand objects:
```java
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
import com.netflix.hystrix.HystrixCommandProperties;
import com.netflix.hystrix.HystrixThreadPoolProperties;

public class GetProductCommand extends HystrixCommand<Product> {

    private final ProductService productService;
    private final String productId;

    public GetProductCommand(ProductService productService, String productId) {
        super(Setter
            .withGroupKey(HystrixCommandGroupKey.Factory.asKey("ProductService"))
            .andCommandPropertiesDefaults(
                HystrixCommandProperties.Setter()
                    // Circuit breaker settings
                    .withCircuitBreakerEnabled(true)
                    .withCircuitBreakerRequestVolumeThreshold(20)      // Min requests before evaluating
                    .withCircuitBreakerErrorThresholdPercentage(50)    // Trip at 50% failure
                    .withCircuitBreakerSleepWindowInMilliseconds(5000) // Recovery timeout
                    // Timeout settings
                    .withExecutionTimeoutInMilliseconds(1000)
                    // Metrics settings
                    .withMetricsRollingStatisticalWindowInMilliseconds(10000) // 10s window
            )
            .andThreadPoolPropertiesDefaults(
                HystrixThreadPoolProperties.Setter()
                    .withCoreSize(10)   // Thread pool size
                    .withMaxQueueSize(100)
                    .withQueueSizeRejectionThreshold(100)
            )
        );
        this.productService = productService;
        this.productId = productId;
    }

    @Override
    protected Product run() throws Exception {
        // The actual call to the external service
        return productService.getProduct(productId);
    }

    @Override
    protected Product getFallback() {
        // Fallback when circuit is open or call fails
        return Product.createCachedOrDefault(productId);
    }
}

// Usage
Product product = new GetProductCommand(productService, "prod-123").execute();
```

Key Hystrix Features
The Thread Pool Model
Hystrix's most distinctive feature is its thread pool isolation. Every call to an external dependency executes in a dedicated thread pool:
```
┌─────────────────────────────────────────────────────────────┐
│                   Application Thread Pool                   │
├──────────┬──────────┬──────────┬──────────┬─────────────────┤
│ Request  │ Request  │ Request  │ Request  │       ...       │
│    1     │    2     │    3     │    4     │                 │
└────┬─────┴────┬─────┴────┬─────┴────┬─────┴─────────────────┘
     │          │          │          │
     ▼          ▼          ▼          ▼
┌─────────┐ ┌──────────┐ ┌─────────┐ ┌─────────┐
│ Product │ │Inventory │ │ Product │ │ Payment │
│  Pool   │ │   Pool   │ │  Pool   │ │  Pool   │
│ (10 th) │ │  (5 th)  │ │ (10 th) │ │ (8 th)  │
└─────────┘ └──────────┘ └─────────┘ └─────────┘
```
This architecture provides powerful isolation: if the Inventory Service becomes slow and its thread pool is exhausted, Product and Payment services continue operating normally.
However, this comes at a cost: creating and managing thread pools adds overhead, and context switching between application threads and Hystrix threads impacts latency (typically 1-3ms added latency).
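The mechanics of this isolation can be sketched with plain `java.util.concurrent` primitives. This is a simplified illustration of the idea, not Hystrix's actual implementation; the class and method names here are invented for the example:

```java
import java.util.Map;
import java.util.concurrent.*;

// Sketch of per-dependency thread pool isolation: each dependency gets its
// own bounded pool, so saturating one pool rejects new work for that
// dependency without affecting calls to any other dependency.
public class IsolatedPools {
    final Map<String, ExecutorService> pools = new ConcurrentHashMap<>();

    private ExecutorService poolFor(String dependency, int size) {
        return pools.computeIfAbsent(dependency, d ->
            new ThreadPoolExecutor(size, size, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(size),          // bounded queue
                new ThreadPoolExecutor.AbortPolicy()));  // reject when saturated
    }

    // Run the call on the dependency's pool with a timeout and a fallback.
    public <T> T execute(String dependency, int poolSize, long timeoutMs,
                         Callable<T> call, T fallback) {
        try {
            Future<T> future = poolFor(dependency, poolSize).submit(call);
            return future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (RejectedExecutionException | TimeoutException e) {
            return fallback; // pool saturated, or call too slow
        } catch (Exception e) {
            return fallback; // call threw
        }
    }

    public static void main(String[] args) {
        IsolatedPools iso = new IsolatedPools();

        // A fast dependency succeeds...
        String ok = iso.execute("product", 2, 500, () -> "live-data", "fallback");

        // ...even while a slow dependency times out and falls back.
        String slow = iso.execute("inventory", 2, 100, () -> {
            Thread.sleep(1_000);
            return "never";
        }, "fallback");

        System.out.println(ok + " / " + slow); // live-data / fallback
        iso.pools.values().forEach(ExecutorService::shutdownNow);
    }
}
```

The `Future.get(timeout)` call is also where Hystrix's latency overhead originates: the application thread must hand the work to a pool thread and block for the result.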
Netflix announced Hystrix entered maintenance mode in 2018. While it remains stable and usable, no new features are being developed. Netflix recommends migrating to Resilience4j for new projects. Understanding Hystrix remains valuable for maintaining legacy systems and understanding the evolution of resilience patterns.
Resilience4j emerged around 2017 as a lightweight alternative to Hystrix, designed specifically for Java 8+ and functional programming patterns. It has since become the de facto standard for resilience in modern Java applications.
Design Philosophy
Resilience4j was built with different priorities than Hystrix:
| Aspect | Hystrix | Resilience4j |
|---|---|---|
| Core dependency | Requires Netflix libraries | Minimal dependencies (Vavr only) |
| Execution model | Thread pool isolation | Decorators (no mandatory thread pools) |
| API style | Command pattern (classes) | Functional decorators (lambdas) |
| Configuration | Properties-based | Fluent builders |
| Modularity | Monolithic JAR | Modular (pick what you need) |
| JVM support | Java 6+ | Java 8+ (uses lambdas, Optional, CompletableFuture) |
The Decorator Pattern
Resilience4j uses decorators to wrap callable functions, avoiding the class-per-command overhead of Hystrix:
```java
import io.github.resilience4j.circuitbreaker.CircuitBreaker;
import io.github.resilience4j.circuitbreaker.CircuitBreakerConfig;
import io.github.resilience4j.circuitbreaker.CircuitBreakerRegistry;

// 1. Create configuration
CircuitBreakerConfig config = CircuitBreakerConfig.custom()
    .failureRateThreshold(50)
    .minimumNumberOfCalls(10)
    .waitDurationInOpenState(Duration.ofSeconds(30))
    .permittedNumberOfCallsInHalfOpenState(5)
    .slidingWindowType(SlidingWindowType.COUNT_BASED)
    .slidingWindowSize(100)
    .slowCallRateThreshold(80)
    .slowCallDurationThreshold(Duration.ofSeconds(2))
    .build();

// 2. Create registry and circuit breaker
CircuitBreakerRegistry registry = CircuitBreakerRegistry.of(config);
CircuitBreaker circuitBreaker = registry.circuitBreaker("productService");

// 3. Decorate your function
Supplier<Product> decoratedSupplier = CircuitBreaker
    .decorateSupplier(circuitBreaker, () -> productService.getProduct(productId));

// 4. Execute with fallback
Product product = Try.ofSupplier(decoratedSupplier)
    .recover(throwable -> Product.createDefault(productId))
    .get();

// Alternative: Functional composition
Product product = circuitBreaker.executeSupplier(
    () -> productService.getProduct(productId));

// With fallback using Decorators
Supplier<Product> supplier = Decorators.ofSupplier(() -> productService.getProduct(productId))
    .withCircuitBreaker(circuitBreaker)
    .withFallback(Arrays.asList(CallNotPermittedException.class, Exception.class),
        e -> Product.createDefault(productId))
    .decorate();
```

Modular Architecture
Resilience4j is composed of independent modules. You include only what you need:
| Module | Purpose |
|---|---|
| resilience4j-circuitbreaker | Circuit breaker pattern |
| resilience4j-ratelimiter | Rate limiting |
| resilience4j-retry | Retry with backoff |
| resilience4j-bulkhead | Bulkhead (semaphore or thread pool) |
| resilience4j-timelimiter | Timeout handling |
| resilience4j-cache | Result caching |
| resilience4j-spring-boot2 | Spring Boot auto-configuration |
| resilience4j-micrometer | Metrics export to Micrometer |
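For example, a project that needs only a circuit breaker with Micrometer metrics can declare just those two artifacts. This is a sketch: the version shown matches the one used later on this page, but check Maven Central for the current release:

```groovy
dependencies {
    // Only the circuit breaker module - no retry, bulkhead, rate limiter, etc.
    implementation 'io.github.resilience4j:resilience4j-circuitbreaker:2.1.0'

    // Optional: Micrometer metrics binding
    implementation 'io.github.resilience4j:resilience4j-micrometer:2.1.0'
}
```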
Composing Multiple Patterns
Resilience4j's decorator pattern allows elegant composition of multiple resilience patterns:
```java
// Create individual components
CircuitBreaker circuitBreaker = CircuitBreaker.ofDefaults("product");
RateLimiter rateLimiter = RateLimiter.ofDefaults("product");
Retry retry = Retry.ofDefaults("product");
Bulkhead bulkhead = Bulkhead.ofDefaults("product");
TimeLimiter timeLimiter = TimeLimiter.ofDefaults("product");

// Compose them using the Decorators utility
Supplier<Product> decoratedSupplier = Decorators
    .ofSupplier(() -> productService.getProduct(productId))
    .withRetry(retry)                   // Outermost: retry failed attempts
    .withCircuitBreaker(circuitBreaker) // Next: check circuit state
    .withBulkhead(bulkhead)             // Next: acquire bulkhead permit
    .withRateLimiter(rateLimiter)       // Next: check rate limit
    .withFallback(throwable -> Product.createDefault(productId))
    .decorate();

/*
 * Order of decorators matters!
 *
 * The call flows through decorators outside-in:
 *   Retry → CircuitBreaker → Bulkhead → RateLimiter → Actual Call
 *
 * Results/exceptions flow back inside-out:
 *   Actual Call → RateLimiter → Bulkhead → CircuitBreaker → Retry
 *
 * Typical recommended order:
 *   1. Retry (outside) - retries should wrap everything
 *   2. CircuitBreaker - should NOT retry when open
 *   3. RateLimiter/Bulkhead - protect resources
 *   4. TimeLimiter - bound execution time
 *   5. Actual call (inside)
 */
```

Resilience4j provides excellent Spring Boot integration. With annotations like @CircuitBreaker, @Retry, and @Bulkhead, you can add resilience to methods declaratively. Configuration can be externalized to application.yml, allowing runtime tuning without code changes.
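To see why the decorator order described above matters, here is a minimal stand-alone sketch (plain Java, not the Resilience4j API) that composes hand-rolled decorators around a supplier and records the order in which they run:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Minimal illustration of decorator ordering: each decorator wraps the
// supplier it is given, so the LAST wrapper applied runs FIRST on the
// way in and LAST on the way out.
public class DecoratorOrder {

    static <T> Supplier<T> logged(String name, List<String> trace, Supplier<T> inner) {
        return () -> {
            trace.add(name + ":enter");
            T result = inner.get();
            trace.add(name + ":exit");
            return result;
        };
    }

    public static void main(String[] args) {
        List<String> trace = new ArrayList<>();

        Supplier<String> call = () -> {
            trace.add("actual-call");
            return "ok";
        };

        // Wrap inside-out: rate limiter closest to the call, retry outermost.
        Supplier<String> decorated =
            logged("retry", trace,
                logged("circuitBreaker", trace,
                    logged("rateLimiter", trace, call)));

        decorated.get();
        System.out.println(trace);
        // [retry:enter, circuitBreaker:enter, rateLimiter:enter, actual-call, rateLimiter:exit, circuitBreaker:exit, retry:exit]
    }
}
```

The same principle applies to the real `Decorators` builder: a retry decorator placed outside the circuit breaker sees the breaker's `CallNotPermittedException` and can stop retrying immediately, whereas the reverse order would hammer an open circuit.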
Resilience4j's Spring Boot integration provides a production-ready setup with minimal boilerplate. This is the most common way circuit breakers are used in enterprise Java applications.
Setup and Configuration
```groovy
dependencies {
    // Core Spring Boot
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'org.springframework.boot:spring-boot-starter-aop'

    // Resilience4j Spring Boot starter
    implementation 'io.github.resilience4j:resilience4j-spring-boot2:2.1.0'

    // Micrometer for metrics (recommended)
    implementation 'io.micrometer:micrometer-registry-prometheus'

    // Actuator for health checks
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
}
```
```yaml
resilience4j:
  circuitbreaker:
    configs:
      default:
        registerHealthIndicator: true
        slidingWindowSize: 100
        minimumNumberOfCalls: 10
        permittedNumberOfCallsInHalfOpenState: 5
        automaticTransitionFromOpenToHalfOpenEnabled: true
        waitDurationInOpenState: 30s
        failureRateThreshold: 50
        slowCallRateThreshold: 80
        slowCallDurationThreshold: 2s
        eventConsumerBufferSize: 10
        recordExceptions:
          - org.springframework.web.client.HttpServerErrorException
          - java.io.IOException
          - java.util.concurrent.TimeoutException
        ignoreExceptions:
          - com.example.BusinessException

      # Custom configuration for payment service (more conservative)
      payment-critical:
        failureRateThreshold: 30
        slowCallRateThreshold: 60
        waitDurationInOpenState: 60s
        minimumNumberOfCalls: 5

    instances:
      # Apply 'default' config to product service
      productService:
        baseConfig: default

      # Apply 'payment-critical' config to payment service
      paymentService:
        baseConfig: payment-critical
        # Override specific settings
        waitDurationInOpenState: 45s

  # Rate limiter configuration
  ratelimiter:
    instances:
      productService:
        limitForPeriod: 100
        limitRefreshPeriod: 1s
        timeoutDuration: 500ms

  # Retry configuration
  retry:
    instances:
      productService:
        maxAttempts: 3
        waitDuration: 500ms
        retryExceptions:
          - java.io.IOException
          - java.util.concurrent.TimeoutException
```
```java
import io.github.resilience4j.circuitbreaker.CallNotPermittedException;
import io.github.resilience4j.circuitbreaker.annotation.CircuitBreaker;
import io.github.resilience4j.retry.annotation.Retry;
import io.github.resilience4j.ratelimiter.annotation.RateLimiter;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

import java.util.List;
import java.util.stream.Collectors;

@Service
public class ProductServiceClient {

    private static final Logger log = LoggerFactory.getLogger(ProductServiceClient.class);

    private final RestTemplate restTemplate;

    public ProductServiceClient(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    // Apply circuit breaker, retry, and rate limiter
    // Order: Retry wraps CircuitBreaker wraps RateLimiter
    @CircuitBreaker(name = "productService", fallbackMethod = "getProductFallback")
    @Retry(name = "productService")
    @RateLimiter(name = "productService")
    public Product getProduct(String productId) {
        return restTemplate.getForObject(
            "http://product-service/products/{id}",
            Product.class,
            productId
        );
    }

    // Fallback method - must have same signature + Throwable
    private Product getProductFallback(String productId, Throwable throwable) {
        log.warn("Fallback triggered for product {}: {}", productId, throwable.getMessage());

        if (throwable instanceof CallNotPermittedException) {
            // Circuit is open - use cached data
            return productCache.get(productId)
                .orElse(Product.unavailable(productId));
        }

        // Other failures - return default
        return Product.createDefault(productId);
    }

    // You can have different fallback methods for different scenarios
    @CircuitBreaker(name = "productService", fallbackMethod = "getProductsFallback")
    public List<Product> getProducts(List<String> productIds) {
        // ...
    }

    private List<Product> getProductsFallback(List<String> productIds, Throwable throwable) {
        return productIds.stream()
            .map(Product::unavailable)
            .collect(Collectors.toList());
    }
}
```

Actuator and Monitoring Integration
With Spring Boot Actuator, Resilience4j automatically exposes health indicators and metrics:
```
# Health endpoint includes circuit breaker status
GET /actuator/health

{
  "status": "UP",
  "components": {
    "circuitBreakers": {
      "status": "UP",
      "details": {
        "productService": {
          "status": "UP",
          "details": {
            "failureRate": "0.0%",
            "slowCallRate": "0.0%",
            "slowCallRateThreshold": "80.0%",
            "failureRateThreshold": "50.0%",
            "bufferedCalls": 45,
            "failedCalls": 0,
            "slowCalls": 2,
            "slowFailedCalls": 0,
            "notPermittedCalls": 0,
            "state": "CLOSED"
          }
        },
        "paymentService": {
          "status": "CIRCUIT_OPEN",
          "details": {
            "failureRate": "67.5%",
            "state": "OPEN"
          }
        }
      }
    }
  }
}

# Dedicated circuit breaker events endpoint
GET /actuator/circuitbreakerevents

# Filter events by circuit breaker name
GET /actuator/circuitbreakerevents/productService

# Filter by event type
GET /actuator/circuitbreakerevents/productService/error
```

To make an informed choice (or understand migration implications), let's compare these libraries across multiple dimensions.
Execution Model
| Aspect | Hystrix | Resilience4j |
|---|---|---|
| Thread Isolation | Default (HystrixCommand) | Optional (Bulkhead module) |
| Semaphore Isolation | Supported | Supported (Bulkhead) |
| Overhead | Higher (thread pool management) | Lower (decorators only) |
| Latency Impact | 1-3ms added | Negligible |
| Context Propagation | Requires explicit handling | Simpler (same thread by default) |
Feature Comparison
| Feature | Hystrix | Resilience4j |
|---|---|---|
| Circuit Breaker | ✓ Full support | ✓ Full support + slow call detection |
| Bulkhead | ✓ Thread pool based | ✓ Semaphore or thread pool |
| Rate Limiter | ✗ Not built-in | ✓ Full support |
| Retry | ✗ Not built-in (use Spring Retry) | ✓ Full support with backoff |
| Time Limiter | ✓ Command timeout | ✓ Dedicated module |
| Caching | ✓ Request-scoped | ✓ Pluggable cache abstraction |
| Request Collapsing | ✓ Built-in | ✗ Not available |
| Metrics | ✓ Servo/Codahale | ✓ Micrometer native |
| Dashboard | ✓ Hystrix Dashboard | Use Grafana/Prometheus |
| Spring Integration | ✓ Spring Cloud Netflix | ✓ Native Spring Boot starter |
| Reactive Support | Via RxJava 1.x | Project Reactor, RxJava2/3 |
Configuration Approach
```
// Hystrix: Properties-based
// Often via Archaius properties
hystrix.command.ProductCommand.circuitBreaker.requestVolumeThreshold=20

// Or programmatic
HystrixCommandProperties.Setter()
    .withCircuitBreakerEnabled(true)
    .withCircuitBreakerRequestVolumeThreshold(20)
    .withCircuitBreakerErrorThresholdPercentage(50)
    .withCircuitBreakerSleepWindowInMilliseconds(5000)
```
```java
// Resilience4j: Fluent builders
CircuitBreakerConfig.custom()
    .failureRateThreshold(50)
    .minimumNumberOfCalls(20)
    .waitDurationInOpenState(Duration.ofSeconds(5))
    .build();

// Or via application.yml (Spring Boot)
```

Use Resilience4j for all new projects. It's actively maintained, has better Java 8+ support, and integrates seamlessly with modern Spring Boot and Micrometer. Maintain Hystrix only for legacy systems where migration isn't justified. Consider migrating if you're doing significant refactoring anyway.
Circuit breakers are not limited to Java. Every major ecosystem has mature implementations, often inspired by Hystrix.
Node.js: Opossum
Opossum is the most popular circuit breaker for Node.js, with an API inspired by Hystrix:
```typescript
import CircuitBreaker from 'opossum';

// Define the function to protect
async function fetchUser(userId: string): Promise<User> {
  const response = await fetch(`https://api.example.com/users/${userId}`);
  if (!response.ok) throw new Error(`HTTP ${response.status}`);
  return response.json();
}

// Create circuit breaker
const breaker = new CircuitBreaker(fetchUser, {
  // Trip conditions
  errorThresholdPercentage: 50, // Open at 50% error rate
  volumeThreshold: 10,          // Minimum calls to evaluate

  // Timing
  timeout: 5000,                // Call timeout
  resetTimeout: 30000,          // Recovery timeout

  // Rolling window
  rollingCountTimeout: 10000,   // 10-second rolling window
  rollingCountBuckets: 10,      // 10 buckets of 1 second each
});

// Event handlers for monitoring
breaker.on('success', (result) => {
  metrics.increment('circuitbreaker.success');
});

breaker.on('failure', (error) => {
  metrics.increment('circuitbreaker.failure');
  console.error('Call failed:', error);
});

breaker.on('open', () => {
  metrics.increment('circuitbreaker.opened');
  console.warn('Circuit opened');
});

breaker.on('halfOpen', () => {
  console.log('Testing recovery...');
});

breaker.on('close', () => {
  metrics.increment('circuitbreaker.closed');
  console.log('Circuit closed - recovered');
});

// Fallback
breaker.fallback((userId) => ({
  id: userId,
  name: 'Unknown User',
  status: 'unavailable'
}));

// Usage
try {
  const user = await breaker.fire('user-123');
  console.log(user);
} catch (error) {
  console.error('Failed with fallback');
}
```

Go: sony/gobreaker
The Go ecosystem has several circuit breaker implementations. Sony's gobreaker is particularly popular:
```go
package main

import (
	"fmt"
	"time"

	"github.com/sony/gobreaker"
)

func main() {
	settings := gobreaker.Settings{
		Name:        "ProductService",
		MaxRequests: 5,                // Max requests in half-open
		Interval:    30 * time.Second, // Failure count reset interval
		Timeout:     60 * time.Second, // Recovery timeout

		// Custom trip condition
		ReadyToTrip: func(counts gobreaker.Counts) bool {
			failureRatio := float64(counts.TotalFailures) / float64(counts.Requests)
			return counts.Requests >= 10 && failureRatio >= 0.5
		},

		// Event handlers
		OnStateChange: func(name string, from, to gobreaker.State) {
			fmt.Printf("Circuit %s: %s -> %s\n", name, from, to)
		},
	}

	cb := gobreaker.NewCircuitBreaker(settings)

	// Execute protected function
	result, err := cb.Execute(func() (interface{}, error) {
		return fetchProduct("prod-123")
	})

	if err != nil {
		if err == gobreaker.ErrOpenState {
			// Circuit is open
			fmt.Println("Circuit open, using fallback")
			result = getProductFallback("prod-123")
		} else {
			// Other error
			fmt.Println("Error:", err)
		}
	}

	fmt.Println("Result:", result)
}
```

Python: pybreaker and circuitbreaker
```python
from pybreaker import CircuitBreaker, CircuitBreakerError
import requests

# Create circuit breaker
product_breaker = CircuitBreaker(
    fail_max=5,            # Failures to trip
    reset_timeout=30,      # Recovery timeout in seconds
    exclude=[ValueError],  # Exceptions to ignore
    listeners=[            # Event listeners
        LoggingListener(),
        MetricsListener()
    ]
)

@product_breaker
def get_product(product_id: str) -> dict:
    """Protected function - decorated with circuit breaker."""
    response = requests.get(
        f"http://product-service/products/{product_id}",
        timeout=5
    )
    response.raise_for_status()
    return response.json()

def get_product_with_fallback(product_id: str) -> dict:
    """Wrapper with fallback handling."""
    try:
        return get_product(product_id)
    except CircuitBreakerError:
        # Circuit is open
        return {"id": product_id, "status": "unavailable", "cached": True}
    except requests.RequestException as e:
        # Call failed (circuit might trip if threshold reached)
        return {"id": product_id, "status": "error", "error": str(e)}

# Alternative: circuitbreaker library (simpler)
from circuitbreaker import circuit

@circuit(failure_threshold=5, recovery_timeout=30)
def fetch_user(user_id: str) -> dict:
    response = requests.get(f"http://user-service/users/{user_id}")
    response.raise_for_status()
    return response.json()
```

.NET: Polly
Polly is the de facto standard for resilience in .NET, providing circuit breaker along with retry, timeout, bulkhead, and fallback policies:
```csharp
using Polly;
using Polly.CircuitBreaker;

// Define the circuit breaker policy
var circuitBreakerPolicy = Policy
    .Handle<HttpRequestException>()
    .OrResult<HttpResponseMessage>(r => !r.IsSuccessStatusCode)
    .AdvancedCircuitBreakerAsync(
        failureThreshold: 0.5,                      // 50% failure rate to trip
        samplingDuration: TimeSpan.FromSeconds(30), // Sampling window
        minimumThroughput: 10,                      // Minimum calls to evaluate
        durationOfBreak: TimeSpan.FromSeconds(30),  // Recovery timeout
        onBreak: (exception, duration) =>
        {
            _logger.LogWarning($"Circuit opened for {duration.TotalSeconds}s");
        },
        onReset: () =>
        {
            _logger.LogInformation("Circuit closed");
        },
        onHalfOpen: () =>
        {
            _logger.LogInformation("Circuit half-open");
        }
    );

// Combine with retry and fallback
var resilientPolicy = Policy.WrapAsync(
    Policy<Product>
        .Handle<Exception>()
        .FallbackAsync(
            fallbackValue: Product.CreateDefault(),
            onFallbackAsync: async (exception, context) =>
            {
                _logger.LogWarning($"Fallback triggered: {exception.Exception.Message}");
            }
        ),
    Policy
        .Handle<HttpRequestException>()
        .WaitAndRetryAsync(3, retry => TimeSpan.FromSeconds(Math.Pow(2, retry))),
    circuitBreakerPolicy
);

// Execute
var product = await resilientPolicy.ExecuteAsync(async () =>
{
    var response = await _httpClient.GetAsync($"/products/{productId}");
    response.EnsureSuccessStatusCode();
    return await response.Content.ReadAsAsync<Product>();
});
```

An increasingly popular approach is to implement circuit breakers at the infrastructure layer using a service mesh like Istio. This moves resilience logic out of application code.
Istio Circuit Breaker Configuration
```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: product-service
spec:
  host: product-service.default.svc.cluster.local
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100            # Connection pool limit
      http:
        h2UpgradePolicy: UPGRADE
        http1MaxPendingRequests: 100   # Queue limit
        http2MaxRequests: 1000         # Concurrent requests limit
        maxRequestsPerConnection: 100
        maxRetries: 3
    outlierDetection:
      # These settings implement circuit-breaker-like behavior
      consecutive5xxErrors: 5          # Eject after 5 consecutive 5xx
      interval: 30s                    # Ejection evaluation interval
      baseEjectionTime: 30s            # Minimum ejection duration
      maxEjectionPercent: 50           # Max % of hosts to eject
      minHealthPercent: 30             # Min healthy hosts before panic
      # Can also use consecutive gateway errors
      consecutiveGatewayErrors: 5
      # Or track all errors
      splitExternalLocalOriginErrors: true
      consecutiveLocalOriginFailures: 5
```

Service Mesh vs. Application-Level Circuit Breakers
| Aspect | Application-Level (Resilience4j) | Service Mesh (Istio) |
|---|---|---|
| Granularity | Method/operation level | Service/endpoint level |
| Fallback Logic | Full programmatic control | Limited (fail fast only) |
| Configuration | Code or app config | Kubernetes manifests |
| Language Agnostic | Per-language library | Works for any language |
| Visibility | Application metrics | Mesh observability |
| Complexity | Each service implements | Platform manages |
| Customization | Highly customizable | Limited to mesh features |
| Latency | Minimal | Sidecar proxy overhead |
Many production systems use BOTH application-level and service mesh circuit breakers. The service mesh provides baseline protection for all traffic, while application-level breakers provide fine-grained control with custom fallback logic for critical operations.
For teams with existing Hystrix implementations, migration to Resilience4j is recommended. Here's a systematic approach.
Step 1: Configuration Mapping
First, map your Hystrix configuration to Resilience4j equivalents:
| Hystrix Property | Resilience4j Equivalent |
|---|---|
| circuitBreaker.requestVolumeThreshold | minimumNumberOfCalls |
| circuitBreaker.errorThresholdPercentage | failureRateThreshold |
| circuitBreaker.sleepWindowInMilliseconds | waitDurationInOpenState |
| metrics.rollingStats.timeInMilliseconds | slidingWindowSize (for TIME_BASED) |
| metrics.rollingStats.numBuckets | N/A (auto-calculated) |
| execution.isolation.thread.timeoutInMilliseconds | Use TimeLimiter module |
| threadpool.coreSize | Use Bulkhead module with ThreadPoolBulkhead |
```java
// BEFORE: Hystrix Command
public class GetProductCommand extends HystrixCommand<Product> {

    private final ProductService productService;
    private final String productId;

    public GetProductCommand(ProductService productService, String productId) {
        super(Setter
            .withGroupKey(HystrixCommandGroupKey.Factory.asKey("ProductService"))
            .andCommandPropertiesDefaults(
                HystrixCommandProperties.Setter()
                    .withCircuitBreakerRequestVolumeThreshold(20)
                    .withCircuitBreakerErrorThresholdPercentage(50)
                    .withCircuitBreakerSleepWindowInMilliseconds(5000)
                    .withExecutionTimeoutInMilliseconds(3000)
            )
            .andThreadPoolPropertiesDefaults(
                HystrixThreadPoolProperties.Setter()
                    .withCoreSize(10)
            )
        );
        this.productService = productService;
        this.productId = productId;
    }

    @Override
    protected Product run() throws Exception {
        return productService.getProduct(productId);
    }

    @Override
    protected Product getFallback() {
        return Product.createDefault(productId);
    }
}

// Usage
Product product = new GetProductCommand(productService, productId).execute();
```

Step 2: Gradual Migration
Resilience4j's default behavior differs from Hystrix in several ways: no automatic thread pool isolation, different sliding window implementations, and different half-open behavior. Test thoroughly before cutover, especially for services with tight latency requirements.
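One way to de-risk the cutover is to pin down the exact semantics you rely on with characterization tests. The sketch below is a deliberately minimal count-based breaker (plain Java, not either library's implementation; all names are invented) that makes the CLOSED → OPEN → HALF_OPEN transitions explicit, so you can compare behavior against it:

```java
import java.util.function.Supplier;

// Minimal circuit breaker state machine: trips OPEN after a fixed number of
// consecutive failures, moves to HALF_OPEN after a wait period, and closes
// again on a successful trial call. Real libraries use rolling windows and
// failure *rates*; this only illustrates the transitions.
public class MiniBreaker {
    public enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;
    private final long waitMillis;
    private int consecutiveFailures = 0;
    private long openedAt = 0;
    private State state = State.CLOSED;

    public MiniBreaker(int failureThreshold, long waitMillis) {
        this.failureThreshold = failureThreshold;
        this.waitMillis = waitMillis;
    }

    public State state() { return state; }

    public synchronized <T> T call(Supplier<T> action, Supplier<T> fallback) {
        if (state == State.OPEN) {
            if (System.currentTimeMillis() - openedAt >= waitMillis) {
                state = State.HALF_OPEN;   // wait elapsed: allow a trial call
            } else {
                return fallback.get();     // fail fast while open
            }
        }
        try {
            T result = action.get();
            consecutiveFailures = 0;
            state = State.CLOSED;          // success closes (or keeps closed)
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            if (state == State.HALF_OPEN || consecutiveFailures >= failureThreshold) {
                state = State.OPEN;        // trial failed, or threshold reached
                openedAt = System.currentTimeMillis();
            }
            return fallback.get();
        }
    }
}
```

Writing tests like these against both your Hystrix configuration and the Resilience4j replacement surfaces semantic differences (window type, half-open call limits) before they reach production.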
We've explored the ecosystem of circuit breaker implementations and the key insights it offers: Hystrix pioneered thread-pool isolation but is now in maintenance mode; Resilience4j provides lightweight, composable decorators with first-class Spring Boot integration; mature implementations exist in every major ecosystem; and service meshes can add baseline protection at the infrastructure layer.
What's Next
With implementation knowledge in hand, we turn to a critical integration pattern: combining circuit breakers with retries. This combination is powerful but subtle—retries can undermine circuit breaker protection if not configured correctly. The next page explores how to get this combination right.
You now understand the major circuit breaker implementations, their architectures, and how to use them. You can implement circuit breakers in Java with Resilience4j, understand the migration path from Hystrix, and know where to find quality implementations in other ecosystems. Next, we'll master the combination of circuit breakers and retries.