When you register a service with a dependency injection container, you're answering two fundamental questions: What should be created? and How long should it live? The first question is about types and interfaces. The second—the question of lifetime—is where the real complexity emerges.
Lifetime management determines how many instances of a service exist, when they're created, when they're disposed, and who shares them. Get it wrong, and you'll face memory leaks, stale data, thread-safety violations, or subtle bugs that manifest only under specific conditions. Get it right, and your application becomes predictable, efficient, and maintainable.
Transient lifetime is the simplest of the three major lifetime strategies. Every time you request a transient service, you receive a brand-new instance. No sharing, no caching, no ambiguity. This simplicity makes transient lifetime the safest default—but also potentially the most wasteful.
By the end of this page, you will understand transient lifetime at a deep level: its semantics, its memory implications, when it's the right choice, and when its apparent simplicity masks hidden costs. You'll learn to reason about instance creation graphs and identify scenarios where transient services are essential versus wasteful.
A transient service is one that the container creates anew for every single request. There is no object reuse whatsoever. If Component A requests a Logger and Component B requests the same Logger service one millisecond later, they receive two different Logger instances.
This behavior stems from a fundamental design choice: stateless independence. Each consumer gets its own exclusive instance, ensuring that no state is shared, no consumer can interfere with another, and no synchronization is ever required.
```csharp
// In .NET Core / .NET 5+
public void ConfigureServices(IServiceCollection services)
{
    // Every request for IEmailSender creates a new SmtpEmailSender
    services.AddTransient<IEmailSender, SmtpEmailSender>();

    // Every request for IValidator creates a new OrderValidator
    services.AddTransient<IValidator<Order>, OrderValidator>();

    // Factory-based transient with custom creation logic
    services.AddTransient<IDocumentProcessor>(provider =>
    {
        var config = provider.GetRequiredService<IConfiguration>();
        var logger = provider.GetRequiredService<ILogger<PdfProcessor>>();
        return new PdfProcessor(config["Pdf:LibraryPath"], logger);
    });
}
```
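To see the semantics directly, here is a minimal sketch (assumed setup, not part of the original) that builds a provider from registrations like those above, resolves the same service twice, and confirms the instances are distinct:

```csharp
// Sketch: build a provider from the IServiceCollection configured above
var provider = services.BuildServiceProvider();

var first = provider.GetRequiredService<IEmailSender>();
var second = provider.GetRequiredService<IEmailSender>();

// Transient: every resolution produces a brand-new object
Console.WriteLine(ReferenceEquals(first, second)); // false
```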
```java
// In Spring Framework, transient is called "prototype" scope
@Configuration
public class AppConfig {

    // Every request creates a new EmailSender
    @Bean
    @Scope("prototype")
    public EmailSender emailSender() {
        return new SmtpEmailSender();
    }

    // Alternative: using component scanning
}

@Component
@Scope("prototype")
public class DocumentProcessor {

    private final Logger logger;

    @Autowired
    public DocumentProcessor(Logger logger) {
        this.logger = logger;
        System.out.println("New DocumentProcessor created: " + this.hashCode());
    }
}
```
```typescript
// In InversifyJS
import { Container, injectable, inject } from "inversify";

const container = new Container();

// Default binding is transient - each get() returns new instance
container.bind<IEmailSender>(TYPES.EmailSender)
    .to(SmtpEmailSender)
    .inTransientScope();

// Demonstrating transient behavior
const sender1 = container.get<IEmailSender>(TYPES.EmailSender);
const sender2 = container.get<IEmailSender>(TYPES.EmailSender);
console.log(sender1 === sender2); // false - different instances

// In NestJS
@Injectable({ scope: Scope.TRANSIENT })
export class ReportGenerator {
    private readonly instanceId = Math.random();

    constructor(private readonly logger: Logger) {
        this.logger.log(`ReportGenerator ${this.instanceId} created`);
    }
}
```

Understanding how containers create transient instances reveals important performance characteristics. When you request a transient service, the container must locate the registration, select a constructor, resolve each constructor dependency (recursively, honoring that dependency's own lifetime), invoke the constructor, and, if the service is disposable, track the instance for disposal.
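Conceptually, an implementation-type transient registration behaves like a factory that the container invokes on every resolution. The sketch below is a simplification of what real containers do internally (using `ActivatorUtilities` from Microsoft.Extensions.DependencyInjection), not the actual implementation:

```csharp
// Roughly equivalent to services.AddTransient<IEmailSender, SmtpEmailSender>():
// on every resolution the container picks a constructor, resolves each
// parameter from the provider, and invokes the constructor.
services.AddTransient<IEmailSender>(provider =>
    ActivatorUtilities.CreateInstance<SmtpEmailSender>(provider));
```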
This process happens every time. For deeply nested dependency graphs, transient services can trigger significant object creation cascades.
```csharp
// Consider this dependency chain - ALL are transient
public class OrderController
{
    private readonly IOrderService _orderService;

    public OrderController(IOrderService orderService) => _orderService = orderService;
}

public class OrderService : IOrderService
{
    private readonly IOrderRepository _repository;
    private readonly IEmailSender _emailSender;
    private readonly IAuditLogger _auditLogger;

    public OrderService(
        IOrderRepository repository,
        IEmailSender emailSender,
        IAuditLogger auditLogger)
    {
        _repository = repository;
        _emailSender = emailSender;
        _auditLogger = auditLogger;
    }
}

public class OrderRepository : IOrderRepository
{
    private readonly IDbContextFactory _contextFactory;
    private readonly ILogger<OrderRepository> _logger;

    public OrderRepository(
        IDbContextFactory contextFactory,
        ILogger<OrderRepository> logger)
    {
        _contextFactory = contextFactory;
        _logger = logger;
    }
}

// If ALL services are transient, resolving OrderController creates:
// - 1 OrderController
// - 1 OrderService
// - 1 OrderRepository
// - 1 IDbContextFactory
// - 1 ILogger<OrderRepository>
// - 1 EmailSender
// - 1 AuditLogger
// = 7+ objects per controller resolution!
```

When a transient service depends on other transient services, object creation cascades through the entire graph. In web applications handling thousands of requests per second, this can create millions of short-lived objects, stressing the garbage collector significantly. This doesn't mean transient is wrong—it means you must design with awareness.
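For completeness, the registrations that would produce this cascade might look like the sketch below (an assumption; the original shows only the classes, and `IDbContextFactory` would come from its own registration helper):

```csharp
// All-transient wiring for the chain above (sketch)
services.AddTransient<OrderController>();
services.AddTransient<IOrderService, OrderService>();
services.AddTransient<IOrderRepository, OrderRepository>();
services.AddTransient<IEmailSender, SmtpEmailSender>();
services.AddTransient<IAuditLogger, AuditLogger>(); // hypothetical implementation type
services.AddLogging();                              // supplies ILogger<OrderRepository>
```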
Visualizing the Creation Timeline:
Consider what happens during two consecutive HTTP requests when services are transient:
```
Request 1 (Thread A):
├── Resolve OrderController → NEW instance #1
│   └── Resolve IOrderService → NEW OrderService #1
│       ├── Resolve IOrderRepository → NEW OrderRepository #1
│       ├── Resolve IEmailSender → NEW EmailSender #1
│       └── Resolve IAuditLogger → NEW AuditLogger #1
├── Execute controller action
└── Request completes → All instances eligible for GC

Request 2 (Thread B, 1ms later):
├── Resolve OrderController → NEW instance #2 (NOT #1 reused!)
│   └── Resolve IOrderService → NEW OrderService #2
│       ├── Resolve IOrderRepository → NEW OrderRepository #2
│       ├── Resolve IEmailSender → NEW EmailSender #2
│       └── Resolve IAuditLogger → NEW AuditLogger #2
├── Execute controller action
└── Request completes → All instances eligible for GC

After 1000 requests: 5000+ objects created and garbage collected
```

Transient instances are short-lived by design. They're created, used briefly, and then become garbage. This pattern has specific implications for memory management and garbage collection:
Advantages:

- Transient objects typically die young, in Gen 0, where collection is cheapest.
- Nothing accumulates between requests, so peak memory stays bounded and there is no gradual growth from forgotten state.

Disadvantages:

- Allocation rate and memory churn are far higher than with shared lifetimes, putting sustained pressure on the garbage collector.
- Frequent Gen 0 collections cost CPU and introduce latency jitter under heavy load.
```csharp
// Diagnostic code to understand transient allocation impact
public class DiagnosticEmailSender : IEmailSender
{
    private static int _instanceCount = 0;
    private readonly int _instanceId;

    public DiagnosticEmailSender()
    {
        _instanceId = Interlocked.Increment(ref _instanceCount);
        Console.WriteLine($"EmailSender #{_instanceId} created. Total: {_instanceCount}");
    }

    ~DiagnosticEmailSender()
    {
        Console.WriteLine($"EmailSender #{_instanceId} finalized");
    }

    public Task SendAsync(Email email)
    {
        // Implementation omitted
        return Task.CompletedTask;
    }
}

// After 10,000 requests:
// "EmailSender #10000 created. Total: 10000"
// (Finalizer output scattered throughout as GC runs)

// Contrast with singleton - would show:
// "EmailSender #1 created. Total: 1"
// (No finalizer output until application shutdown)
```

| Metric | All Transient | Mixed Lifetimes | Impact |
|---|---|---|---|
| Objects/Request | 7-15 (typical) | 2-4 (service layer) | 3-5x more allocations |
| Gen 0 Collections | Frequent | Occasional | CPU overhead |
| GC Pause Time | Many small pauses | Fewer pauses | Latency jitter |
| Memory Churn | High | Moderate | Cache locality |
| Peak Memory | Bounded | Higher baseline | Different trade-off |
For most applications, modern GC handles transient allocations efficiently. Concern about GC overhead typically matters only for: (1) Very high-throughput systems (10K+ requests/second), (2) Latency-sensitive applications (real-time, gaming), or (3) Resource-constrained environments. Profile before optimizing—premature optimization with lifetimes can introduce bugs.
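If you do need to measure, a memory-focused micro-benchmark is a reasonable starting point. Below is a minimal sketch using BenchmarkDotNet's `MemoryDiagnoser`; the `IEmailSender`/`SmtpEmailSender` types are the ones from earlier examples, and the benchmark class itself is an assumption, not part of the original:

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;
using Microsoft.Extensions.DependencyInjection;

[MemoryDiagnoser]
public class LifetimeAllocationBenchmark
{
    private ServiceProvider _transient = null!;
    private ServiceProvider _singleton = null!;

    [GlobalSetup]
    public void Setup()
    {
        _transient = new ServiceCollection()
            .AddTransient<IEmailSender, SmtpEmailSender>()
            .BuildServiceProvider();

        _singleton = new ServiceCollection()
            .AddSingleton<IEmailSender, SmtpEmailSender>()
            .BuildServiceProvider();
    }

    // Allocates a fresh SmtpEmailSender on every call
    [Benchmark]
    public IEmailSender ResolveTransient() => _transient.GetRequiredService<IEmailSender>();

    // Returns the same cached instance on every call
    [Benchmark(Baseline = true)]
    public IEmailSender ResolveSingleton() => _singleton.GetRequiredService<IEmailSender>();
}

// Run with: BenchmarkRunner.Run<LifetimeAllocationBenchmark>();
```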
Transient services provide natural thread isolation. Since each request (typically on its own thread) receives its own instance, there's no shared state to protect. This is the core appeal of transient lifetime for services that maintain internal state.
Consider the contrast:
```csharp
// This service maintains per-request state
// Transient lifetime makes it inherently safe
public class RequestCorrelator : IRequestCorrelator
{
    private readonly List<string> _events = new();
    private readonly Stopwatch _stopwatch = Stopwatch.StartNew();

    public void RecordEvent(string eventName)
    {
        // NO LOCKING NEEDED - this instance serves only one request
        _events.Add($"{_stopwatch.ElapsedMilliseconds}ms: {eventName}");
    }

    public IReadOnlyList<string> GetEventLog() => _events.AsReadOnly();
}

// Registration
services.AddTransient<IRequestCorrelator, RequestCorrelator>();

// Usage in controller
public class OrderController : ControllerBase
{
    private readonly IRequestCorrelator _correlator;
    private readonly IOrderService _orderService;
    private readonly ILogger<OrderController> _logger;

    public OrderController(
        IRequestCorrelator correlator,
        IOrderService orderService,
        ILogger<OrderController> logger)
    {
        _correlator = correlator;
        _orderService = orderService;
        _logger = logger;
    }

    [HttpPost]
    public async Task<IActionResult> CreateOrder(OrderRequest request)
    {
        _correlator.RecordEvent("Received request");

        var order = await _orderService.CreateAsync(request);

        _correlator.RecordEvent("Order created");

        // Log for diagnostics - shows events only from THIS request
        _logger.LogInformation("Request timeline: {Events}",
            string.Join(", ", _correlator.GetEventLog()));

        return Ok(order);
    }
}
```

If RequestCorrelator were registered as singleton, all concurrent requests would write to the same _events list, creating race conditions, corrupted data, and meaningless logs. The same code that's perfectly safe as transient becomes a critical bug as singleton. Lifetime choice fundamentally affects correctness, not just performance.
Transient is the right choice in several well-defined scenarios, illustrated in the code below. Understanding these patterns helps you make confident lifetime decisions:

- Services that accumulate state for a single operation, such as a validation context that collects errors.
- Wrappers around libraries that are not thread-safe.
- Simple, stateless utilities that are cheap to construct.
- Services that need a fresh disposable resource per operation.
- Factory registrations whose construction captures per-operation values, as in `services.AddTransient<IReport>(sp => new Report(userId, reportDate))`.
```csharp
// Use Case 1: Per-request validation context
public interface IValidationContext
{
    void AddError(string field, string message);
    bool IsValid { get; }
    IEnumerable<ValidationError> Errors { get; }
}

// MUST be transient - accumulates errors for single validation operation
services.AddTransient<IValidationContext, ValidationContext>();

// Use Case 2: Wrapping non-thread-safe library
public class ExcelExporter : IExcelExporter
{
    private readonly ExcelPackage _package; // EPPlus is not thread-safe

    public ExcelExporter()
    {
        _package = new ExcelPackage(); // Each export gets fresh instance
    }

    public byte[] Export(ReportData data) { /* ... */ }
}

services.AddTransient<IExcelExporter, ExcelExporter>();

// Use Case 3: Simple stateless utility
public class DateFormatter : IDateFormatter
{
    // No state, cheap to construct
    public string FormatForDisplay(DateTime date) => date.ToString("MMM dd, yyyy");
    public string FormatForApi(DateTime date) => date.ToString("O");
}

services.AddTransient<IDateFormatter, DateFormatter>();

// Use Case 4: Service needing fresh disposable per operation
public class FileProcessor : IFileProcessor
{
    private readonly IFileSystem _fileSystem;

    public FileProcessor(IFileSystem fileSystem)
    {
        _fileSystem = fileSystem; // Assume this needs fresh instance per operation
    }
}

services.AddTransient<IFileProcessor, FileProcessor>();
```

Despite being safe by default, transient is the wrong choice when the cost of repeated instantiation outweighs its benefits:
```csharp
// ❌ WRONG: Expensive initialization repeated per request
public class ConfigurationService : IConfigurationService
{
    private readonly Dictionary<string, string> _settings;

    public ConfigurationService(IConfigurationSource source)
    {
        // Takes 200ms to load from remote source
        // As transient: 200ms added to EVERY request
        _settings = source.LoadAllSettings(); // Expensive!
    }
}

// ❌ WRONG: Cache that never caches
public class UserCache : IUserCache
{
    private readonly Dictionary<int, User> _cache = new();

    public User GetOrAdd(int userId, Func<int, User> factory)
    {
        if (!_cache.TryGetValue(userId, out var user))
        {
            user = factory(userId);
            _cache[userId] = user;
        }
        return user;
    }
}
// As transient: _cache is always empty! Cache provides zero benefit.

// ❌ WRONG: Connection pool recreation
public class DatabaseService : IDatabaseService
{
    private readonly NpgsqlConnection _connection;

    public DatabaseService(string connectionString)
    {
        // Connection establishment takes 50-200ms
        _connection = new NpgsqlConnection(connectionString);
        _connection.Open(); // Network round trip!
    }
}
// As transient: new connection per request = massive latency

// ✅ CORRECT: These should be singletons or scoped
services.AddSingleton<IConfigurationService, ConfigurationService>();
services.AddSingleton<IUserCache, UserCache>();
services.AddSingleton<IDatabaseService, DatabaseService>();
```

A common performance bug is registering an expensive service as transient 'because it's the default' or 'to avoid threading issues.' The fix isn't avoiding transient—it's designing services that are naturally thread-safe (stateless), then registering them as singletons. Transient should be a deliberate choice for stateful services, not a safety blanket.
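One possible fix for the cache example, sketched below (my assumption of a refactor, not shown in the original): make the cache genuinely thread-safe so a single shared instance can serve all requests, then register it as a singleton.

```csharp
using System.Collections.Concurrent;

// Thread-safe variant of UserCache: safe to share as a singleton,
// and the cache actually retains entries across requests.
public class ConcurrentUserCache : IUserCache
{
    private readonly ConcurrentDictionary<int, User> _cache = new();

    public User GetOrAdd(int userId, Func<int, User> factory)
        => _cache.GetOrAdd(userId, factory);
}

services.AddSingleton<IUserCache, ConcurrentUserCache>();
```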
A critical consideration for transient services is disposal. When a transient service implements IDisposable, the container is responsible for disposing it—but the timing depends on the resolution context.
```csharp
// In ASP.NET Core / Microsoft.Extensions.DependencyInjection
public class FileHandler : IFileHandler, IDisposable
{
    private readonly FileStream _stream;
    private bool _disposed;

    public FileHandler(string path)
    {
        _stream = File.OpenRead(path);
    }

    public void Dispose()
    {
        if (!_disposed)
        {
            _stream.Dispose();
            _disposed = true;
            Console.WriteLine("FileHandler disposed");
        }
    }
}

services.AddTransient<IFileHandler>(sp => new FileHandler("data.txt"));

// Scenario 1: Resolved from root container (singleton scope)
var handler1 = app.Services.GetRequiredService<IFileHandler>();
// ⚠️ handler1 is tracked by root container
// ⚠️ Will NOT be disposed until application shutdown!
// ⚠️ This is a resource leak!

// Scenario 2: Resolved from scoped container (per-request)
using (var scope = app.Services.CreateScope())
{
    var handler2 = scope.ServiceProvider.GetRequiredService<IFileHandler>();
    // handler2 is tracked by this scope
} // Scope disposes -> handler2.Dispose() called automatically ✓

// Scenario 3: In ASP.NET Core controller
public class FilesController : ControllerBase
{
    private readonly IFileHandler _handler; // Resolved per-request scope

    public FilesController(IFileHandler handler)
    {
        _handler = handler;
    }

    [HttpGet]
    public IActionResult Get()
    {
        // Use _handler
        return Ok();
    }

    // Request ends -> scope disposes -> _handler.Dispose() called ✓
}
```

When you resolve a disposable transient from the root container (not from a scope), the container tracks it for later disposal. But 'later' means application shutdown—potentially hours or days later. This creates a memory leak where transient IDisposable instances accumulate. Always resolve transient disposables from scoped containers.
| Resolution Context | Disposal Timing | Risk Level |
|---|---|---|
| Scoped container (per-request) | End of scope | ✓ Safe |
| Using block with CreateScope() | End of using block | ✓ Safe |
| Root container directly | Application shutdown | ⚠️ Memory leak! |
| Injected into singleton | Application shutdown | ⚠️ Resource held too long |
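When a singleton genuinely needs a disposable transient, the usual pattern is to inject `IServiceScopeFactory` and create a short-lived scope per operation. The sketch below illustrates the idea; the `NightlyExportJob` class is hypothetical, not from the original:

```csharp
public class NightlyExportJob
{
    private readonly IServiceScopeFactory _scopeFactory;

    public NightlyExportJob(IServiceScopeFactory scopeFactory)
        => _scopeFactory = scopeFactory;

    public void Run()
    {
        // Create a scope per operation instead of resolving from the root container
        using var scope = _scopeFactory.CreateScope();
        var handler = scope.ServiceProvider.GetRequiredService<IFileHandler>();

        // ... use handler ...

    } // scope disposes here -> handler.Dispose() is called promptly
}
```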
Transient lifetime is the simplest strategy: new instance every time. This simplicity provides safety through isolation—no shared state, no threading concerns. But simplicity comes with costs: higher allocation rates, more GC pressure, and repeated construction overhead.
What's next:
Transient solves the isolation problem but creates many instances. The next page explores Scoped Lifetime—the middle ground that provides instance sharing within a defined boundary (like an HTTP request) while maintaining isolation across boundaries.
You now understand transient lifetime at a deep level: its semantics, its memory implications, its thread-safety characteristics, and when to use or avoid it. Next, we'll explore scoped lifetime—instance reuse within a boundary.