You now understand tail recursion and tail call optimization deeply—what they are, how they work, and how to write tail-recursive code. But there's a crucial practical question we haven't fully addressed: Does your programming language actually perform TCO?
The answer varies dramatically across languages. Some languages guarantee TCO by specification—your tail-recursive code will absolutely use constant stack space. Other languages might perform TCO as an optional optimization, depending on compiler flags or runtime conditions. And many popular languages simply don't support TCO at all—your beautifully crafted tail-recursive function will still overflow the stack on deep recursion.
This page maps out the landscape of TCO support across major programming languages. You'll learn not just which languages support TCO, but why language designers made their choices, and how to write effective recursive code regardless of your language's TCO status.
By the end of this page, you will understand:
• The distinction between mandatory TCO and optional TCO
• Which major languages guarantee, support, or lack TCO
• Why some languages deliberately avoid implementing TCO
• How to check if your specific language/runtime supports TCO
• Practical strategies for TCO-lacking languages
• The @tailrec annotation pattern and its equivalents
Language support for TCO falls into three categories:
1. Mandatory TCO (Required by Specification)
The language specification requires that all conforming implementations perform tail call optimization. You can rely on TCO as a language feature, not just a compiler optimization. This is essential for functional programming languages where recursion replaces iteration.
2. Optional TCO (Implementation-Dependent)
The language allows but doesn't require TCO. Some compilers/runtimes implement it, others don't. You might get TCO with compiler flags, optimization levels, or specific runtimes—but you can't rely on it portably.
3. No TCO (Not Supported)
The language or its primary implementations don't perform TCO. This may be a deliberate design choice (for stack trace clarity, security, or implementation simplicity) or simply not a priority. Tail-recursive code offers no space savings.
| Category | Behavior | Implications |
|---|---|---|
| Mandatory TCO | Guaranteed by language spec | Use tail recursion freely for any depth; recursion is the idiomatic looping construct |
| Optional TCO | Depends on implementation/flags | May work in development but fail in production; not portable; verify carefully |
| No TCO | Never optimized | Tail recursion gains nothing; use iteration for deep processing; recursion limited to shallow depths |
Relying on optional TCO is dangerous. Code might work on your machine (with optimizations enabled) but crash in production (different compiler flags) or on a colleague's machine (different version). Either use a language with mandatory TCO, or assume TCO doesn't exist.
These languages guarantee tail call optimization in their specifications. Tail recursion is a reliable, first-class feature:
Scheme
Scheme was the first major language to mandate TCO, a requirement spelled out in its standards (e.g., R5RS, 1998). The specification requires "proper tail recursion," meaning tail calls must not grow the stack. This makes recursion the primary looping construct—Scheme doesn't even need traditional loop syntax.
Haskell
Haskell effectively guarantees TCO: its execution model does not grow the call stack for tail calls. Lazy evaluation makes the analysis more nuanced, though—tail calls with strict accumulators run in constant space, while lazy accumulators can build up thunks that overflow when they are finally forced. The compiler (GHC) is also very aggressive about optimization.
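To make the strict-vs-lazy distinction concrete, here is a small sketch (our own example, assuming GHC with the BangPatterns extension; the names sumTo and go are ours):

```haskell
{-# LANGUAGE BangPatterns #-}

-- Tail-recursive sum with an accumulator. The recursive call to `go`
-- is a tail call, so the call stack does not grow. The bang pattern
-- forces the accumulator at each step; without it, a chain of
-- unevaluated (+) thunks builds up and can overflow when forced.
sumTo :: Integer -> Integer
sumTo n = go n 0
  where
    go 0 !acc = acc
    go k !acc = go (k - 1) (acc + k)

main :: IO ()
main = print (sumTo 1000000)  -- runs in constant stack space
```

Without the bang patterns, the accumulator would hold a chain of suspended additions, which is why Haskell programmers reach for strict folds such as foldl' instead of foldl.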
Erlang/Elixir
Erlang and Elixir (which runs on the Erlang VM) mandate TCO. This is crucial for Erlang's actor model, where processes often run infinite recursive loops handling messages.
Clojure (with recur)
Clojure runs on the JVM, which doesn't support TCO natively. However, Clojure provides the recur special form that explicitly invokes tail recursion within the same function. Using recur guarantees constant stack space.
Scala (with @tailrec)
Scala on the JVM provides the @tailrec annotation. If the annotated function is tail-recursive, the compiler transforms it to a loop. If it's NOT tail-recursive, compilation fails—giving you a guarantee.
```scheme
; In Scheme, you can loop forever with tail recursion
; This will NEVER overflow the stack
(define (infinite-loop counter)
  (display counter)
  (newline)
  (infinite-loop (+ counter 1)))  ; Guaranteed tail call optimization

; This server-style loop is idiomatic in Scheme
(define (server-loop state)
  (let ((request (read-request)))
    (let ((new-state (handle-request state request)))
      (server-loop new-state))))  ; Runs forever, O(1) stack

; Factorial with guaranteed TCO
(define (factorial n)
  (define (iter n acc)
    (if (<= n 1)
        acc
        (iter (- n 1) (* n acc))))  ; Tail call, optimized
  (iter n 1))

; Even mutual recursion is optimized!
(define (is-even? n)
  (if (= n 0)
      #t
      (is-odd? (- n 1))))  ; Tail call to different function

(define (is-odd? n)
  (if (= n 0)
      #f
      (is-even? (- n 1))))  ; Tail call to different function

; Both work correctly for arbitrarily large n
```
```scala
import scala.annotation.tailrec

// The @tailrec annotation provides a GUARANTEE:
// - If the function is tail-recursive: compiles to efficient loop
// - If NOT tail-recursive: COMPILATION ERROR

@tailrec
def factorial(n: Long, acc: Long = 1): Long = {
  if (n <= 1) acc
  else factorial(n - 1, n * acc) // Tail call - compiles!
}

// This would NOT compile with @tailrec:
// @tailrec
// def badFactorial(n: Long): Long = {
//   if (n <= 1) 1
//   else n * badFactorial(n - 1) // NOT a tail call - compiler error!
// }
// Error: "could not optimize @tailrec annotated method badFactorial:
//   it contains a recursive call not in tail position"

// The compiled bytecode is equivalent to:
def factorialLoop(n: Long): Long = {
  var current = n
  var acc = 1L
  while (current > 1) {
    acc = current * acc
    current = current - 1
  }
  acc
}

// More complex example: GCD
@tailrec
def gcd(a: Int, b: Int): Int = {
  if (b == 0) a
  else gcd(b, a % b) // Tail call
}

// List processing with @tailrec
@tailrec
def contains[A](list: List[A], target: A): Boolean = list match {
  case Nil => false
  case head :: tail =>
    if (head == target) true
    else contains(tail, target) // Tail call
}
```
```clojure
; Clojure runs on JVM, which lacks native TCO
; Solution: the 'recur' special form

; Using recur for tail recursion
(defn factorial [n]
  (loop [n n, acc 1]               ; loop establishes recursion point
    (if (<= n 1)
      acc
      (recur (dec n) (* n acc))))) ; recur = guaranteed tail call

; recur MUST be in tail position - compiler enforces this!
; This would fail:
; (defn bad-factorial [n]
;   (if (<= n 1)
;     1
;     (* n (recur (dec n))))) ; Error: recur not in tail position

; recur can target the enclosing function
(defn sum-to [n]
  (letfn [(helper [n acc]
            (if (zero? n)
              acc
              (recur (dec n) (+ acc n))))] ; recur to helper
    (helper n 0)))

; Or use loop/recur for local recursion
(defn fibonacci [n]
  (loop [n n, a 0, b 1]
    (if (zero? n)
      a
      (recur (dec n) b (+ a b)))))

; Processing sequences functionally with reduce (often better than recur)
(defn sum-list [lst]
  (reduce + 0 lst)) ; No explicit recursion needed

; When you need custom traversal, recur works
(defn my-map [f lst]
  (loop [lst lst, acc []]
    (if (empty? lst)
      acc
      (recur (rest lst) (conj acc (f (first lst)))))))
```

These languages may perform TCO depending on compiler, version, optimization level, or runtime:
C/C++
Modern C/C++ compilers (GCC, Clang) perform TCO when optimizations are enabled (-O2 or higher). However, it's not guaranteed—certain conditions (debugging symbols, address-taken functions, etc.) can prevent optimization. Never rely on it for correctness.
JavaScript (ES6+)
The ECMAScript 2015 (ES6) specification mandates "proper tail calls" in strict mode. However, as of 2024, only Safari/WebKit implements this. V8 (Chrome, Node.js) and SpiderMonkey (Firefox) do not—they cite debugging concerns and implementation complexity.
Kotlin
Kotlin provides the tailrec modifier, similar to Scala's @tailrec. When applied, the compiler transforms tail-recursive functions to loops. Unlike Scala, if the function isn't actually tail-recursive, the compiler reports a warning rather than a hard error and simply leaves the call unoptimized.
Rust
Rust doesn't guarantee TCO, but LLVM (the backend) often performs it as an optimization in release builds. A become keyword for guaranteed tail calls has been proposed, but it is not yet available in stable Rust.
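As a concrete sketch (our own example, not from the Rust documentation): the first function below is written tail-recursively and is frequently, but not reliably, compiled to a loop in release builds; the second is the guaranteed-constant-stack alternative.

```rust
// Tail-recursive form: LLVM frequently turns this into a loop with
// optimizations enabled, but nothing in the language guarantees it.
fn factorial_tail(n: u64, acc: u64) -> u64 {
    if n <= 1 {
        acc
    } else {
        factorial_tail(n - 1, n * acc) // may or may not be optimized
    }
}

// Explicit iteration: guaranteed constant stack, and idiomatic Rust.
fn factorial_iter(n: u64) -> u64 {
    (1..=n).product()
}

fn main() {
    assert_eq!(factorial_tail(10, 1), factorial_iter(10));
    println!("{}", factorial_iter(20));
}
```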
```c
// C compilers MAY optimize tail calls, but it's not guaranteed

#include <stdio.h>

// Tail-recursive factorial
long factorial_tail(long n, long acc) {
    if (n <= 1) return acc;
    return factorial_tail(n - 1, n * acc); // May be optimized
}

// With GCC/Clang and -O2 or higher, this compiles to a loop:
//
// factorial_tail:
//     cmp  rdi, 1
//     jle  .return
// .loop:
//     imul rsi, rdi   ; acc *= n
//     sub  rdi, 1     ; n -= 1
//     cmp  rdi, 1
//     jg   .loop
// .return:
//     mov  rax, rsi   ; return acc
//     ret
//
// This IS tail call optimization!

// BUT: With -O0 (no optimization) or certain debug flags,
// the compiler may generate actual recursive calls that grow the stack.

// To verify TCO is happening:
// 1. Compile with: gcc -O2 -S factorial.c
// 2. Check the assembly: no 'call factorial_tail' instruction
// 3. Or run with very large n and check for stack overflow

// WARNING: These can prevent TCO even with optimizations:
// - Taking the address of the function: &factorial_tail
// - Variable-length arrays (VLAs) in the function
// - alloca() calls
// - Certain security features (stack canaries in some modes)

// Forcing TCO in C is fragile. For guaranteed safety:
// - Use an explicit loop for deep recursion
// - Or use a language with mandatory TCO
```
```javascript
// ES6 SPECIFIES proper tail calls in strict mode...
// ...but only Safari actually implements it as of 2024!

"use strict";

// This SHOULD work with unlimited depth per ES6 spec
function factorialTail(n, acc = 1) {
  if (n <= 1) return acc;
  return factorialTail(n - 1, n * acc); // Tail call
}

// In Safari: works for any n (TCO is implemented)
// In Chrome/Node/Firefox: Stack overflow for large n!

// Practical test:
try {
  console.log(factorialTail(100000));
} catch (e) {
  console.log("Stack overflow! TCO not supported.");
  // This happens in Chrome, Node.js, Firefox, etc.
}

// WHY don't V8/SpiderMonkey implement TCO?
// 1. Stack traces: TCO eliminates frames, making debugging harder
// 2. Performance: Some benchmarks showed slowdowns
// 3. Implementation complexity
// 4. Low demand: Most JS developers don't use deep recursion

// PRACTICAL SOLUTION: Use trampolines for deep recursion

function trampoline(fn) {
  return function trampolined(...args) {
    let result = fn(...args);
    while (typeof result === 'function') {
      result = result();
    }
    return result;
  };
}

// Rewrite tail-recursive function to return thunks
function factorialTrampoline(n, acc = 1) {
  if (n <= 1) return acc;
  return () => factorialTrampoline(n - 1, n * acc); // Return thunk
}

const factorial = trampoline(factorialTrampoline);
console.log(factorial(100000)); // Works in ALL JS engines!

// The trampoline calls the thunk in a loop, avoiding stack growth
```
```kotlin
// Kotlin's tailrec is like Scala's @tailrec
// The compiler transforms to a loop OR warns and leaves it unoptimized

tailrec fun factorial(n: Long, acc: Long = 1): Long {
    return if (n <= 1) acc
    else factorial(n - 1, n * acc) // Tail call - will be optimized!
}

// Attempting tailrec on non-tail-recursive function:
// tailrec fun badFactorial(n: Long): Long {
//     return if (n <= 1) 1
//     else n * badFactorial(n - 1) // NOT tail recursive
// }
// Compiler warning: "A function is marked as tail-recursive
// but no tail calls are found"

// GCD example
tailrec fun gcd(a: Int, b: Int): Int =
    if (b == 0) a else gcd(b, a % b)

// Binary search
tailrec fun binarySearch(
    arr: IntArray,
    target: Int,
    low: Int = 0,
    high: Int = arr.size - 1
): Int {
    if (low > high) return -1
    val mid = (low + high) / 2
    return when {
        arr[mid] == target -> mid
        arr[mid] < target -> binarySearch(arr, target, mid + 1, high)
        else -> binarySearch(arr, target, low, mid - 1)
    }
}

// tailrec works great for single-path recursion
// For tree recursion, you still need explicit stacks or other approaches
```

These widely-used languages don't support tail call optimization. Understanding why helps you make informed decisions:
Python
Python's creator, Guido van Rossum, explicitly rejected TCO. His main reasons: it would destroy useful stack traces, programmers would come to rely on an optimization that other Python implementations might not provide, and he doesn't view recursion as the natural way to loop in Python (see his quote later on this page).
Java
The JVM doesn't support TCO at the bytecode level. While the JIT might optimize some tail calls, it's not guaranteed or portable. Java's design favors loops and the JVM's security model relies on predictable stack behavior.
C# / .NET
The .NET CLR's intermediate language has a "tail." prefix for tail calls, but the C# compiler doesn't emit it reliably. F# (also on .NET) does use it and supports TCO well.
Go
Go explicitly doesn't support TCO. The language design emphasizes simplicity and predictable stack usage. Use iteration for deep processing.
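A minimal sketch of the idiomatic approach (our own example): write the loop directly rather than relying on the recursive form.

```go
package main

import "fmt"

// Recursive version: Go performs no TCO, so each call adds a frame.
// (Goroutine stacks grow on demand, but deep recursion still costs
// memory and is not idiomatic.)
func factorialRec(n, acc uint64) uint64 {
	if n <= 1 {
		return acc
	}
	return factorialRec(n-1, n*acc)
}

// Idiomatic Go: a plain loop with constant stack usage.
func factorialIter(n uint64) uint64 {
	acc := uint64(1)
	for ; n > 1; n-- {
		acc *= n
	}
	return acc
}

func main() {
	fmt.Println(factorialRec(20, 1) == factorialIter(20)) // true
	fmt.Println(factorialIter(20))
}
```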
Ruby
Ruby doesn't have standard TCO. Some implementations (like Rubinius) have experimented with it, and MRI (the main implementation) doesn't enable it by default.
```python
import sys

# Python's default recursion limit
print(f"Default recursion limit: {sys.getrecursionlimit()}")  # Usually 1000

# Tail-recursive factorial (but TCO won't be applied!)
def factorial_tail(n, acc=1):
    if n <= 1:
        return acc
    return factorial_tail(n - 1, n * acc)  # Still grows the stack!

# This WILL crash for large n
try:
    result = factorial_tail(10000)  # RecursionError!
except RecursionError as e:
    print(f"RecursionError: {e}")

# You can increase the limit, but it's a workaround, not a solution
sys.setrecursionlimit(50000)  # Dangerous! May crash Python itself.

# THE PYTHONIC SOLUTION: Use iteration
def factorial_iterative(n):
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc

print(factorial_iterative(10000))  # Works fine!

# If you MUST use a recursive style, use a trampoline:
def trampoline(fn):
    def trampolined(*args, **kwargs):
        result = fn(*args, **kwargs)
        while callable(result):
            result = result()
        return result
    return trampolined

def factorial_trampolined(n, acc=1):
    if n <= 1:
        return acc
    return lambda: factorial_trampolined(n - 1, n * acc)

factorial = trampoline(factorial_trampolined)
print(factorial(10000))  # Works!

# Or use generators for lazy recursive-like patterns
def countdown(n):
    """Generator-based 'recursion' without stack growth"""
    while n > 0:
        yield n
        n -= 1

for num in countdown(1000000):  # Works for any size!
    pass  # Process each number
```
```java
public class TailRecursionDemo {

    // Tail-recursive factorial (but JVM won't optimize it!)
    public static long factorialTail(long n, long acc) {
        if (n <= 1) return acc;
        return factorialTail(n - 1, n * acc); // Stack grows!
    }

    public static void main(String[] args) {
        try {
            // This will throw StackOverflowError for large n
            System.out.println(factorialTail(100000, 1));
        } catch (StackOverflowError e) {
            System.out.println("Stack overflow! TCO not supported.");
        }
    }

    // THE JAVA SOLUTION: Use iteration
    public static long factorialIterative(long n) {
        long acc = 1;
        while (n > 1) {
            acc *= n;
            n--;
        }
        return acc;
    }

    // Or use Stream API for functional style
    public static long factorialStream(long n) {
        return java.util.stream.LongStream
            .rangeClosed(1, n)
            .reduce(1, (a, b) -> a * b);
    }

    // Minimal tree node type (assumed here so the example compiles)
    static class TreeNode {
        int value;
        TreeNode left, right;
    }

    // For recursive algorithms, use explicit stack
    public static int treeSum(TreeNode root) {
        if (root == null) return 0;
        java.util.Stack<TreeNode> stack = new java.util.Stack<>();
        stack.push(root);
        int sum = 0;
        while (!stack.isEmpty()) {
            TreeNode node = stack.pop();
            sum += node.value;
            if (node.right != null) stack.push(node.right);
            if (node.left != null) stack.push(node.left);
        }
        return sum; // No recursion, no stack overflow!
    }
}

// Note: Project Loom (virtual threads) doesn't change the TCO situation
// Kotlin's tailrec works because it compiles to bytecode loops, not calls
```

Languages that reject TCO often prioritize debugging, simplicity, or security over the elegance of tail recursion. This is a valid trade-off. In these languages, embrace iteration—it's idiomatic, efficient, and what the language designers intended.
Regardless of your language's TCO support, here are practical strategies for writing efficient recursive algorithms:
In Languages With Mandatory TCO:
• Use tail recursion freely; it is the idiomatic looping construct and runs in constant stack space at any depth.
• Structure recursive functions with accumulator parameters so the recursive call sits in tail position.
In Languages With Optional TCO:
• Prefer the language's explicit mechanism when one exists (@tailrec in Scala, tailrec in Kotlin, recur in Clojure) so the compiler verifies the optimization.
• Where no such mechanism exists (C/C++, JavaScript outside Safari, Rust), assume TCO will not happen: keep recursion depths modest or fall back to iteration.
In Languages Without TCO:
• Use iteration (or the language's built-in higher-order functions) for deep or unbounded processing.
• Keep recursion for naturally shallow problems such as balanced trees and divide-and-conquer algorithms.
• Reach for a trampoline or an explicit stack when you want a recursive style at arbitrary depth, as shown below.
```python
# The trampoline is a general technique that works in ANY language
# It converts stack space into heap space (thunk allocations)

from typing import Callable, TypeVar, Union

T = TypeVar('T')
Thunk = Callable[[], Union[T, 'Thunk']]

def trampoline(initial_thunk: Union[T, Thunk]) -> T:
    """
    Execute thunks iteratively until we get a non-callable result.

    This turns tail recursion into iteration by:
    1. Returning a thunk (lambda) instead of making a recursive call
    2. Letting the trampoline call thunks in a loop
    """
    result = initial_thunk
    while callable(result):
        result = result()
    return result

# Example: Tail-recursive sum rewritten for trampoline
def sum_to_n_thunked(n: int, acc: int = 0):
    """Returns either the result OR a thunk to continue."""
    if n <= 0:
        return acc  # Base case: return actual result
    # Instead of: return sum_to_n_thunked(n - 1, acc + n)
    # Return a thunk:
    return lambda: sum_to_n_thunked(n - 1, acc + n)

def sum_to_n(n: int) -> int:
    """Public interface using trampoline."""
    return trampoline(sum_to_n_thunked(n))

# Test with arbitrarily large n
print(sum_to_n(1000000))  # Works in Python!

# More complex example: Mutual recursion with trampolines
def is_even_thunked(n: int):
    if n == 0:
        return True
    return lambda: is_odd_thunked(n - 1)

def is_odd_thunked(n: int):
    if n == 0:
        return False
    return lambda: is_even_thunked(n - 1)

def is_even(n: int) -> bool:
    return trampoline(is_even_thunked(n))

print(is_even(1000000))  # True - works for any n!

# A decorator-based approach for cleaner syntax
import functools

def trampolined(fn):
    """Decorator that automatically trampolines a function."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        while callable(result):
            result = result()
        return result
    return wrapper

# Usage:
# 1. Write the thunked version
def factorial_impl(n, acc=1):
    if n <= 1:
        return acc
    return lambda: factorial_impl(n - 1, n * acc)

# 2. Apply decorator
factorial = trampolined(factorial_impl)

print(factorial(10000))  # Works!
```

| Language | TCO Support | Recommendation |
|---|---|---|
| Scheme/Racket | ✅ Mandatory | Use tail recursion freely |
| Haskell | ✅ Mandatory | Use tail recursion; beware lazy thunks |
| Erlang/Elixir | ✅ Mandatory | Use tail recursion; idiomatic for message loops |
| Scala | ⚠️ @tailrec | Use @tailrec annotation for guaranteed loop compilation |
| Kotlin | ⚠️ tailrec | Use tailrec modifier for guaranteed optimization |
| Clojure | ⚠️ recur | Use loop/recur for explicit tail recursion |
| JavaScript | ⚠️ Safari only | Use trampolines; don't rely on TCO |
| C/C++ | ⚠️ -O2+ | May work with optimizations; don't rely on it |
| Rust | ⚠️ LLVM opt | May work; use iteration for safety |
| Python | ❌ None | Use iteration or trampolines |
| Java | ❌ None | Use iteration or explicit stacks |
| Go | ❌ None | Use iteration; designed for loops |
| C# | ❌ Unreliable | Use iteration; F# has better TCO |
| Ruby | ❌ None | Use iteration or Enumerator |
The variation in TCO support isn't arbitrary—it reflects genuine trade-offs in language design. Understanding these helps you appreciate your language's choices:
Arguments FOR Mandatory TCO:
Expressiveness: Recursion is mathematically natural for many problems. TCO makes it practical.
Functional Programming: Pure functional languages use recursion instead of loops. Without TCO, this is impractical.
Predictable Performance: Guaranteed TCO means programmers can rely on O(1) space for tail-recursive code.
State Machines: Long-running state machines (servers, event loops) naturally express as recursive calls.
Arguments AGAINST Mandatory TCO:
Debugging: Stack traces are invaluable for debugging. TCO eliminates frames, making traces less useful.
Security: Some security tools and mechanisms rely on inspecting the call stack. TCO can interfere.
Implementation Complexity: TCO complicates VM/runtime implementation, especially with JIT compilation.
Unexpected Behavior: If TCO only sometimes occurs, developers may be surprised when code fails.
Cultural Fit: Languages emphasizing iteration (Python, Go) see no need for TCO.
"I don't believe in recursion as the basis of all programming. This is a fundamental belief of certain computer scientists, especially those who love Scheme...but I'm not one of them." - Guido van Rossum, Python creator
This captures the philosophical divide: some see recursion as fundamental, others see it as one tool among many.
```text
WITHOUT TCO - Full stack trace (helpful for debugging):
========================================================

Traceback (most recent call last):
  File "app.py", line 100, in main
    process_all_items(items)
  File "app.py", line 50, in process_all_items
    return process_items_recursive(items, [])
  File "app.py", line 55, in process_items_recursive
    return process_items_recursive(rest, acc + [processed])
  File "app.py", line 55, in process_items_recursive
    return process_items_recursive(rest, acc + [processed])
  ... (repeat 47 more times) ...
  File "app.py", line 54, in process_items_recursive
    processed = transform(first)   # <-- Error occurred here!
  File "app.py", line 30, in transform
    return item["value"] / item["divisor"]
ZeroDivisionError: division by zero

You can see EXACTLY how deep you are and the path to the error!


WITH TCO - Collapsed stack trace (less helpful):
=================================================

Traceback (most recent call last):
  File "app.py", line 100, in main
    process_all_items(items)
  File "app.py", line 54, in process_items_recursive
    processed = transform(first)   # Which iteration??
  File "app.py", line 30, in transform
    return item["value"] / item["divisor"]
ZeroDivisionError: division by zero

Which of the 50 iterations caused this? How deep were we?
The recursive frames are gone - optimized away!

MITIGATION STRATEGIES:
- Add explicit context to exceptions (include index, state)
- Use logging at recursive entry points
- Store "breadcrumbs" in a data structure
- Profile/trace tools that capture call history independently
```

We've surveyed the landscape of tail call optimization across programming languages; the reference table above consolidates the key recommendations for each.
Module Complete:
Congratulations! You've completed the Tail Recursion & Optimization module. You now understand what tail calls and TCO are, how support differs between languages that mandate, merely permit, or lack the optimization, why language designers made those choices, and which strategies (iteration, trampolines, explicit stacks) to reach for when TCO isn't available.
This knowledge empowers you to write recursive code intelligently—choosing the right approach for your language and problem, understanding the performance implications, and avoiding stack overflow pitfalls.
You now have comprehensive knowledge of tail recursion and its optimization. Whether you're working in Scheme where tail recursion is fundamental, Scala where @tailrec gives you guarantees, or Python where you'll use iteration instead—you understand the landscape and can make informed decisions.