Hyperband's power lies in efficient multi-fidelity evaluation, but it samples configurations uniformly at random. This ignores valuable information from completed evaluations. A configuration that performed well might suggest nearby configurations are also promising—yet Hyperband never exploits this structure.
BOHB (Bayesian Optimization and Hyperband), introduced by Falkner et al. (2018), combines Bayesian optimization's intelligent sampling with Hyperband's efficient evaluation. Rather than sampling randomly, BOHB builds a probabilistic model of the objective function and samples configurations likely to perform well.
This combination achieves the best of both worlds: the sample efficiency of Bayesian optimization and the computational efficiency of early stopping.
By the end of this page, you will understand:

- How BOHB combines Bayesian optimization with Hyperband
- Tree-structured Parzen Estimators (TPE) for configuration sampling
- Multi-fidelity surrogate modeling strategies
- A complete BOHB implementation
- When BOHB outperforms vanilla Hyperband
BOHB modifies Hyperband at a single point: configuration sampling. Instead of sampling uniformly from the search space, BOHB uses a Tree-structured Parzen Estimator (TPE) to sample configurations likely to perform well based on previous observations.
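Conceptually, the change is a single substitution in Hyperband's bracket loop. A minimal sketch of the idea (all names here are illustrative, not BOHB's actual API):

```python
import random

def random_config():
    """Uniform draw from a toy 1-D search space (illustrative)."""
    return {"learning_rate": 10 ** random.uniform(-5, -1)}

class ToyTPE:
    """Stand-in for a TPE model fitted on past observations (illustrative)."""
    def sample(self):
        # Concentrates samples where past results were good (here: lr near 1e-3)
        return {"learning_rate": 10 ** random.gauss(-3, 0.3)}

def get_configurations(n, sampler=None):
    """Draw n configurations for one Hyperband bracket."""
    if sampler is None:
        # Vanilla Hyperband: uniform random sampling
        return [random_config() for _ in range(n)]
    # BOHB: model-guided sampling; everything downstream is unchanged
    return [sampler.sample() for _ in range(n)]

configs = get_configurations(5, sampler=ToyTPE())
```

Everything else in Hyperband — bracket geometry, budgets, successive halving — stays exactly as before.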
Algorithm Structure:
| Aspect | Hyperband | BOHB |
|---|---|---|
| Configuration sampling | Uniform random | TPE-guided |
| Model building | None | Kernel density estimation |
| Observations used | None | All completed evaluations |
| Overhead per sample | O(1) | O(n log n) for n observations |
| Best for | Unknown structure | Exploitable structure |
The TPE Sampling Strategy:
TPE models the configuration space by fitting two density estimators over the observed configurations:

- $\ell(\lambda)$: the density of configurations in the "good" group (the best-performing observations)
- $g(\lambda)$: the density of configurations in the "bad" group (all remaining observations)

New configurations are sampled to maximize the ratio $\ell(\lambda) / g(\lambda)$, which approximates the Expected Improvement acquisition function.
The "good" threshold is typically the top 15-25% of observed performances, adapting as more observations accumulate.
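A minimal sketch of the split-and-ratio idea on a toy 1-D problem, using `scipy.stats.gaussian_kde` in place of the truncated Gaussian mixtures BOHB actually uses (data and names here are illustrative):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Toy problem: minimize loss over one hyperparameter; optimum near 0.3.
lam = rng.uniform(0.0, 1.0, size=40)
loss = (lam - 0.3) ** 2 + rng.normal(0.0, 0.01, size=40)

# Split at the gamma = 0.25 quantile: best 25% -> "good", rest -> "bad".
threshold = np.quantile(loss, 0.25)
good, bad = lam[loss <= threshold], lam[loss > threshold]

# One KDE per group: l over good configurations, g over bad ones.
l_density = gaussian_kde(good)
g_density = gaussian_kde(bad)

# Draw candidates from l and keep the one maximizing l/g.
candidates = l_density.resample(24, seed=1).ravel()
ratios = l_density(candidates) / (g_density(candidates) + 1e-12)
best = candidates[np.argmax(ratios)]
print(f"proposed config: {best:.3f}")  # should land near the optimum at 0.3
```

Sampling candidates from $\ell$ and ranking by $\ell/g$ is exactly how the full implementation below proposes configurations; the truncated mixtures simply keep the densities inside the parameter bounds.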
TPE is a model-based optimization technique that differs fundamentally from Gaussian Process-based Bayesian optimization. Rather than modeling $p(y \mid \lambda)$ (performance given configuration), TPE models $p(\lambda \mid y)$ (configuration given performance).
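This flipped modeling choice is what makes the ratio criterion meaningful. Following the derivation in Bergstra et al. (2011), with the threshold $y^*$ set at the $\gamma$-quantile of observed performances, TPE posits

$$
p(\lambda \mid y) =
\begin{cases}
\ell(\lambda) & \text{if } y < y^* \\
g(\lambda) & \text{if } y \ge y^*
\end{cases}
$$

Substituting into the Expected Improvement integral via Bayes' rule, with $p(\lambda) = \gamma\,\ell(\lambda) + (1-\gamma)\,g(\lambda)$, gives

$$
\mathrm{EI}_{y^*}(\lambda) = \int_{-\infty}^{y^*} (y^* - y)\, p(y \mid \lambda)\, dy
\;\propto\; \left(\gamma + \frac{g(\lambda)}{\ell(\lambda)}\,(1-\gamma)\right)^{-1},
$$

so Expected Improvement is monotone in $\ell(\lambda)/g(\lambda)$: maximizing the ratio maximizes EI without ever fitting a model of $p(y \mid \lambda)$ directly.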
Density Estimation:
Given $n$ observations $\{(\lambda_i, y_i)\}_{i=1}^n$, TPE:

1. Picks a threshold $y^*$ at the $\gamma$-quantile of the observed performances
2. Fits $\ell(\lambda)$ to the configurations with $y_i < y^*$ and $g(\lambda)$ to the rest

For continuous hyperparameters, densities are modeled as truncated Gaussian mixtures. For discrete/categorical hyperparameters, densities use categorical distributions with smoothing.
```python
import numpy as np
from scipy.stats import truncnorm
from typing import List, Dict, Tuple, Any


class TPESampler:
    """
    Tree-structured Parzen Estimator for configuration sampling.
    """

    def __init__(
        self,
        search_space: Dict[str, Dict],
        gamma: float = 0.25,            # Top percentile for "good"
        n_candidates: int = 24,         # Candidates to evaluate
        bandwidth_factor: float = 1.0,
    ):
        self.search_space = search_space
        self.gamma = gamma
        self.n_candidates = n_candidates
        self.bandwidth_factor = bandwidth_factor
        self.observations: List[Tuple[Dict, float]] = []

    def _split_observations(self) -> Tuple[List[Dict], List[Dict]]:
        """Split observations into good (l) and bad (g) groups."""
        if len(self.observations) < 2:
            return [], []
        sorted_obs = sorted(self.observations, key=lambda x: x[1])
        n_good = max(1, int(len(sorted_obs) * self.gamma))
        good = [config for config, _ in sorted_obs[:n_good]]
        bad = [config for config, _ in sorted_obs[n_good:]]
        return good, bad

    def _estimate_density_continuous(
        self, values: List[float], param_config: Dict
    ) -> callable:
        """Fit truncated Gaussian KDE to continuous values."""
        low, high = param_config["low"], param_config["high"]
        if len(values) == 0:
            # Uniform prior
            return lambda x: 1.0 / (high - low)
        values = np.array(values)
        bandwidth = self.bandwidth_factor * np.std(values) + 1e-8

        def density(x):
            if x < low or x > high:
                return 0.0
            # Sum of truncated Gaussians centered at each observation
            pdf_sum = 0.0
            for v in values:
                a, b = (low - v) / bandwidth, (high - v) / bandwidth
                pdf_sum += truncnorm.pdf(x, a, b, loc=v, scale=bandwidth)
            return pdf_sum / len(values)

        return density

    def _sample_from_density(
        self, density: callable, param_config: Dict, n_samples: int
    ) -> np.ndarray:
        """Sample from density using rejection sampling."""
        low, high = param_config["low"], param_config["high"]
        samples = []
        # Simple rejection sampling
        max_attempts = n_samples * 100
        attempts = 0
        while len(samples) < n_samples and attempts < max_attempts:
            x = np.random.uniform(low, high)
            acceptance = density(x) / (2.0 / (high - low))  # Approximate
            if np.random.random() < min(1.0, acceptance):
                samples.append(x)
            attempts += 1
        # Fall back to uniform if rejection sampling fails
        while len(samples) < n_samples:
            samples.append(np.random.uniform(low, high))
        return np.array(samples)

    def sample(self) -> Dict[str, Any]:
        """Sample a new configuration using TPE."""
        if len(self.observations) < 10:
            # Not enough data, sample uniformly
            return self._sample_uniform()
        good, bad = self._split_observations()
        if len(good) < 2 or len(bad) < 2:
            return self._sample_uniform()

        config = {}
        for param_name, param_config in self.search_space.items():
            if param_config["type"] == "continuous":
                good_vals = [c[param_name] for c in good]
                bad_vals = [c[param_name] for c in bad]
                l_density = self._estimate_density_continuous(good_vals, param_config)
                g_density = self._estimate_density_continuous(bad_vals, param_config)
                # Sample candidates from l, select best l/g ratio
                candidates = self._sample_from_density(
                    l_density, param_config, self.n_candidates
                )
                best_candidate = candidates[0]
                best_ratio = -float('inf')
                for c in candidates:
                    l_val = l_density(c) + 1e-10
                    g_val = g_density(c) + 1e-10
                    ratio = l_val / g_val
                    if ratio > best_ratio:
                        best_ratio = ratio
                        best_candidate = c
                config[param_name] = best_candidate

            elif param_config["type"] == "categorical":
                choices = param_config["choices"]
                good_counts = {c: 1 for c in choices}  # Laplace smoothing
                bad_counts = {c: 1 for c in choices}
                for c in good:
                    good_counts[c[param_name]] += 1
                for c in bad:
                    bad_counts[c[param_name]] += 1
                # Compute l/g ratio for each choice
                ratios = {}
                for choice in choices:
                    l_prob = good_counts[choice] / sum(good_counts.values())
                    g_prob = bad_counts[choice] / sum(bad_counts.values())
                    ratios[choice] = l_prob / (g_prob + 1e-10)
                # Sample proportionally to ratio
                total_ratio = sum(ratios.values())
                probs = [ratios[c] / total_ratio for c in choices]
                config[param_name] = np.random.choice(choices, p=probs)

        return config

    def _sample_uniform(self) -> Dict[str, Any]:
        """Sample uniformly from search space."""
        config = {}
        for param_name, param_config in self.search_space.items():
            if param_config["type"] == "continuous":
                if param_config.get("log", False):
                    log_val = np.random.uniform(
                        np.log(param_config["low"]), np.log(param_config["high"])
                    )
                    config[param_name] = np.exp(log_val)
                else:
                    config[param_name] = np.random.uniform(
                        param_config["low"], param_config["high"]
                    )
            elif param_config["type"] == "categorical":
                config[param_name] = np.random.choice(param_config["choices"])
        return config

    def update(self, config: Dict[str, Any], performance: float):
        """Record observation for future sampling."""
        self.observations.append((config, performance))
```

BOHB evaluates configurations at multiple fidelity levels. A key design question is: which observations should inform the TPE model?
Strategies:

- **Highest fidelity only:** fit the model only on full-budget evaluations. The most reliable signal, but very few observations early on.
- **All fidelities pooled:** fit one model on every observation regardless of budget. Many data points, but low-fidelity results can mislead the model.
- **Per-fidelity models:** fit a separate model for each budget level.

BOHB's Approach:

The original BOHB uses a pragmatic strategy:

- Maintain a separate model for each budget level
- Build a model for a budget only once it has a minimum number of observations (on the order of the search-space dimensionality)
- When sampling, use the model of the highest budget that has enough observations

This balances the need for relevant observations (same fidelity) with sample efficiency (cross-bracket sharing).
When low-fidelity and high-fidelity performance are highly correlated, using all fidelities improves sample efficiency. When correlation is weak, mixing fidelities can mislead the model. BOHB's per-bracket approach provides a reasonable middle ground.
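One practical way to check whether fidelities are safe to mix is to measure the rank correlation between low- and high-budget scores for configurations evaluated at both levels. A sketch with made-up numbers for illustration:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical validation losses for 12 configs at a low budget (9 epochs)
# and a high budget (81 epochs); all values are illustrative.
low_fid = np.array([0.42, 0.38, 0.55, 0.31, 0.47, 0.29,
                    0.60, 0.35, 0.44, 0.33, 0.51, 0.40])
high_fid = np.array([0.30, 0.27, 0.48, 0.21, 0.39, 0.19,
                     0.55, 0.24, 0.35, 0.22, 0.45, 0.31])

# High rank correlation suggests low-fidelity observations can safely
# inform the model; weak correlation argues for same-fidelity models only.
rho, _ = spearmanr(low_fid, high_fid)
print(f"rank correlation: {rho:.2f}")
```

Rank correlation is preferable to Pearson correlation here because successive halving only cares about the ordering of configurations, not their absolute scores.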
```python
"""BOHB: Bayesian Optimization and Hyperband"""
import numpy as np
from typing import Dict, List, Callable, Any, Tuple
from dataclasses import dataclass, field
import logging

logger = logging.getLogger(__name__)


@dataclass
class BOHBConfig:
    config_id: str
    bracket: int
    hyperparameters: Dict[str, Any]
    performances: Dict[int, float] = field(default_factory=dict)
    status: str = "active"


class BOHB:
    """
    BOHB: Combines Hyperband scheduling with TPE sampling.
    """

    def __init__(
        self,
        search_space: Dict[str, Dict],
        max_budget: float = 81,
        eta: int = 3,
        random_fraction: float = 0.33,
        evaluator: Callable[[Dict, float], float] = None,
    ):
        self.search_space = search_space
        self.max_budget = max_budget
        self.eta = eta
        self.random_fraction = random_fraction
        self.evaluator = evaluator
        self.s_max = int(np.log(max_budget) / np.log(eta))
        # Per-bracket TPE samplers
        from tpe_sampler import TPESampler
        self.samplers = {
            s: TPESampler(search_space) for s in range(self.s_max + 1)
        }
        self.all_configs: List[BOHBConfig] = []
        self.config_counter = 0

    def _sample_config(self, bracket: int) -> Dict[str, Any]:
        """Sample configuration using TPE or random."""
        if np.random.random() < self.random_fraction:
            return self._sample_random()
        return self.samplers[bracket].sample()

    def _sample_random(self) -> Dict[str, Any]:
        """Uniform random sampling."""
        config = {}
        for name, spec in self.search_space.items():
            if spec["type"] == "continuous":
                if spec.get("log", False):
                    val = np.exp(np.random.uniform(
                        np.log(spec["low"]), np.log(spec["high"])
                    ))
                else:
                    val = np.random.uniform(spec["low"], spec["high"])
                config[name] = val
            elif spec["type"] == "categorical":
                config[name] = np.random.choice(spec["choices"])
        return config

    def run_bracket(self, s: int) -> List[BOHBConfig]:
        """Run one Successive Halving bracket with TPE sampling."""
        n = int(np.ceil((self.s_max + 1) / (s + 1) * (self.eta ** s)))
        r = self.max_budget * (self.eta ** (-s))
        num_rounds = s + 1
        logger.info(f"Bracket s={s}: n={n}, r={r:.1f}")

        # Sample configurations using TPE
        configs = []
        for i in range(n):
            hp = self._sample_config(s)
            config = BOHBConfig(
                config_id=f"b{s}_c{self.config_counter}",
                bracket=s,
                hyperparameters=hp
            )
            configs.append(config)
            self.all_configs.append(config)
            self.config_counter += 1

        survivors = list(range(n))
        for round_idx in range(num_rounds):
            budget = r * (self.eta ** round_idx)
            budget = min(budget, self.max_budget)
            n_keep = max(1, len(survivors) // self.eta)

            # Evaluate survivors
            results = []
            for idx in survivors:
                config = configs[idx]
                if self.evaluator:
                    perf = self.evaluator(config.hyperparameters, budget)
                else:
                    perf = self._simulate(config.hyperparameters, budget)
                config.performances[round_idx] = perf
                results.append((idx, perf))
                # Update TPE sampler
                self.samplers[s].update(config.hyperparameters, perf)

            # Keep top performers
            results.sort(key=lambda x: x[1])
            survivors = [idx for idx, _ in results[:n_keep]]
            for idx, _ in results[n_keep:]:
                configs[idx].status = "eliminated"

        for idx in survivors:
            configs[idx].status = "completed"
        return configs

    def run(self, n_iterations: int = 1) -> BOHBConfig:
        """Run BOHB for specified iterations."""
        for iteration in range(n_iterations):
            logger.info(f"=== BOHB Iteration {iteration + 1} ===")
            for s in range(self.s_max, -1, -1):
                self.run_bracket(s)

        # Find best
        completed = [c for c in self.all_configs if c.status == "completed"]
        best = min(completed, key=lambda c: list(c.performances.values())[-1])
        logger.info(f"Best: {best.config_id}, perf={list(best.performances.values())[-1]:.4f}")
        return best

    def _simulate(self, hp: Dict, budget: float) -> float:
        """Simulated evaluator."""
        lr = hp.get("learning_rate", 0.001)
        quality = -((np.log10(lr) + 3) ** 2) / 4
        progress = 1 - np.exp(-budget / 30)
        return 0.5 + 0.2 * quality * (1 - 0.8 * progress) + np.random.normal(0, 0.01)
```

Start with BOHB as the default for neural network hyperparameter optimization, where learning rate, weight decay, and architecture parameters form a structured, continuous space.
Fall back to vanilla Hyperband if BOHB shows no improvement after ~50 configurations or if the overhead becomes problematic.
Looking Ahead:
The final page covers Budget Allocation—strategies for distributing computational resources across hyperparameter optimization, including how to balance parallel workers, handle heterogeneous hardware, and manage optimization campaigns over time.
You now understand how BOHB combines Bayesian optimization with Hyperband, including TPE sampling, multi-fidelity modeling, and when to use BOHB over vanilla Hyperband.