Imagine explaining model performance to a business stakeholder: "the RMSE is 23.7" invites a blank stare, but "our predictions are off by about 12% on average" lands immediately. Percentage-based metrics express errors relative to the actual values, providing intuitive, scale-independent measures that stakeholders immediately understand.
MAPE (Mean Absolute Percentage Error) and SMAPE (Symmetric Mean Absolute Percentage Error) are the most widely used percentage error metrics, especially in forecasting and business contexts. But their apparent simplicity hides important subtleties.
By the end of this page, you will understand MAPE's mathematical definition and interpretation, why MAPE is undefined or infinite for zero values, asymmetry in MAPE and why it matters, SMAPE as a symmetric alternative, limitations of both metrics, and when to use percentage-based metrics vs. absolute metrics.
MAPE measures the average absolute error as a percentage of the actual values:
$$\text{MAPE} = \frac{100\%}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right|$$
Or equivalently:
$$\text{MAPE} = \frac{1}{n} \sum_{i=1}^{n} \left| \frac{y_i - \hat{y}_i}{y_i} \right| \times 100\%$$
Components: $y_i$ is the actual value, $\hat{y}_i$ is the predicted value, and $n$ is the number of samples.
```python
import numpy as np

def mape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """
    Compute Mean Absolute Percentage Error.

    Parameters
    ----------
    y_true : np.ndarray
        Array of true values (must not contain zeros)
    y_pred : np.ndarray
        Array of predicted values

    Returns
    -------
    float
        MAPE as a percentage (0-100+)
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)

    # Check for zeros in y_true
    if np.any(y_true == 0):
        raise ValueError("MAPE is undefined when y_true contains zeros")

    # Calculate percentage errors
    percentage_errors = np.abs((y_true - y_pred) / y_true)

    # Mean and convert to percentage
    return np.mean(percentage_errors) * 100

# Example: Sales forecasting
y_true = np.array([100, 200, 150, 300, 250])
y_pred = np.array([110, 190, 160, 280, 260])

mape_value = mape(y_true, y_pred)

print("=== MAPE Calculation ===")
print(f"Actual:    {y_true}")
print(f"Predicted: {y_pred}")
print(f"Errors:    {y_true - y_pred}")
print("\nPercentage errors:")
for i in range(len(y_true)):
    pct = abs((y_true[i] - y_pred[i]) / y_true[i]) * 100
    print(f"  |({y_true[i]} - {y_pred[i]}) / {y_true[i]}| = {pct:.1f}%")

print(f"\nMAPE: {mape_value:.2f}%")
print(f"\n→ On average, predictions are off by {mape_value:.1f}% of actual values")
```

| MAPE Range | Interpretation | Typical Context |
|---|---|---|
| < 10% | Highly accurate | Mature, stable processes |
| 10-20% | Good | Standard forecasting |
| 20-50% | Reasonable | Volatile or early forecasts |
| > 50% | Poor | High uncertainty, may need different approach |
MAPE is extremely popular in demand forecasting, supply chain management, and financial planning. Its percentage format makes it easy to communicate across functions: 'Our sales forecast is accurate to within 15%' is immediately meaningful to executives.
MAPE has a fundamental mathematical problem: it's undefined when any actual value is zero.
$$\left| \frac{y_i - \hat{y}_i}{y_i} \right| \quad \text{is undefined when } y_i = 0 \text{ (division by zero)}$$
This isn't a theoretical edge case; it's a real problem in domains where zeros occur naturally, such as intermittent product demand, new-product launches, or count data.
```python
import numpy as np
from typing import Optional

def mape_with_zero_handling(
    y_true: np.ndarray,
    y_pred: np.ndarray,
    zero_strategy: str = 'exclude'
) -> Optional[float]:
    """
    MAPE with different strategies for handling zeros.

    Parameters
    ----------
    zero_strategy : str
        'exclude'  - Skip samples where y_true = 0
        'replace'  - Replace zeros with small value
        'weighted' - Use weighted MAPE (divide by sum of |y|)
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)

    if zero_strategy == 'exclude':
        # Simply exclude zero values
        mask = y_true != 0
        if not np.any(mask):
            return None
        y_true = y_true[mask]
        y_pred = y_pred[mask]
        return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

    elif zero_strategy == 'replace':
        # Replace zeros with small value (problematic!)
        epsilon = 1e-10
        y_true_safe = np.where(y_true == 0, epsilon, y_true)
        return np.mean(np.abs((y_true - y_pred) / y_true_safe)) * 100

    elif zero_strategy == 'weighted':
        # Weighted MAPE: sum(|error|) / sum(|actual|)
        total_error = np.sum(np.abs(y_true - y_pred))
        total_actual = np.sum(np.abs(y_true))
        if total_actual == 0:
            return None
        return (total_error / total_actual) * 100

    else:
        raise ValueError(f"Unknown strategy: {zero_strategy}")

# Example with zeros
y_true = np.array([100, 0, 50, 200, 0, 150])
y_pred = np.array([110, 10, 55, 180, 5, 160])

print("=== Zero Handling Strategies ===")
print(f"Actual:    {y_true}")
print(f"Predicted: {y_pred}")
print()

for strategy in ['exclude', 'replace', 'weighted']:
    result = mape_with_zero_handling(y_true, y_pred, strategy)
    if result is not None:
        print(f"  {strategy:12s}: MAPE = {result:.2f}%")
    else:
        print(f"  {strategy:12s}: Cannot compute")

print("\n*** Warning ***")
print("The 'replace' strategy can give misleading results!")
print("A zero actual with prediction of 10 gives:")
print(f"  |(0 - 10) / 1e-10| = {abs(10 / 1e-10):.0e} (astronomical!)")
```

Before using MAPE, check for zeros in your target variable. If zeros are common or meaningful, MAPE may not be the right metric. Consider SMAPE, MAE, or domain-specific alternatives.
MAPE has a subtle but important bias: it penalizes over-predictions more heavily than under-predictions of the same relative size.
Consider two predictions with the same absolute error:

- Actual = 100, Predicted = 150 (over-prediction by 50): percentage error = 50%
- Actual = 100, Predicted = 50 (under-prediction by 50): percentage error = 50%

Same absolute error, same MAPE. But what if we flip the perspective and hold the relative error fixed instead?
```python
import numpy as np

def demonstrate_mape_asymmetry():
    """
    Show how MAPE is asymmetric in a way that favors under-prediction.
    """
    print("=== MAPE Asymmetry Demonstration ===\n")

    # Case 1: Over-predict by a factor of 2
    mape_over = abs((100 - 200) / 100) * 100

    # Case 2: Under-predict by a factor of 2
    mape_under = abs((200 - 100) / 200) * 100

    print("Scenario: Predictions are off by a factor of 2")
    print("\n1. Over-prediction (predict 200 when actual is 100):")
    print("   Error = |100 - 200| = 100")
    print(f"   MAPE = 100/100 = {mape_over:.0f}%")
    print("\n2. Under-prediction (predict 100 when actual is 200):")
    print("   Error = |200 - 100| = 100")
    print(f"   MAPE = 100/200 = {mape_under:.0f}%")

    print("\n*** Same factor of 2 error, but: ***")
    print(f"   Over-prediction MAPE:  {mape_over:.0f}%")
    print(f"   Under-prediction MAPE: {mape_under:.0f}%")
    print("\n→ Under-predictions look 'better' in MAPE!")

    # Impact on model optimization
    print("\n=== Impact on Optimization ===")
    print("If you minimize MAPE during training, the model will")
    print("learn to systematically UNDER-predict to achieve lower MAPE.")

    # Demonstrate with multiple predictions
    actual = 100
    print(f"\nActual value: {actual}")
    print(f"{'Prediction':>12} | {'Error':>8} | {'% Error':>10}")
    print("-" * 40)
    for pred in [50, 75, 100, 125, 150, 200]:
        error = abs(actual - pred)
        pct_error = error / actual * 100
        print(f"{pred:>12} | {error:>8} | {pct_error:>9.1f}%")

demonstrate_mape_asymmetry()
```

Why Does This Happen?
MAPE divides by the actual value, not the predicted value or their average. Since the denominator is fixed at $y_i$, the two directions of error are not treated alike.
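To see why, write the prediction as a multiple of the actual value, $\hat{y}_i = c\,y_i$ with $y_i > 0$. The MAPE term reduces to

$$\left| \frac{y_i - c\,y_i}{y_i} \right| = |1 - c|,$$

which is at most 100% for any under-prediction ($0 \le c < 1$, worst case predicting zero) but grows without bound for over-predictions ($c > 1$).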
Practical Implications
A model tuned to minimize MAPE learns that under-forecasting is the cheaper mistake and will drift toward systematically low predictions, as the optimization demo above shows. This asymmetry motivated the development of SMAPE.
In supply chain forecasting, MAPE's asymmetry can cause systematic under-forecasting, leading to stockouts. If a stockout costs more than overstocking, optimizing on MAPE can hurt business outcomes despite 'good' MAPE scores.
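A small simulation can make this concrete. Every number below (the demand range, the cost ratio) is an illustrative assumption, not from the source; the point is that the forecast minimizing MAPE sits below mean demand, while the cost-minimizing forecast sits above it when stockouts are the more expensive error:

```python
import numpy as np

# Illustrative sketch: uniform demand, stockouts costlier than excess stock
rng = np.random.default_rng(0)
demand = rng.uniform(50, 150, 10_000)  # simulated demand history

STOCKOUT_COST = 3.0  # assumed cost per unit of unmet demand
HOLDING_COST = 1.0   # assumed cost per unit of excess stock

candidates = np.arange(50.0, 151.0)
mape_scores = [np.mean(np.abs(demand - f) / demand) * 100 for f in candidates]
cost_scores = [
    np.mean(np.where(demand > f,
                     (demand - f) * STOCKOUT_COST,
                     (f - demand) * HOLDING_COST))
    for f in candidates
]

best_mape = candidates[np.argmin(mape_scores)]  # ≈ 87: below mean demand
best_cost = candidates[np.argmin(cost_scores)]  # ≈ 125: above mean demand

print(f"Mean demand:           {demand.mean():.1f}")
print(f"MAPE-optimal forecast: {best_mape:.0f}")
print(f"Cost-optimal forecast: {best_cost:.0f}")
```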
SMAPE addresses MAPE's asymmetry by using the average of actual and predicted values in the denominator:
$$\text{SMAPE} = \frac{100\%}{n} \sum_{i=1}^{n} \frac{|y_i - \hat{y}_i|}{(|y_i| + |\hat{y}_i|)/2}$$
Simplified: $$\text{SMAPE} = \frac{200\%}{n} \sum_{i=1}^{n} \frac{|y_i - \hat{y}_i|}{|y_i| + |\hat{y}_i|}$$
Key difference: The denominator is $(|y| + |\hat{y}|)/2$ instead of just $|y|$.
```python
import numpy as np

def smape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """
    Compute Symmetric Mean Absolute Percentage Error.

    SMAPE = (200/n) * Σ |y - ŷ| / (|y| + |ŷ|)

    Ranges from 0% (perfect) to 200% (worst case).
    Some versions cap at 100% by dividing by 2.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)

    numerator = np.abs(y_true - y_pred)
    denominator = np.abs(y_true) + np.abs(y_pred)

    # Handle case where both are zero
    with np.errstate(divide='ignore', invalid='ignore'):
        ratio = np.where(denominator == 0, 0, numerator / denominator)

    return np.mean(ratio) * 200

def smape_capped(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """SMAPE capped at 100% (alternative formulation)."""
    return smape(y_true, y_pred) / 2

# Compare MAPE and SMAPE asymmetry
def compare_mape_smape_symmetry():
    """Show that SMAPE is symmetric while MAPE is not."""
    print("=== MAPE vs SMAPE Symmetry ===\n")

    # Over-prediction: actual=100, predicted=200
    y1_true, y1_pred = np.array([100]), np.array([200])
    # Under-prediction: actual=200, predicted=100
    y2_true, y2_pred = np.array([200]), np.array([100])

    # Calculate all metrics
    mape1 = abs(100 - 200) / 100 * 100
    mape2 = abs(200 - 100) / 200 * 100
    smape1 = smape(y1_true, y1_pred)
    smape2 = smape(y2_true, y2_pred)

    print("Scenario: Factor of 2 error in opposite directions\n")
    print("Case 1: Actual=100, Predicted=200 (over-prediction)")
    print(f"  MAPE:  {mape1:.1f}%")
    print(f"  SMAPE: {smape1:.1f}%")
    print("\nCase 2: Actual=200, Predicted=100 (under-prediction)")
    print(f"  MAPE:  {mape2:.1f}%")
    print(f"  SMAPE: {smape2:.1f}%")

    print("\n*** Key Insight ***")
    print(f"MAPE difference:  {mape1 - mape2:.1f}% (asymmetric!)")
    print(f"SMAPE difference: {smape1 - smape2:.1f}% (symmetric)")

compare_mape_smape_symmetry()

# SMAPE calculation example
print("\n" + "=" * 50)
y_true = np.array([100, 200, 150, 300, 250])
y_pred = np.array([110, 190, 160, 280, 260])

print(f"\nActual:    {y_true}")
print(f"Predicted: {y_pred}")
print(f"\nSMAPE: {smape(y_true, y_pred):.2f}%")
print(f"SMAPE (0-100 scale): {smape_capped(y_true, y_pred):.2f}%")
```

| Property | MAPE | SMAPE |
|---|---|---|
| Range | 0% to ∞ | 0% to 200% (or 0-100%) |
| Symmetric | No (favors under-prediction) | Yes |
| Zero handling | Undefined if y=0 | Defined unless both 0 |
| Interpretation | % of actual value | % of average of actual+predicted |
| Common use | Traditional forecasting | M-competitions, academic |
There are multiple formulations of SMAPE in the literature. Some range 0-100%, others 0-200%. The M3 and M4 forecasting competitions used the 0-200% version. Always clarify which formulation is being used when comparing results.
SMAPE solves MAPE's asymmetry but introduces its own issues.
Problem 1: Asymmetry Returns Near Zero
While SMAPE is symmetric for large values, it becomes asymmetric near zero. If actual is near zero but prediction is not (or vice versa), the metric behaves unexpectedly.
```python
import numpy as np

def smape(y_true, y_pred):
    """SMAPE calculation (0-200% scale)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    numerator = np.abs(y_true - y_pred)
    denominator = np.abs(y_true) + np.abs(y_pred)
    with np.errstate(divide='ignore', invalid='ignore'):
        ratio = np.where(denominator == 0, 0, numerator / denominator)
    return np.mean(ratio) * 200

def smape_near_zero_issues():
    """Show SMAPE issues near zero values."""
    print("=== SMAPE Issues Near Zero ===\n")

    # Issue 1: Near-zero actual
    print("Issue 1: Near-zero actual value")
    y_true_1 = np.array([0.001])
    y_pred_1 = np.array([10])
    smape_1 = smape(y_true_1, y_pred_1)
    print(f"  Actual: {y_true_1[0]}, Predicted: {y_pred_1[0]}")
    print(f"  SMAPE: {smape_1:.1f}%")
    print("  → SMAPE saturates near its 200% maximum whenever the actual")
    print("    is near zero, regardless of how large the error is")

    # Issue 2: Both small
    print("\nIssue 2: Small values, same relative error")
    y_true_2 = np.array([0.01])
    y_pred_2 = np.array([0.02])  # Double the actual
    smape_2 = smape(y_true_2, y_pred_2)
    print(f"  Actual: {y_true_2[0]}, Predicted: {y_pred_2[0]}")
    print(f"  SMAPE: {smape_2:.1f}%")

    # Compare with larger values
    y_true_3 = np.array([100])
    y_pred_3 = np.array([200])  # Also double
    smape_3 = smape(y_true_3, y_pred_3)
    print(f"\n  Actual: {y_true_3[0]}, Predicted: {y_pred_3[0]}")
    print(f"  SMAPE: {smape_3:.1f}%")
    print("  → Same relative error, same SMAPE (good!)")

    print("\n" + "=" * 50)
    print("Issue 3: Behavior at exact zeros")
    print("=" * 50)

    # Large prediction when actual is 0
    y_true_4 = np.array([0])
    y_pred_4 = np.array([100])
    smape_4 = smape(y_true_4, y_pred_4)
    print("\nActual=0, Predicted=100")
    print(f"SMAPE = 200 × |0-100| / (0+100) = {smape_4:.0f}%")
    print("→ Maximum possible SMAPE, correct behavior")

    # Both zero
    y_true_5 = np.array([0])
    y_pred_5 = np.array([0])
    smape_5 = smape(y_true_5, y_pred_5)
    print("\nActual=0, Predicted=0")
    print(f"SMAPE = 0/0 → defined as {smape_5:.0f}% (perfect)")

smape_near_zero_issues()
```

Problem 2: SMAPE Still Has a Directional Bias
Contrary to its name, SMAPE isn't perfectly symmetric. Hold the actual value at 100 and shift the prediction by the same absolute amount in each direction:

- Over-prediction (predict 105): term $= \frac{2 \cdot |100 - 105|}{100 + 105} = \frac{10}{205} \approx 4.9\%$
- Under-prediction (predict 95): term $= \frac{2 \cdot |100 - 95|}{100 + 95} = \frac{10}{195} \approx 5.1\%$

So $\frac{10}{205}$ vs $\frac{10}{195}$: slightly different! Because the prediction appears in the denominator, over-predicting inflates it and draws a slightly smaller penalty, so SMAPE mildly favors over-prediction.
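A two-line check of those numbers (plain Python, using the 0-200% formulation from above):

```python
actual, shift = 100.0, 5.0

# SMAPE term: 200 * |y - y_hat| / (|y| + |y_hat|)
over = 200 * shift / (actual + (actual + shift))   # predict 105
under = 200 * shift / (actual + (actual - shift))  # predict 95

print(f"Over-prediction  (105): {over:.2f}%")   # ≈ 4.88%
print(f"Under-prediction  (95): {under:.2f}%")  # ≈ 5.13%
```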
Problem 3: The 0-200% Scale Can Be Confusing
SMAPE ranges 0-200% in the standard formulation. Explaining why "30% SMAPE" is good when the scale goes to 200% can confuse stakeholders used to MAPE.
Mean Absolute Scaled Error (MASE) is often recommended over MAPE/SMAPE. It scales errors by the in-sample MAE of a naive forecast, avoiding zero and infinity issues entirely. It's especially popular in academic forecasting research.
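A minimal sketch of MASE, assuming the common one-step naive baseline (conventions for the baseline and the train/test split vary across papers):

```python
import numpy as np

def mase(y_train: np.ndarray, y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """
    Mean Absolute Scaled Error with a one-step naive baseline.

    Scales test-set MAE by the in-sample MAE of the naive forecast
    y_hat[t] = y[t-1]. MASE < 1 means beating the naive baseline.
    """
    y_train = np.asarray(y_train, dtype=float)
    naive_mae = np.mean(np.abs(np.diff(y_train)))  # in-sample naive errors
    if naive_mae == 0:
        raise ValueError("MASE undefined: training series is constant")
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))) / naive_mae)

# Example: zeros in y_true pose no problem for MASE
y_train = np.array([100.0, 120, 90, 0, 110, 105])
y_true = np.array([0.0, 95, 130])
y_pred = np.array([10.0, 100, 120])
print(f"MASE: {mase(y_train, y_true, y_pred):.3f}")  # ≈ 0.163
```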
Weighted MAPE (WMAPE) offers another alternative that handles zeros gracefully:
$$\text{WMAPE} = \frac{\sum_{i=1}^{n} |y_i - \hat{y}_i|}{\sum_{i=1}^{n} |y_i|} \times 100\%$$
Instead of averaging individual percentage errors, WMAPE computes a single ratio: the sum of absolute errors divided by the sum of absolute actual values.
When every actual value is nonzero, this is equivalent to a weighted average of the individual percentage errors, with weights proportional to the actual values.
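A quick sanity check of that equivalence on a zero-free example (the values are illustrative):

```python
import numpy as np

y_true = np.array([10.0, 100, 50, 1000])
y_pred = np.array([12.0, 90, 55, 980])

# Ratio form: total absolute error over total absolute actual
wmape_ratio = np.sum(np.abs(y_true - y_pred)) / np.sum(np.abs(y_true)) * 100

# Weighted-average form: per-sample APEs weighted by |y| / sum(|y|)
weights = np.abs(y_true) / np.sum(np.abs(y_true))
apes = np.abs((y_true - y_pred) / y_true) * 100
wmape_weighted = np.sum(weights * apes)

print(f"Ratio form:            {wmape_ratio:.4f}%")     # 3.1897%
print(f"Weighted-average form: {wmape_weighted:.4f}%")  # identical
```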
```python
import numpy as np

def wmape(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """
    Compute Weighted Mean Absolute Percentage Error.

    WMAPE = sum(|y - ŷ|) / sum(|y|) * 100%

    Handles zeros in y_true as long as sum(|y_true|) != 0
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)

    total_error = np.sum(np.abs(y_true - y_pred))
    total_actual = np.sum(np.abs(y_true))

    if total_actual == 0:
        raise ValueError("WMAPE undefined when all actual values are zero")

    return (total_error / total_actual) * 100

def compare_mape_wmape():
    """Compare MAPE and WMAPE, especially with zeros."""
    print("=== MAPE vs WMAPE ===\n")

    # Dataset with a zero
    y_true = np.array([10, 0, 100, 50, 1000])
    y_pred = np.array([12, 5, 90, 55, 980])

    print(f"Actual:    {y_true}")
    print(f"Predicted: {y_pred}")
    print(f"Errors:    {np.abs(y_true - y_pred)}")

    # MAPE (excluding zeros)
    mask = y_true != 0
    mape_val = np.mean(np.abs((y_true[mask] - y_pred[mask]) / y_true[mask])) * 100

    # WMAPE
    wmape_val = wmape(y_true, y_pred)

    print(f"\nMAPE (excluding zero): {mape_val:.2f}%")
    print(f"WMAPE:                 {wmape_val:.2f}%")

    print("\n--- Key Difference ---")
    print("MAPE:  Each sample has equal weight")
    print("WMAPE: High-value samples have more weight")

    # Show weights
    print("\nImplicit WMAPE weights (proportional to |y|):")
    weights = np.abs(y_true) / np.sum(np.abs(y_true))
    for y, w in zip(y_true, weights):
        print(f"  y={y}: weight = {w:.3f} ({w*100:.1f}%)")

compare_mape_wmape()

def wmape_business_context():
    """Show why WMAPE may be preferred in business contexts."""
    print("\n=== Business Context: Why WMAPE Makes Sense ===\n")

    # Sales forecasting across products
    products = ['A', 'B', 'C', 'D']
    actuals = np.array([1000, 50, 500, 10])  # Sales in $
    predictions = np.array([900, 55, 480, 12])
    errors = np.abs(actuals - predictions)

    print("Product | Actual | Predicted | Error | % Error")
    print("-" * 55)
    for i, p in enumerate(products):
        pct = (errors[i] / actuals[i] * 100) if actuals[i] > 0 else 0
        print(f"   {p}    | ${actuals[i]:4d} |   ${predictions[i]:4d}   | ${errors[i]:3d}  | {pct:5.1f}%")

    mape_val = np.mean(errors / actuals) * 100
    wmape_val = wmape(actuals, predictions)

    print(f"\nMAPE:  {mape_val:.1f}%")
    print(f"WMAPE: {wmape_val:.1f}%")

    print("\n*** Interpretation ***")
    print("MAPE gives equal weight to all products.")
    print("  → Product D's 20% error matters as much as A's 10%")
    print("\nWMAPE weights by revenue.")
    print("  → Product A (~64% of revenue) dominates the metric")
    print("\nWhich is 'right' depends on your business question!")

wmape_business_context()
```

Choosing between MAPE, SMAPE, WMAPE, and other metrics depends on your specific context.
| Scenario | Recommended Metric | Reason |
|---|---|---|
| No zeros in data | MAPE or SMAPE | Both work; SMAPE if symmetry matters |
| Zeros present | WMAPE or SMAPE | Handle zeros without issues |
| High-value items matter more | WMAPE | Weighted by value naturally |
| Equal importance across items | MAPE (exclude zeros) | Each observation equally weighted |
| Forecasting competition | SMAPE or MASE | Standard in M-competitions |
| Academic publishing | MASE | Robust, well-defined properties |
| Business reporting | MAPE or WMAPE | Executives understand percentages |
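If you want the table in executable form, one possible encoding is sketched below; the `suggest_percentage_metric` helper and its rules are hypothetical simplifications, not a standard API:

```python
def suggest_percentage_metric(has_zeros: bool,
                              value_weighted: bool,
                              symmetry_matters: bool) -> str:
    """Rough, illustrative encoding of the decision table above."""
    if value_weighted:
        return "WMAPE"  # high-value items should dominate the metric
    if has_zeros:
        return "SMAPE" if symmetry_matters else "WMAPE"
    return "SMAPE" if symmetry_matters else "MAPE"

# Example: zeros present, equal item importance, symmetry matters
print(suggest_percentage_metric(True, False, True))  # -> SMAPE
```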
```python
import numpy as np
from typing import Dict, Optional

def comprehensive_percentage_metrics(
    y_true: np.ndarray,
    y_pred: np.ndarray
) -> Dict[str, Optional[float]]:
    """
    Compute all common percentage-based metrics.

    Returns dict with metric names and values.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)

    results = {}

    # MAPE (excluding zeros)
    mask = y_true != 0
    if np.any(mask):
        results['MAPE'] = np.mean(np.abs((y_true[mask] - y_pred[mask]) / y_true[mask])) * 100
    else:
        results['MAPE'] = None

    # SMAPE
    numerator = np.abs(y_true - y_pred)
    denominator = np.abs(y_true) + np.abs(y_pred)
    with np.errstate(divide='ignore', invalid='ignore'):
        smape_vals = np.where(denominator == 0, 0, 2 * numerator / denominator)
    results['SMAPE'] = np.mean(smape_vals) * 100

    # WMAPE
    total_actual = np.sum(np.abs(y_true))
    if total_actual > 0:
        results['WMAPE'] = np.sum(np.abs(y_true - y_pred)) / total_actual * 100
    else:
        results['WMAPE'] = None

    # MdAPE (Median APE - robust to outliers)
    if np.any(mask):
        apes = np.abs((y_true[mask] - y_pred[mask]) / y_true[mask]) * 100
        results['MdAPE'] = np.median(apes)
    else:
        results['MdAPE'] = None

    # MaxAPE (worst case)
    if np.any(mask):
        results['MaxAPE'] = np.max(np.abs((y_true[mask] - y_pred[mask]) / y_true[mask])) * 100
    else:
        results['MaxAPE'] = None

    return results

def generate_metric_report(y_true: np.ndarray, y_pred: np.ndarray, name: str = ""):
    """Generate formatted percentage metrics report."""
    metrics = comprehensive_percentage_metrics(y_true, y_pred)

    print(f"╔══════ Percentage Metrics {name} ══════╗")
    for metric, value in metrics.items():
        if value is not None:
            print(f"║ {metric:10s}: {value:>10.2f}%")
        else:
            print(f"║ {metric:10s}: {'N/A':>10s}")
    print(f"╚{'═' * 40}╝")

    # Warnings
    if metrics.get('MaxAPE') and metrics['MaxAPE'] > 100:
        print(f"⚠️ Warning: Max APE is {metrics['MaxAPE']:.0f}% - check for outliers")

    return metrics

# Example with comprehensive analysis
np.random.seed(42)
y_true = np.array([100, 200, 50, 0, 150, 300, 5, 180])
y_pred = np.array([95, 210, 45, 8, 140, 280, 7, 190])

generate_metric_report(y_true, y_pred, "Forecast")
```

Here are key practical points when using percentage-based metrics.
MAPE of 50% does NOT mean predictions are within 50% of actuals. It's an average. You might have some predictions off by 10% and others off by 90%. Always look at the distribution of percentage errors, not just the mean.
```python
import numpy as np

def analyze_percentage_error_distribution(y_true: np.ndarray, y_pred: np.ndarray):
    """
    Analyze the full distribution of percentage errors,
    not just the mean (MAPE).
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)

    # Exclude zeros
    mask = y_true != 0
    y_t = y_true[mask]
    y_p = y_pred[mask]

    # Calculate all percentage errors
    pct_errors = np.abs((y_t - y_p) / y_t) * 100

    print("=== Percentage Error Distribution ===\n")
    print(f"MAPE (mean): {np.mean(pct_errors):.2f}%")
    print("\nFull distribution:")
    print(f"  Min:    {np.min(pct_errors):.2f}%")
    print(f"  25th:   {np.percentile(pct_errors, 25):.2f}%")
    print(f"  Median: {np.median(pct_errors):.2f}%")
    print(f"  75th:   {np.percentile(pct_errors, 75):.2f}%")
    print(f"  90th:   {np.percentile(pct_errors, 90):.2f}%")
    print(f"  95th:   {np.percentile(pct_errors, 95):.2f}%")
    print(f"  Max:    {np.max(pct_errors):.2f}%")

    # Count by threshold
    thresholds = [10, 20, 50, 100]
    print("\nSamples within threshold:")
    for t in thresholds:
        pct_within = np.mean(pct_errors <= t) * 100
        print(f"  Within {t:3d}%: {pct_within:.1f}% of samples")

    # Identify worst predictions
    print("\nTop 5 worst percentage errors:")
    worst_idx = np.argsort(pct_errors)[-5:][::-1]
    for i in worst_idx:
        print(f"  Actual={y_t[i]:.1f}, Pred={y_p[i]:.1f}, Error={pct_errors[i]:.1f}%")

# Example
np.random.seed(42)
y_true = np.random.uniform(50, 500, 100)
y_pred = y_true * np.random.uniform(0.7, 1.3, 100)  # up to ±30% multiplicative noise

analyze_percentage_error_distribution(y_true, y_pred)
```

Percentage-based metrics provide intuitive, scale-independent measures of forecast accuracy. Let's consolidate the key insights:

- MAPE is the most familiar choice but is undefined at zero and biased toward under-prediction.
- SMAPE is bounded at 0-200% and tolerates zeros, but has its own quirks near zero and multiple competing formulations.
- WMAPE handles zeros gracefully and weights errors by value, which often matches business priorities.
- Whatever you report, examine the full distribution of percentage errors, not just the mean.
What's Next
We've covered mean-based metrics, but what about robust measures of central tendency? Median errors provide resistance to outliers that means lack. We'll explore median absolute error and related robust metrics next.
You now understand the nuances of MAPE, SMAPE, and WMAPE. You can select the appropriate percentage metric for your context, handle edge cases properly, and communicate results effectively to stakeholders.