In machine learning and statistical classification, evaluating the performance of a binary classifier requires understanding how predictions align with actual outcomes. The Outcome Matrix (also known as a confusion matrix or error matrix) is a structured summary that captures the relationship between predicted labels and ground-truth labels in a 2×2 tabular format.
For a binary classification task with two possible labels—positive (1) and negative (0)—the outcome matrix organizes prediction results into four distinct categories:
| | Predicted Negative (0) | Predicted Positive (1) |
|---|---|---|
| Actual Negative (0) | True Negative (TN) | False Positive (FP) |
| Actual Positive (1) | False Negative (FN) | True Positive (TP) |
Understanding Each Quadrant:
True Negatives (TN): Cases where the model correctly predicted the negative class. The actual label was 0, and the prediction was also 0.
False Positives (FP): Cases where the model incorrectly predicted positive. The actual label was 0, but the prediction was 1. Also called Type I errors or false alarms.
False Negatives (FN): Cases where the model incorrectly predicted negative. The actual label was 1, but the prediction was 0. Also called Type II errors or missed detections.
True Positives (TP): Cases where the model correctly predicted the positive class. The actual label was 1, and the prediction was also 1.
Why This Matters:
The outcome matrix forms the foundation for computing crucial classification metrics, including accuracy ((TP + TN) / total), precision (TP / (TP + FP)), recall (TP / (TP + FN)), and the F1 score (the harmonic mean of precision and recall).
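These metrics can be computed directly from the four counts. Below is a minimal sketch; the function name and guard clauses for empty denominators are illustrative choices, not part of the problem statement.

```python
def classification_metrics(tn, fp, fn, tp):
    """Compute accuracy, precision, recall, and F1 from outcome-matrix counts."""
    total = tn + fp + fn + tp
    accuracy = (tp + tn) / total
    # Guard against division by zero when a class is never predicted/present.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}
```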
Your Task:
Given a list of observation pairs where each pair contains [actual_label, predicted_label], construct and return the 2×2 outcome matrix. The matrix should be structured as:
[[TN, FP],
[FN, TP]]
where the first row corresponds to actual negatives (0) and the second row corresponds to actual positives (1).
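One possible implementation of the task, assuming labels are exactly 0 or 1: because the matrix is indexed with actual labels as rows and predicted labels as columns, each label pair can be used directly as indices.

```python
def outcome_matrix(observations):
    """Build the 2x2 matrix [[TN, FP], [FN, TP]] from [actual, predicted] pairs."""
    matrix = [[0, 0], [0, 0]]
    for actual, predicted in observations:
        # Row index = actual label, column index = predicted label,
        # so [0][0]=TN, [0][1]=FP, [1][0]=FN, [1][1]=TP.
        matrix[actual][predicted] += 1
    return matrix
```

The indexing trick works only because the required layout places actual negatives in row 0 and actual positives in row 1; a different layout would need an explicit mapping.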
Example 1:

observations = [[1, 1], [1, 0], [0, 1], [0, 0], [0, 1]]
Expected output: [[1, 2], [1, 1]]

Analyzing each observation pair [actual, predicted]:
• [1, 1] → Actual=1, Predicted=1 → True Positive (TP)
• [1, 0] → Actual=1, Predicted=0 → False Negative (FN)
• [0, 1] → Actual=0, Predicted=1 → False Positive (FP)
• [0, 0] → Actual=0, Predicted=0 → True Negative (TN)
• [0, 1] → Actual=0, Predicted=1 → False Positive (FP)

Tallying the counts:
• TN = 1 (one case of actual=0, predicted=0)
• FP = 2 (two cases of actual=0, predicted=1)
• FN = 1 (one case of actual=1, predicted=0)
• TP = 1 (one case of actual=1, predicted=1)

The outcome matrix [[TN, FP], [FN, TP]] = [[1, 2], [1, 1]]
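The tally above can be reproduced by counting (actual, predicted) pairs directly; this sketch uses `collections.Counter` as one convenient way to do it.

```python
from collections import Counter

observations = [[1, 1], [1, 0], [0, 1], [0, 0], [0, 1]]

# Count how many times each (actual, predicted) combination occurs.
counts = Counter((a, p) for a, p in observations)
tn, fp = counts[(0, 0)], counts[(0, 1)]
fn, tp = counts[(1, 0)], counts[(1, 1)]
print([[tn, fp], [fn, tp]])  # matches the worked result above
```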
Example 2:

observations = [[0, 0], [1, 1], [0, 0], [1, 1]]
Expected output: [[2, 0], [0, 2]]

This represents a perfect classifier with no errors:
• [0, 0] → True Negative (count: 2)
• [1, 1] → True Positive (count: 2)

No false positives or false negatives occurred. The diagonal entries (TN=2, TP=2) capture all correct predictions, while the off-diagonal entries are zero, indicating flawless classification performance.

Result: [[2, 0], [0, 2]]
Example 3:

observations = [[0, 1], [1, 0], [0, 1], [1, 0]]
Expected output: [[0, 2], [2, 0]]

This represents the worst possible classifier, where every prediction is wrong:
• [0, 1] → False Positive (count: 2)
• [1, 0] → False Negative (count: 2)

The classifier never makes a correct prediction. True negatives and true positives are both zero, while all errors are concentrated in the off-diagonal cells.

Result: [[0, 2], [2, 0]]
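The diagonal/off-diagonal split in these two examples suggests a quick sanity check: accuracy is just the diagonal sum divided by the total count. A small sketch (the helper name is illustrative):

```python
def accuracy_from_matrix(matrix):
    """Accuracy = (TN + TP) / total, i.e. the diagonal sum over all counts."""
    correct = matrix[0][0] + matrix[1][1]
    total = sum(matrix[0]) + sum(matrix[1])
    return correct / total

print(accuracy_from_matrix([[2, 0], [0, 2]]))  # 1.0 — the perfect classifier
print(accuracy_from_matrix([[0, 2], [2, 0]]))  # 0.0 — the all-wrong classifier
```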
Constraints: