In the realm of machine learning evaluation, sensitivity (also known as the true positive rate or hit rate) stands as one of the most critical metrics for assessing how well a classifier captures positive instances. This metric answers a fundamental question: "Of all the samples that are actually positive, what fraction did our model correctly identify?"
Sensitivity is formally defined as:
$$\text{Sensitivity} = \frac{TP}{TP + FN}$$
Where:
• TP (true positives) is the number of actually positive samples that the model correctly classified as positive
• FN (false negatives) is the number of actually positive samples that the model incorrectly classified as negative
Intuitive Understanding: Imagine a medical screening test designed to detect a disease. A sensitivity of 0.95 means that out of every 100 people who actually have the disease, the test correctly identifies 95 of them. The remaining 5 are "missed" (false negatives). In high-stakes scenarios like cancer detection or fraud prevention, maximizing sensitivity is often crucial because missing a true positive can have severe consequences.
The Trade-off Consideration: While high sensitivity is desirable, it often comes at the cost of increased false positives. A model that predicts everything as positive would achieve perfect sensitivity (1.0) but would be practically useless. Understanding this trade-off is essential for building effective classifiers.
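The degenerate case described above is easy to verify directly. The sketch below (labels and counts are illustrative, not part of the problem data) shows that an "always predict positive" classifier scores a perfect sensitivity of 1.0 while its precision collapses:

```python
# Illustrative data: 4 positives, 4 negatives.
actual = [1, 0, 1, 1, 0, 0, 0, 1]
predicted = [1] * len(actual)  # degenerate classifier: predicts everything positive

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)

sensitivity = tp / (tp + fn)  # 4 / (4 + 0) = 1.0 — no positives missed
precision = tp / (tp + fp)    # 4 / (4 + 4) = 0.5 — half the positive calls are wrong

print(sensitivity, precision)  # -> 1.0 0.5
```

This is why sensitivity is rarely reported alone: it must be read alongside precision (or specificity) to be meaningful.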
Your Task: Implement a function that computes the sensitivity score for a binary classification model. The function should take a list of actual labels and a list of predicted labels (1 = positive, 0 = negative) and return the sensitivity, TP / (TP + FN).
actual_labels = [1, 0, 1, 1, 0, 1]
predicted_labels = [1, 0, 1, 0, 0, 1]

Expected output: 0.75

Let's identify the components:
• Total actual positives (1s in actual_labels): 4 instances (positions 0, 2, 3, 5)
• True Positives: positions where actual=1 AND predicted=1: positions 0, 2, 5 → 3 instances
• False Negatives: positions where actual=1 AND predicted=0: position 3 → 1 instance
Sensitivity = TP / (TP + FN) = 3 / (3 + 1) = 3/4 = 0.75
The model successfully identified 75% of all positive cases.
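The computation above can be sketched as a function. This is one possible shape, not the prescribed solution; the function name `sensitivity` and the choice to return 0.0 when there are no actual positives are assumptions:

```python
def sensitivity(actual_labels, predicted_labels):
    """Return TP / (TP + FN) for binary labels, where 1 marks the positive class."""
    tp = sum(1 for a, p in zip(actual_labels, predicted_labels) if a == 1 and p == 1)
    fn = sum(1 for a, p in zip(actual_labels, predicted_labels) if a == 1 and p == 0)
    if tp + fn == 0:
        return 0.0  # assumed convention: no actual positives -> 0.0
    return tp / (tp + fn)

print(sensitivity([1, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1]))  # -> 0.75
```

Note that false positives never enter the calculation; only the positions where the actual label is 1 matter.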
actual_labels = [1, 1, 1, 0, 0]
predicted_labels = [1, 1, 1, 0, 1]

Expected output: 1.0

Analyzing the classification:
• Total actual positives: 3 instances (positions 0, 1, 2)
• True Positives: all 3 actual positives were correctly predicted → 3 instances
• False Negatives: no positive samples were missed → 0 instances
Sensitivity = 3 / (3 + 0) = 1.0
Perfect sensitivity! The model captured ALL positive cases. Note: There is one false positive at position 4 (predicted=1, actual=0), but this doesn't affect sensitivity—it would affect precision instead.
actual_labels = [1, 1, 1, 0, 0]
predicted_labels = [0, 0, 0, 1, 1]

Expected output: 0.0

This represents the worst-case scenario:
• Total actual positives: 3 instances (positions 0, 1, 2)
• True Positives: none of the actual positives were correctly identified → 0 instances
• False Negatives: all 3 positive samples were missed → 3 instances
Sensitivity = 0 / (0 + 3) = 0.0
The model failed to identify ANY positive cases. Interestingly, the model also predicted all negatives as positives, suggesting the predictions might be inverted or the model is fundamentally flawed.
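For strictly 0/1 labels, the counts can also be written arithmetically: TP = Σ aᵢ·pᵢ and FN = Σ aᵢ·(1−pᵢ). The sketch below, under that assumption, reproduces all three worked examples in one pass:

```python
# (actual_labels, predicted_labels, expected sensitivity) from the three examples.
examples = [
    ([1, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1], 0.75),
    ([1, 1, 1, 0, 0],    [1, 1, 1, 0, 1],    1.0),
    ([1, 1, 1, 0, 0],    [0, 0, 0, 1, 1],    0.0),
]
results = []
for actual, predicted, expected in examples:
    tp = sum(a * p for a, p in zip(actual, predicted))        # TP = sum of a*p
    fn = sum(a * (1 - p) for a, p in zip(actual, predicted))  # FN = sum of a*(1-p)
    results.append(tp / (tp + fn))

print(results)  # -> [0.75, 1.0, 0.0]
```

This formulation only holds for 0/1 encodings; with other label values the explicit equality checks are the safer choice.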
Constraints