In ensemble machine learning, combining the predictions of multiple models often yields better performance than any single model alone. Soft voting is a powerful ensemble technique that leverages the full probability distributions from each classifier rather than just their hard class predictions.
Unlike hard voting, which simply counts class votes from each classifier and picks the majority, soft voting computes a weighted average of predicted class probabilities across all classifiers. This approach captures the confidence level of each model's predictions, allowing highly certain predictions to carry more influence than uncertain ones.
Given $K$ classifiers, where classifier $k$ has weight $w_k$ and predicts probability $p_{k,i,j}$ for sample $i$ and class $j$, the weighted average probability is:
$$\bar{p}_{ij} = \frac{\sum_{k=1}^{K} w_k \cdot p_{k,i,j}}{\sum_{k=1}^{K} w_k}$$
The final predicted class for sample $i$ is:
$$\hat{y}_i = \arg\max_{j} \bar{p}_{ij}$$
When weights are provided, they must be normalized to sum to 1 before computing the weighted average. If no weights are provided, use uniform weights where each classifier contributes equally ($w_k = 1/K$).
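The formula for a single sample can be sketched in a few lines of Python (the function and variable names here are illustrative, not part of the problem statement):

```python
def weighted_average(sample_probs, weights=None):
    """Weighted average of class probabilities for one sample,
    following the formula above."""
    K = len(sample_probs)               # number of classifiers
    if weights is None:
        weights = [1.0 / K] * K         # uniform weights w_k = 1/K
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize to sum to 1
    C = len(sample_probs[0])            # number of classes
    return [sum(weights[k] * sample_probs[k][j] for k in range(K))
            for j in range(C)]
```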
Implement a function that performs soft voting ensemble classification. Given:
• probabilities: a 3D list of shape (K, N, C), where probabilities[k][i][j] is classifier k's predicted probability of class j for sample i
• weights: an optional list of K classifier weights, or None for uniform weighting
Return a list of predicted class labels for each sample.
probabilities = [[[0.7, 0.2, 0.1], [0.3, 0.5, 0.2]], [[0.3, 0.4, 0.3], [0.2, 0.3, 0.5]]]
weights = None
Output: [0, 1]
We have 2 classifiers predicting probabilities for 2 samples across 3 classes.
With uniform weights (0.5 each):
Sample 0:
• Classifier 1 probabilities: [0.7, 0.2, 0.1]
• Classifier 2 probabilities: [0.3, 0.4, 0.3]
• Weighted average: [(0.7+0.3)/2, (0.2+0.4)/2, (0.1+0.3)/2] = [0.5, 0.3, 0.2]
• Class 0 has highest probability (0.5), so predicted class = 0
Sample 1:
• Classifier 1 probabilities: [0.3, 0.5, 0.2]
• Classifier 2 probabilities: [0.2, 0.3, 0.5]
• Weighted average: [(0.3+0.2)/2, (0.5+0.3)/2, (0.2+0.5)/2] = [0.25, 0.4, 0.35]
• Class 1 has highest probability (0.4), so predicted class = 1
Final output: [0, 1]
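The arithmetic above can be checked with a few lines of plain Python (variable names are ours):

```python
# Example 1: uniform weights, so each classifier contributes 1/2.
p1 = [[0.7, 0.2, 0.1], [0.3, 0.5, 0.2]]  # classifier 1, per sample
p2 = [[0.3, 0.4, 0.3], [0.2, 0.3, 0.5]]  # classifier 2, per sample
preds = []
for a, b in zip(p1, p2):
    avg = [(x + y) / 2 for x, y in zip(a, b)]  # elementwise mean
    preds.append(avg.index(max(avg)))          # argmax over classes
print(preds)  # -> [0, 1]
```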
probabilities = [[[0.6, 0.3, 0.1], [0.2, 0.6, 0.2]], [[0.4, 0.4, 0.2], [0.3, 0.3, 0.4]]]
weights = [0.7, 0.3]
Output: [0, 1]
We have 2 classifiers with custom weights [0.7, 0.3] (already normalized to sum to 1).
Sample 0:
• Classifier 1 (weight 0.7): [0.6, 0.3, 0.1]
• Classifier 2 (weight 0.3): [0.4, 0.4, 0.2]
• Weighted average: [0.7·0.6 + 0.3·0.4, 0.7·0.3 + 0.3·0.4, 0.7·0.1 + 0.3·0.2] = [0.54, 0.33, 0.13]
• Class 0 has highest probability (0.54), so predicted class = 0
Sample 1:
• Classifier 1 (weight 0.7): [0.2, 0.6, 0.2]
• Classifier 2 (weight 0.3): [0.3, 0.3, 0.4]
• Weighted average: [0.7·0.2 + 0.3·0.3, 0.7·0.6 + 0.3·0.3, 0.7·0.2 + 0.3·0.4] = [0.23, 0.51, 0.26]
• Class 1 has highest probability (0.51), so predicted class = 1
Final output: [0, 1]
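The same check works with custom weights; since [0.7, 0.3] already sums to 1, no normalization step is needed here:

```python
# Example 2: custom, pre-normalized weights.
w = [0.7, 0.3]
p1 = [[0.6, 0.3, 0.1], [0.2, 0.6, 0.2]]  # classifier 1, per sample
p2 = [[0.4, 0.4, 0.2], [0.3, 0.3, 0.4]]  # classifier 2, per sample
preds = []
for a, b in zip(p1, p2):
    avg = [w[0] * x + w[1] * y for x, y in zip(a, b)]  # weighted mean
    preds.append(avg.index(max(avg)))                  # argmax over classes
print(preds)  # -> [0, 1]
```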
probabilities = [[[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]], [[0.7, 0.2, 0.1], [0.2, 0.6, 0.2]], [[0.6, 0.3, 0.1], [0.3, 0.5, 0.2]]]
weights = [0.5, 0.3, 0.2]
Output: [0, 1]
We have 3 classifiers with custom weights [0.5, 0.3, 0.2] for 2 samples over 3 classes.
Sample 0:
• Classifier 1 (w=0.5): [0.8, 0.1, 0.1]
• Classifier 2 (w=0.3): [0.7, 0.2, 0.1]
• Classifier 3 (w=0.2): [0.6, 0.3, 0.1]
• Weighted average: [0.5·0.8 + 0.3·0.7 + 0.2·0.6, 0.5·0.1 + 0.3·0.2 + 0.2·0.3, 0.5·0.1 + 0.3·0.1 + 0.2·0.1] = [0.73, 0.17, 0.10]
• Class 0 has highest probability (0.73), so predicted class = 0
Sample 1:
• Classifier 1 (w=0.5): [0.1, 0.8, 0.1]
• Classifier 2 (w=0.3): [0.2, 0.6, 0.2]
• Classifier 3 (w=0.2): [0.3, 0.5, 0.2]
• Weighted average: [0.5·0.1 + 0.3·0.2 + 0.2·0.3, 0.5·0.8 + 0.3·0.6 + 0.2·0.5, 0.5·0.1 + 0.3·0.2 + 0.2·0.2] = [0.17, 0.68, 0.15]
• Class 1 has highest probability (0.68), so predicted class = 1
Final output: [0, 1]
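Putting the pieces together, one possible implementation looks like the sketch below (the function name `soft_voting` and its signature are our choice; adapt them to whatever interface the problem requires). It normalizes the weights, averages the probabilities per sample, and takes the argmax over classes, reproducing the expected output of all three examples:

```python
def soft_voting(probabilities, weights=None):
    """Soft voting ensemble: probabilities has shape (K, N, C) as nested
    lists; weights is an optional list of K classifier weights."""
    K = len(probabilities)              # number of classifiers
    N = len(probabilities[0])           # number of samples
    C = len(probabilities[0][0])        # number of classes
    if weights is None:
        weights = [1.0 / K] * K         # uniform weighting
    total = sum(weights)
    weights = [w / total for w in weights]  # normalize weights to sum to 1
    predictions = []
    for i in range(N):
        # Weighted average probability of each class for sample i.
        avg = [sum(weights[k] * probabilities[k][i][j] for k in range(K))
               for j in range(C)]
        # Predict the class with the highest averaged probability
        # (ties break toward the lowest class index).
        predictions.append(max(range(C), key=lambda j: avg[j]))
    return predictions
```

Normalizing inside the function means callers may pass unnormalized weights such as [7, 3] and still get the same result as [0.7, 0.3], since the argmax is unaffected by a common scale factor.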
Constraints