In probability theory, understanding the relationship between events is crucial for reasoning under uncertainty. Conditional probability quantifies how the likelihood of one event changes when we know that another event has occurred. This fundamental concept forms the backbone of Bayesian inference, statistical learning, and countless real-world decision-making systems.
Given a joint probability distribution over two binary events A and B, we can derive the conditional probability of A given B using the mathematical definition:
$$P(A|B) = \frac{P(A \cap B)}{P(B)}$$
Where:
• P(A ∩ B) is the joint probability that both A and B occur
• P(B) is the marginal probability that B occurs
The joint distribution is provided as a dictionary with four entries representing all possible combinations of the two binary events and their complements:
• ('A', 'B'): Probability that both A and B occur
• ('A', '¬B'): Probability that A occurs but B does not (¬ denotes logical NOT)
• ('¬A', 'B'): Probability that B occurs but A does not
• ('¬A', '¬B'): Probability that neither A nor B occurs

Computing the Marginal Probability: The marginal probability P(B) is obtained by summing all joint probabilities where B is true: $$P(B) = P(A \cap B) + P(\neg A \cap B)$$
Your Task: Write a Python function that computes the conditional probability P(A|B) from the given joint distribution. The result should be rounded to 4 decimal places.
Edge Case: If P(B) = 0, the conditional probability is undefined. In this case, return 0 to indicate the impossibility of the calculation.
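One way to implement this is sketched below; the function name `compute_conditional_probability` and the ¬-prefixed tuple keys are assumptions that follow the notation above, not a fixed API:

```python
def compute_conditional_probability(joint_distribution):
    """Compute P(A|B) from a joint distribution over two binary events.

    joint_distribution maps tuples such as ('A', 'B') or ('¬A', 'B')
    to probabilities, where '¬' marks the complement of an event.
    """
    # Marginal P(B): sum the joint probabilities of all outcomes where B is true.
    p_b = joint_distribution[('A', 'B')] + joint_distribution[('¬A', 'B')]
    # P(A|B) is undefined when P(B) = 0; return 0 per the problem statement.
    if p_b == 0:
        return 0
    return round(joint_distribution[('A', 'B')] / p_b, 4)
```

The early return keeps the division guarded, so the function never raises `ZeroDivisionError` on a degenerate distribution.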
Example 1:
Input: joint_distribution = {('A', 'B'): 0.2, ('A', '¬B'): 0.3, ('¬A', 'B'): 0.1, ('¬A', '¬B'): 0.4}
Output: 0.6667

First, compute the marginal probability P(B): • P(B) = P(A ∩ B) + P(¬A ∩ B) = 0.2 + 0.1 = 0.3
Next, identify P(A ∩ B): • P(A ∩ B) = 0.2
Apply the conditional probability formula: • P(A|B) = P(A ∩ B) / P(B) = 0.2 / 0.3 = 0.6666...
Rounded to 4 decimal places: 0.6667
Interpretation: Given that event B has occurred, there is approximately a 66.67% chance that event A also occurred.
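The arithmetic above can be checked directly in Python (the ¬-prefixed dictionary keys follow the notation used in this problem):

```python
# Joint distribution from Example 1.
joint = {('A', 'B'): 0.2, ('A', '¬B'): 0.3, ('¬A', 'B'): 0.1, ('¬A', '¬B'): 0.4}

# Marginal P(B) = 0.2 + 0.1 (= 0.3, up to float rounding).
p_b = joint[('A', 'B')] + joint[('¬A', 'B')]

# P(A|B) = P(A ∩ B) / P(B), rounded to 4 decimal places.
p_a_given_b = round(joint[('A', 'B')] / p_b, 4)
print(p_a_given_b)  # 0.6667
```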
Example 2:
Input: joint_distribution = {('A', 'B'): 0.25, ('A', '¬B'): 0.25, ('¬A', 'B'): 0.25, ('¬A', '¬B'): 0.25}
Output: 0.5

This is a uniform distribution where all four outcomes are equally likely.
Compute P(B): • P(B) = 0.25 + 0.25 = 0.5
Compute P(A|B): • P(A|B) = P(A ∩ B) / P(B) = 0.25 / 0.5 = 0.5
When events are uniformly distributed, knowing B occurred gives us no additional information about A. The conditional probability equals the marginal probability, indicating statistical independence between A and B.
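A quick sketch of that independence check, using the uniform distribution from Example 2:

```python
# Uniform joint distribution from Example 2.
joint = {('A', 'B'): 0.25, ('A', '¬B'): 0.25, ('¬A', 'B'): 0.25, ('¬A', '¬B'): 0.25}

p_b = joint[('A', 'B')] + joint[('¬A', 'B')]  # marginal P(B) = 0.5
p_a = joint[('A', 'B')] + joint[('A', '¬B')]  # marginal P(A) = 0.5
p_a_given_b = joint[('A', 'B')] / p_b         # 0.5

# Independence: conditioning on B leaves the probability of A unchanged.
assert p_a_given_b == p_a
```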
Example 3:
Input: joint_distribution = {('A', 'B'): 0.3, ('A', '¬B'): 0.4, ('¬A', 'B'): 0.0, ('¬A', '¬B'): 0.3}
Output: 1.0

Notice that P(¬A ∩ B) = 0, meaning B never occurs without A.
Compute P(B): • P(B) = 0.3 + 0.0 = 0.3
Compute P(A|B): • P(A|B) = P(A ∩ B) / P(B) = 0.3 / 0.3 = 1.0
This is a case of logical implication: whenever B occurs, A is guaranteed to occur. Event B implies event A with certainty, resulting in a conditional probability of exactly 1.
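The implication case can be verified the same way; since P(¬A ∩ B) = 0, the marginal P(B) equals P(A ∩ B) and the ratio is exactly 1:

```python
# Joint distribution from Example 3: B never occurs without A.
joint = {('A', 'B'): 0.3, ('A', '¬B'): 0.4, ('¬A', 'B'): 0.0, ('¬A', '¬B'): 0.3}

p_b = joint[('A', 'B')] + joint[('¬A', 'B')]  # 0.3 + 0.0 = 0.3
p_a_given_b = joint[('A', 'B')] / p_b         # 0.3 / 0.3
print(p_a_given_b)  # 1.0
```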
Constraints