Bayes' Theorem & Applications

Visual guide to Bayesian reasoning and updating beliefs with evidence.


Formula

$$ P(H|E) = \frac{P(E|H) \cdot P(H)}{P(E)} $$

  • $P(H|E)$: Posterior (probability of hypothesis given evidence)
  • $P(E|H)$: Likelihood (probability of evidence given hypothesis)
  • $P(H)$: Prior (initial probability of hypothesis)
  • $P(E)$: Marginal likelihood (total probability of evidence)
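
Since the evidence occurs either with the hypothesis or without it, the denominator expands by the law of total probability, giving the form used in the computations below:

$$ P(H|E) = \frac{P(E|H)\,P(H)}{P(E|H)\,P(H) + P(E|\neg H)\,P(\neg H)} $$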

Visual Representation

Bayes' Rule as Information Flow

Key Insight: P(E) = P(E|H)·P(H) + P(E|¬H)·P(¬H) = 0.99×0.01 + 0.05×0.99 = 0.0594 ≈ 0.059 (using the medical-test numbers below)

This normalizing constant ensures probabilities sum to 1.
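
A quick numeric sketch of that normalization, using the numbers above:

```python
# Numbers from the medical-test example
p_h = 0.01              # prior P(H)
p_e_given_h = 0.99      # P(E|H), sensitivity
p_e_given_not_h = 0.05  # P(E|¬H), i.e. 1 - specificity

# Marginal likelihood P(E) via the law of total probability
p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
print(round(p_e, 3))  # 0.059

# Both posteriors share this denominator, so they sum to 1
p_h_given_e = p_e_given_h * p_h / p_e
p_not_h_given_e = p_e_given_not_h * (1 - p_h) / p_e
print(round(p_h_given_e + p_not_h_given_e, 10))  # 1.0
```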


Intuitive Example: Medical Test

```python
# Disease prevalence (prior)
p_disease = 0.01

# Test accuracy
p_positive_given_disease = 0.99     # Sensitivity
p_negative_given_no_disease = 0.95  # Specificity

# P(positive) via the law of total probability
p_positive = (p_positive_given_disease * p_disease +
              (1 - p_negative_given_no_disease) * (1 - p_disease))

# Bayes: P(disease | positive test)
p_disease_given_positive = (p_positive_given_disease * p_disease) / p_positive

print(f"P(disease | positive test) = {p_disease_given_positive:.3f}")  # 0.167
```
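
The posterior is so low because the disease is rare: the prior dominates. A sketch of how the result moves with prevalence (same sensitivity and specificity as above; the prevalence values are illustrative):

```python
sensitivity = 0.99
specificity = 0.95

def posterior_given_positive(prevalence):
    """P(disease | positive test) for a given prior prevalence."""
    p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    return sensitivity * prevalence / p_positive

for prevalence in [0.001, 0.01, 0.1, 0.5]:
    print(f"prevalence={prevalence:<5} -> posterior={posterior_given_positive(prevalence):.3f}")
```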

Bayesian Updating

```python
def bayesian_update(prior, likelihood, evidence):
    """Update a belief with new evidence: posterior = P(E|H)·P(H) / P(E)."""
    posterior = (likelihood * prior) / evidence
    return posterior
```
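
A usage sketch: applying the update twice for two independent positive test results, recomputing the evidence term P(E) from the current belief before each update (numbers taken from the medical-test example above):

```python
def bayesian_update(prior, likelihood, evidence):
    """Update a belief with new evidence: posterior = P(E|H)·P(H) / P(E)."""
    return (likelihood * prior) / evidence

sensitivity, specificity = 0.99, 0.95
belief = 0.01  # prior prevalence

for test_number in (1, 2):
    # P(positive) under the *current* belief, via total probability
    p_positive = sensitivity * belief + (1 - specificity) * (1 - belief)
    belief = bayesian_update(belief, sensitivity, p_positive)
    print(f"after positive test {test_number}: P(disease) = {belief:.3f}")
    # prints 0.167, then 0.798
```

Yesterday's posterior becomes today's prior, which is why the second positive test raises the belief far more than the first did.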
