Bayes' Theorem & Applications
Visual guide to Bayesian reasoning and updating beliefs with evidence.
Formula
$$ P(H|E) = \frac{P(E|H) \cdot P(H)}{P(E)} $$
- $P(H|E)$: Posterior (probability of hypothesis given evidence)
- $P(E|H)$: Likelihood (probability of evidence given hypothesis)
- $P(H)$: Prior (initial probability of hypothesis)
- $P(E)$: Marginal likelihood (total probability of evidence)
Visual Representation
Bayes' Rule as Information Flow
Key Insight: $P(E) = P(E|H) \cdot P(H) + P(E|\neg H) \cdot P(\neg H) = 0.99 \times 0.01 + 0.05 \times 0.99 = 0.0594$
This normalizing constant ensures probabilities sum to 1.
Intuitive Example: Medical Test
```python
# Disease prevalence (prior)
p_disease = 0.01

# Test accuracy
p_positive_given_disease = 0.99     # Sensitivity
p_negative_given_no_disease = 0.95  # Specificity

# P(positive): total probability of a positive test
p_positive = (p_positive_given_disease * p_disease +
              (1 - p_negative_given_no_disease) * (1 - p_disease))

# Bayes: P(disease | positive test)
p_disease_given_positive = (p_positive_given_disease * p_disease) / p_positive

print(f"P(disease | positive test) = {p_disease_given_positive:.3f}")
```
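Despite the test's 99% sensitivity, the posterior is only about 16.7%, because the 1% base rate dominates. The odds form of Bayes' theorem (a minimal sketch; the variable names are illustrative, not from the snippet above) makes this base-rate effect explicit:

```python
# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio
prior_odds = 0.01 / 0.99                 # about 1:99 against having the disease
likelihood_ratio = 0.99 / 0.05           # sensitivity / false-positive rate = 19.8
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(f"P(disease | positive test) = {posterior_prob:.3f}")
```

A likelihood ratio of 19.8 multiplies 1:99 odds up to only 0.2:1, i.e. a probability of 1/6, matching the direct calculation above.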
Bayesian Updating
```python
def bayesian_update(prior, likelihood, evidence):
    """Update belief based on new evidence: P(H|E) = P(E|H) * P(H) / P(E)."""
    posterior = (likelihood * prior) / evidence
    return posterior
```
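Updating can be chained: each posterior becomes the prior for the next piece of evidence. As an illustration, here is a self-contained sketch that reuses the medical-test numbers to process two consecutive positive tests, assuming the test results are conditionally independent given disease status:

```python
def bayesian_update(prior, likelihood, evidence):
    """Update belief based on new evidence: P(H|E) = P(E|H) * P(H) / P(E)."""
    return (likelihood * prior) / evidence

sensitivity, specificity = 0.99, 0.95
belief = 0.01  # prior prevalence

for i in range(2):  # two consecutive positive test results
    # Recompute P(E) with the current belief as the prior
    evidence = sensitivity * belief + (1 - specificity) * (1 - belief)
    belief = bayesian_update(belief, sensitivity, evidence)
    print(f"After positive test {i + 1}: P(disease) = {belief:.3f}")
```

One positive test only raises the belief to about 0.167, but a second independent positive test pushes it to roughly 0.798, showing how evidence compounds under repeated updating.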
Further Reading
Related Snippets
- Central Limit Theorem: Foundation of statistical inference
- Common Probability Distributions: Normal, Binomial, Poisson, Exponential, Gamma, Pareto distributions
- Monte Carlo Methods: Simulation and numerical integration
- Null Hypothesis Testing: Understanding the null hypothesis and hypothesis testing
- P-Values Explained: Understanding p-values and statistical significance
- Percentiles and Quantiles: Understanding percentiles, quartiles, and quantiles
- Probability Basics: Fundamental probability concepts and rules
- Random Variables: Expected value, variance, and moments
- Statistical Moments: Mean, variance, skewness, and kurtosis explained