Regularization Techniques
Regularization prevents overfitting by adding a penalty term to the objective function, discouraging overly large weights.
L2 Regularization (Ridge)
$$ \min_w \|Xw - y\|^2 + \lambda\|w\|^2 $$
```python
from sklearn.linear_model import Ridge

model = Ridge(alpha=1.0)  # alpha = lambda
model.fit(X, y)
```
L1 Regularization (Lasso)
$$ \min_w \|Xw - y\|^2 + \lambda\|w\|_1 $$
Promotes sparsity (many weights become zero).
```python
from sklearn.linear_model import Lasso

model = Lasso(alpha=1.0)
model.fit(X, y)
```
Elastic Net
Combines L1 and L2:
$$ \min_w \|Xw - y\|^2 + \lambda_1\|w\|_1 + \lambda_2\|w\|^2 $$
```python
from sklearn.linear_model import ElasticNet

model = ElasticNet(alpha=1.0, l1_ratio=0.5)
model.fit(X, y)
```
Tikhonov Regularization
General form with matrix $\Gamma$:
$$ \min_w \|Xw - y\|^2 + \|\Gamma w\|^2 $$
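Because the objective is quadratic, Tikhonov regularization has the closed-form solution $w = (X^\top X + \Gamma^\top \Gamma)^{-1} X^\top y$. A minimal NumPy sketch on synthetic data (the choice $\Gamma = \sqrt{\lambda}\, I$, which recovers ridge regression, is just one illustrative option):

```python
import numpy as np

# Synthetic data: 50 samples, 3 features, known true weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)

# Closed-form Tikhonov solution: w = (X^T X + G^T G)^{-1} X^T y.
# Here G = sqrt(lambda) * I, which reduces to ridge regression.
lam = 1.0
Gamma = np.sqrt(lam) * np.eye(X.shape[1])
w = np.linalg.solve(X.T @ X + Gamma.T @ Gamma, X.T @ y)
```

Non-identity choices of $\Gamma$ (e.g. a finite-difference matrix) penalize other properties of $w$, such as roughness, rather than its magnitude.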
When to Use
- L2: Smooth solutions, all features matter
- L1: Feature selection, sparse solutions
- Elastic Net: Balance between L1 and L2
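The sparsity contrast above can be seen directly on synthetic data where only a few features matter (the data and `alpha` values below are illustrative, not a recipe):

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso

# 100 samples, 10 features, but only the first 3 carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
true_w = np.zeros(10)
true_w[:3] = [3.0, -2.0, 1.5]
y = X @ true_w + 0.1 * rng.normal(size=100)

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)

# Lasso sets the irrelevant coefficients exactly to zero;
# Ridge only shrinks them toward zero.
print("Ridge zero coefs:", int(np.sum(ridge.coef_ == 0)))
print("Lasso zero coefs:", int(np.sum(lasso.coef_ == 0)))
```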
Further Reading
Related Snippets
- Interpolation Methods: Linear, polynomial, and spline interpolation
- Numerical Differentiation: Finite differences and automatic differentiation
- Numerical Integration: Trapezoidal rule, Simpson's rule, Gaussian quadrature
- Optimization Methods: Gradient descent, Newton's method, BFGS
- Root Finding Methods: Newton's method, bisection, and secant method
- Solving Linear Systems: LU decomposition and iterative methods