AI & Machine Learning Interview Questions
Comprehensive AI/ML and LLM interview questions covering fundamental concepts, algorithms, and practical applications.

AI/ML Fundamentals
Interview questions covering core machine learning concepts, algorithms, and best practices. Easy: supervised/unsupervised learning, overfitting, bias-variance tradeoff, basic …
Read More

Easy-level AI/ML interview questions with LangChain examples and Mermaid diagrams. Q1: What is the difference between supervised and unsupervised learning? Answer: Supervised: has labels (input → output mapping). Unsupervised: no labels (discovers structure). LangChain example: from langchain.prompts import …
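The supervised/unsupervised contrast in this preview can be sketched in plain NumPy; the toy data, the nearest-centroid classifier, and the tiny 2-means loop below are illustrative stand-ins, not the linked article's code:

```python
import numpy as np

# Toy 1-D data: two groups, centered near 0 and near 10
X = np.array([0.1, -0.2, 0.3, 9.8, 10.1, 10.3])
y = np.array([0, 0, 0, 1, 1, 1])  # labels available -> supervised setting

# Supervised: learn the input -> output mapping from labels (nearest centroid)
centroids = np.array([X[y == c].mean() for c in (0, 1)])

def predict(x):
    # Assign x to the class whose labeled centroid is closest
    return int(np.argmin(np.abs(centroids - x)))

# Unsupervised: no labels, discover structure (a few steps of 2-means)
means = np.array([X.min(), X.max()])  # crude initialization
for _ in range(5):
    assign = np.argmin(np.abs(X[:, None] - means[None, :]), axis=1)
    means = np.array([X[assign == c].mean() for c in (0, 1)])
```

The supervised model uses `y` directly; the clustering loop never sees `y` yet recovers the same two groups because the structure is in `X` itself.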
Read More

Hard-level AI/ML interview questions covering advanced architectures, optimization, and theoretical concepts. Q1: Implement an attention mechanism from scratch. Answer: How it works: attention allows the model to focus on the relevant parts of the input when producing each output. Core idea: compute a weighted sum of values, where the weights …
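The core idea in this preview (a weighted sum of values, with weights derived from query-key similarity) can be sketched as scaled dot-product attention in NumPy; the shapes and variable names are assumptions for illustration, not the article's exact implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)     # query-key similarity
    weights = softmax(scores, axis=-1)  # each row is a distribution over keys
    return weights @ V, weights         # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))  # 2 queries, dim 4
K = rng.normal(size=(3, 4))  # 3 keys, dim 4
V = rng.normal(size=(3, 5))  # 3 values, dim 5
out, w = attention(Q, K, V)  # out: (2, 5), w: (2, 3)
```

The `sqrt(d_k)` scaling keeps the dot products from saturating the softmax as the key dimension grows.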
Read More

Medium-level AI/ML interview questions covering neural networks, ensemble methods, and advanced concepts. Q1: Explain backpropagation in neural networks. Answer: How it works: backpropagation is the algorithm for training neural networks by computing gradients of the loss with respect to the weights. Forward pass: input …
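The forward/backward steps this preview describes can be sketched for a single linear layer with squared-error loss, verifying the analytic gradient against a numerical one; the dimensions and random data are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(8, 3))  # inputs
y = rng.normal(size=(8, 1))  # targets
W = rng.normal(size=(3, 1))  # weights

def loss(W):
    # Forward pass: prediction X @ W, then mean squared error
    return 0.5 * np.mean((X @ W - y) ** 2)

# Backward pass: chain rule gives dL/dW = X^T (X W - y) / n
grad_analytic = X.T @ (X @ W - y) / len(X)

# Numerical check: central differences on each weight
eps = 1e-6
grad_numeric = np.zeros_like(W)
for i in range(W.shape[0]):
    d = np.zeros_like(W)
    d[i, 0] = eps
    grad_numeric[i, 0] = (loss(W + d) - loss(W - d)) / (2 * eps)
```

In a deep network the same chain-rule step is applied layer by layer, reusing each layer's cached forward activations.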
Read More

Comprehensive guide to Random Forests: theory, implementation, tuning, and interpretation. What are Random Forests? Random Forest is an ensemble learning method that constructs multiple decision trees and combines their predictions. Key concepts: Bagging (bootstrap aggregating): train each tree on a random subset of the data …
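The bagging idea in this preview (averaging models fit on bootstrap resamples reduces variance) can be demonstrated numerically; the "model" here is a deliberately unstable one, predicting with a single randomly chosen training point, as a stand-in for a high-variance decision tree:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_models, n_trials = 50, 100, 200

single_preds, bagged_preds = [], []
for _ in range(n_trials):
    data = rng.normal(loc=5.0, size=n)  # a fresh training set each trial
    # One unstable "model": predict with a single random training point
    single_preds.append(rng.choice(data))
    # Bagging: fit the same unstable model on many bootstrap resamples, average
    boots = [rng.choice(rng.choice(data, size=n, replace=True))
             for _ in range(n_models)]
    bagged_preds.append(np.mean(boots))

var_single = np.var(single_preds)  # roughly the raw data variance
var_bagged = np.var(bagged_preds)  # much smaller after averaging
```

Random Forests add a second source of decorrelation on top of this: each tree also considers only a random subset of features at each split.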
Read More

Common patterns and workflows for scikit-learn: preprocessing, model training, evaluation, and pipelines. Installation: pip install scikit-learn numpy pandas matplotlib. Basic workflow: from sklearn.model_selection import train_test_split; from sklearn.preprocessing import StandardScaler; from sklearn.linear_model …
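The truncated workflow in this preview can be completed into a runnable sketch; the synthetic dataset and the choice of LogisticRegression are assumptions, and the linked article's own example may differ:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score

# Synthetic data standing in for a real dataset
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Pipeline: the scaler is fit on the training split only, avoiding leakage
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
```

Bundling preprocessing and model into one pipeline is the key pattern: a single `fit`/`predict` object that can be cross-validated safely.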
Read More

Regularization Techniques
Prevent overfitting by adding penalties to the objective function.

L2 Regularization (Ridge)

$$ \min_w \|Xw - y\|^2 + \lambda\|w\|^2 $$

from sklearn.linear_model import Ridge

model = Ridge(alpha=1.0)  # alpha = lambda
model.fit(X, y)

L1 Regularization (Lasso)

$$ \min_w \|Xw - y\|^2 + \lambda\|w\|_1 $$

Promotes sparsity …
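The sparsity claim can be checked side by side; the synthetic regression data and the `alpha` values below are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, Lasso

# 10 features but only 3 carry signal; the rest have true weight 0
X, y = make_regression(n_samples=100, n_features=10, n_informative=3,
                       noise=1.0, random_state=0)

ridge = Ridge(alpha=1.0).fit(X, y)  # L2: shrinks weights toward 0
lasso = Lasso(alpha=1.0).fit(X, y)  # L1: can set weights exactly to 0

ridge_zeros = int(np.sum(ridge.coef_ == 0))
lasso_zeros = int(np.sum(lasso.coef_ == 0))
```

Ridge leaves the uninformative coefficients small but nonzero, while Lasso's soft-thresholding drives them exactly to zero, which is why L1 doubles as a feature-selection tool.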
Read More