# AI & ML Topics

## Quick Nav
| Area | Notes |
|---|---|
| LangChain | [[Langchain]] |
| MCP | [[MCP]] |
## Topics Checklist

### LLM & GenAI Fundamentals
- Transformer architecture (attention, embeddings)
- Prompt engineering — zero-shot, few-shot, chain-of-thought
- RAG (Retrieval-Augmented Generation)
- Fine-tuning vs RAG vs prompt engineering trade-offs
- Evaluation — BLEU, ROUGE, LLM-as-judge
- Hallucination causes and mitigations
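The evaluation bullet above mentions ROUGE; as a concrete anchor, here is a minimal pure-Python sketch of the unigram-overlap idea behind ROUGE-1 recall (the function name and example sentences are just for illustration, and real ROUGE implementations add stemming and multi-reference handling):

```python
from collections import Counter

def rouge1_recall(reference: str, candidate: str) -> float:
    """ROUGE-1 recall: fraction of reference unigrams found in the candidate."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    # Clipped overlap: each reference token counts at most as many
    # times as it appears in the candidate.
    overlap = sum(min(c, cand_counts[w]) for w, c in ref_counts.items())
    return overlap / max(sum(ref_counts.values()), 1)

# 5 of the 6 reference unigrams appear in the candidate -> 5/6
print(rouge1_recall("the cat sat on the mat", "the cat lay on the mat"))
```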
### LangChain
- LCEL (LangChain Expression Language)
- Chains — basic, sequential, router
- Agents & tools
- Memory (ConversationBufferMemory, VectorStoreRetrieverMemory)
- Vector stores — FAISS, Chroma, Pinecone
- Document loaders, text splitters
- LangGraph — stateful multi-agent workflows
- LangSmith — tracing & evaluation
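The core idea behind LCEL is composing steps with the `|` operator into a runnable sequence. This toy sketch is not the real LangChain API — just a stand-in showing the composition pattern, with made-up prompt/model/parser steps:

```python
class Runnable:
    """Toy stand-in for LCEL's composition model: each step exposes
    invoke(), and `|` chains steps (a RunnableSequence in real LangChain)."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, x):
        return self.fn(x)

    def __or__(self, other):
        # Feed this step's output into the next step.
        return Runnable(lambda x: other.invoke(self.invoke(x)))

# Hypothetical steps standing in for prompt -> model -> output parser
prompt = Runnable(lambda topic: f"Tell me a joke about {topic}")
model = Runnable(lambda p: p.upper())   # pretend LLM call
parser = Runnable(lambda out: out.strip())

chain = prompt | model | parser
print(chain.invoke("cats"))  # TELL ME A JOKE ABOUT CATS
```

In real LCEL the same shape would be `prompt | llm | StrOutputParser()`, and the sequence gains batching, streaming, and tracing for free.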
### MCP (Model Context Protocol)
- What MCP is — tools, resources, prompts
- Architecture — host / client / server
- LangChain MCP integration
- Common MCP servers (filesystem, GitHub, DB, etc.)
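MCP messages are JSON-RPC 2.0. As a reference point for the client/server bullet, this sketch builds the shape of a `tools/call` request; the tool name and arguments are invented for illustration, and the exact param schema should be checked against the MCP spec:

```python
import json

# Shape of an MCP client -> server tool invocation (JSON-RPC 2.0).
# "read_file" and its arguments are hypothetical, standing in for a
# filesystem-server tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_file",
        "arguments": {"path": "notes/todo.md"},
    },
}
print(json.dumps(request, indent=2))
```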
### ML Fundamentals (for interviews / AWS exam)
- Supervised vs unsupervised vs reinforcement learning
- Bias-variance trade-off
- Regularisation — L1 (Lasso), L2 (Ridge)
- Cross-validation
- Classification metrics — precision, recall, F1, AUC-ROC
- Common algorithms — linear regression, logistic regression, decision tree, random forest, gradient boosting, SVM, k-means
- Feature engineering, normalisation, handling missing values
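The classification-metrics bullet comes up constantly in interviews; a quick pure-Python sketch of precision, recall and F1 from raw label lists (binary case, example data made up):

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Precision, recall and F1 for a binary problem, from raw counts."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
print(classification_metrics(y_true, y_pred))  # each value is 2/3 here
```

When precision equals recall (as here: tp=2, fp=1, fn=1), F1 equals both, since F1 is their harmonic mean.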
### MLOps
- Model lifecycle — training, evaluation, deployment, monitoring
- Feature stores
- Model versioning & registry
- Data drift vs model drift
- A/B testing models
- CI/CD for ML pipelines
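For the data-drift bullet, one common monitoring statistic is the Population Stability Index (PSI) between a baseline and a live feature distribution. A minimal sketch, assuming pre-binned proportions (the bin values below are invented; the 0.1/0.25 thresholds are a common rule of thumb, not a standard):

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions
    (proportions per bin). Rule of thumb: < 0.1 stable,
    0.1-0.25 moderate drift, > 0.25 significant drift."""
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)   # guard against log(0)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]   # training-time feature distribution
live     = [0.40, 0.30, 0.20, 0.10]   # hypothetical production distribution
print(round(psi(baseline, live), 3))  # 0.228 -> moderate-to-significant drift
```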
### System Design Notes (AI-Specific)
- [[RAG & LLM System]] — RAG architecture, chunking, retrieval, reranking
- [[ML Feature Store]] — offline vs online store, point-in-time correctness