Comprehensive sequences. Each course is a series of modules, and each module is a series of lessons. Read in order to build a foundation, or jump straight to the lesson you need.
From transformer architecture to cutting-edge research — each concept explained with intuition, math, and connections to the bigger picture.
Foundations of autonomous AI agents — reasoning, planning, memory, tool use, multi-agent systems, and safety.
Benchmarks, automated evaluation methods, trajectory analysis, and production monitoring for AI agents.
Architecture selection, tool design, error resilience, multi-agent coordination, and production patterns for agentic systems.
Image fundamentals through CNNs, object detection, segmentation, generative models, vision transformers, and 3D vision.
Build production AI agents with LangGraph — tools, memory, human-in-the-loop, streaming, multi-agent systems, and deployment.
The history and trajectory of large language models — from pre-transformer foundations through the 2025 frontier.
Mathematical foundations, learning theory, supervised and unsupervised methods, neural networks, and production ML systems.
A hands-on guide to building Model Context Protocol servers with Supabase — from architecture to production deployment.
Text preprocessing, representation, sequence models, NLP tasks, information extraction, and multilingual NLP.
Core prompting techniques, reasoning elicitation, system prompts, structured output, context engineering, and production safety.
Foundations through deep RL, policy gradients, model-based methods, RL for language models, and landmark applications.
Hands-on guide to building an AI agent with multiple skills — architecture, tool design, orchestration, error handling, and a capstone research agent project.
The harness layer above LLMs — Claude Agent SDK, Codex CLI, Cursor, ruflo, LangGraph, AutoGen, CrewAI, and OpenAI Agents SDK compared concept by concept. Topologies, consensus, federation, planning, and the orchestration plumbing that turns models into systems.
A second-volume tour of the techniques pushing large language models forward — advanced training, modern inference and serving, retrieval and embeddings, alignment, and adversarial robustness.