№ 01 · The pillar

Drip.
Long reads with labs you can play.

One idea, given the time and the interactive surface area it deserves. Drip pieces are essays you read, not chapters you skim — designed to leave you with a working mental model by the last paragraph.

Core Concepts

LoRA & qLoRA

Fine-tune massive LLMs on consumer hardware. Learn about Low-Rank Adaptation and 4-bit Quantization.
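
A minimal sketch of the low-rank half of that idea (the 4-bit quantization half isn't shown): instead of updating a full d×d weight matrix, train two skinny matrices whose product is the update. All shapes and values below are toy numbers, not anything from a real model.

```python
# LoRA in one equation: W' = W + B @ A, with A (r x d) and B (d x r).
# W stays frozen; only A and B train. With r << d that is 2*d*r
# trainable numbers instead of d*d.
d, r = 4, 1
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen
A = [[0.1, 0.2, 0.3, 0.4]]                                          # r x d
B = [[1.0], [0.0], [0.0], [0.0]]                                    # d x r

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

BA = matmul(B, A)                                   # the low-rank update
W_adapted = [[W[i][j] + BA[i][j] for j in range(d)] for i in range(d)]

trainable = 2 * d * r    # LoRA parameters
full = d * d             # full fine-tune parameters
```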

Tokenization

Before AI can read, it must chop. Learn how text is broken down into the fundamental atoms of meaning.
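
The chopping can be sketched in a few lines of byte-pair-encoding-style merging: repeatedly fuse the most frequent adjacent pair of symbols. Real tokenizers learn tens of thousands of merges from a large corpus; the string and three merge rounds here are toys.

```python
from collections import Counter

def most_frequent_pair(tokens):
    # Count every adjacent pair and return the most common one.
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge(tokens, pair):
    # Replace each occurrence of `pair` with a single fused symbol.
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

tokens = list("low lower lowest")       # start from single characters
for _ in range(3):                      # three merge rounds
    tokens = merge(tokens, most_frequent_pair(tokens))
```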

LLM Sampling

How do LLMs decide what to say next? Explore greedy vs. probabilistic sampling and log probabilities.
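
The contrast fits in a few lines: greedy picks the argmax every time, probabilistic sampling draws from the softmax distribution, and log probabilities are just the log of those softmax values. The vocabulary and logits below are invented.

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Temperature > 1 flattens the distribution, < 1 sharpens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["cat", "dog", "pizza"]
logits = [2.0, 1.9, -1.0]                 # model scores for the next token

# Greedy: deterministic argmax.
greedy = vocab[logits.index(max(logits))]

# Probabilistic: sample from the softmax distribution.
probs = softmax(logits, temperature=1.0)
random.seed(0)
sampled = random.choices(vocab, weights=probs)[0]

# Log probabilities: what APIs report as "logprobs".
logprobs = [math.log(p) for p in probs]
```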

Context Engineering

Beyond the prompt: Curate the perfect information to feed your LLM's limited attention span.
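
One possible curation heuristic, sketched: when candidate snippets exceed the context window, greedily keep the most relevant ones that fit the token budget. The snippet labels, relevance scores, and token costs are all invented for illustration.

```python
snippets = [
    ("user profile", 0.9, 12),        # (label, relevance, token cost)
    ("old smalltalk", 0.1, 40),
    ("relevant doc chunk", 0.8, 30),
    ("boilerplate header", 0.2, 25),
]

def pack_context(snippets, budget):
    # Greedy knapsack: take snippets in descending relevance order
    # while they still fit in the token budget.
    chosen, used = [], 0
    for label, score, cost in sorted(snippets, key=lambda s: -s[1]):
        if used + cost <= budget:
            chosen.append(label)
            used += cost
    return chosen
```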

Prompt Engineering

Learn how Zero-Shot, Few-Shot, and Chain-of-Thought prompting steer LLM probabilities.
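
The three styles, written out as plain prompt strings (the classification task and worked example are invented for illustration):

```python
# Zero-shot: just the task, no examples.
zero_shot = "Classify the sentiment: 'I loved it.' ->"

# Few-shot: a handful of solved examples steer the output format.
few_shot = (
    "Classify the sentiment.\n"
    "'Terrible service.' -> negative\n"
    "'Best day ever!' -> positive\n"
    "'I loved it.' ->"
)

# Chain-of-thought: elicit intermediate reasoning before the answer.
chain_of_thought = (
    "Q: A shop sells 3 apples for $2. How much do 9 apples cost?\n"
    "A: Let's think step by step. 9 apples is 3 groups of 3, "
    "so 3 * $2 = $6."
)
```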

KV Cache (Inference)

Why doesn't ChatGPT re-read your whole chat every time it types a word? Memory optimization explained.
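
The trick in miniature: each generation step appends one new key/value pair to a cache instead of recomputing keys and values for the whole history. Identity "projections" stand in for the learned W_q, W_k, W_v matrices, and the token vectors are made up.

```python
import math

def attend(q, keys, values):
    # Scaled dot-product attention of one query against all cached keys.
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(len(q))
              for k in keys]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    w = [e / sum(exps) for e in exps]
    # Weighted average of the cached values.
    return [sum(wi * v[d] for wi, v in zip(w, values))
            for d in range(len(values[0]))]

k_cache, v_cache = [], []
for x in [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]:   # token embeddings
    k_cache.append(x)        # only the NEW token is projected and cached
    v_cache.append(x)
    out = attend(x, k_cache, v_cache)
```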

Naive Bayes

Predicting the future by assuming simplicity. Learn how this probabilistic algorithm uses Bayes' Theorem for classification.
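
The whole algorithm fits in a sketch: P(class | words) ∝ P(class) · Π P(word | class), with the "naive" assumption that words are independent given the class, plus Laplace smoothing for unseen words. The four training "documents" are made up.

```python
import math
from collections import Counter

train = [("spam", "win money now"), ("spam", "win prize"),
         ("ham", "meeting at noon"), ("ham", "lunch at noon")]

counts = {"spam": Counter(), "ham": Counter()}
for label, text in train:
    counts[label].update(text.split())

def log_posterior(label, text, alpha=1.0):
    # log P(class) + sum of log P(word | class), Laplace-smoothed.
    prior = sum(1 for l, _ in train if l == label) / len(train)
    vocab = {w for c in counts.values() for w in c}
    total = sum(counts[label].values())
    score = math.log(prior)
    for w in text.split():
        score += math.log((counts[label][w] + alpha) /
                          (total + alpha * len(vocab)))
    return score

def classify(text):
    return max(counts, key=lambda l: log_posterior(l, text))
```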

Random Forest

Strength in numbers. See how an ensemble of diverse decision trees can vote to make robust predictions.
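
The voting half of that idea, sketched: the "trees" below are hand-written threshold rules rather than decision trees learned on bootstrapped samples and random feature subsets, and the features are invented, but the majority vote is the ensemble mechanism.

```python
from collections import Counter

# Each "tree" maps a feature dict to a class label.
trees = [
    lambda x: "spam" if x["exclamations"] > 3 else "ham",
    lambda x: "spam" if x["links"] > 2 else "ham",
    lambda x: "spam" if x["caps_ratio"] > 0.5 else "ham",
]

def forest_predict(x):
    # Majority vote across all trees.
    votes = Counter(tree(x) for tree in trees)
    return votes.most_common(1)[0][0]

email = {"exclamations": 5, "links": 0, "caps_ratio": 0.7}
```

One tree is wrong about this email (it has no links), but the other two outvote it; that robustness to individual errors is the point of the ensemble.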

Support Vector Machines

The classic algorithm that finds the widest possible street between two classes of data.
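
The geometry, without any training: for a separating hyperplane w·x + b = 0 in canonical form, the street is 2/||w|| wide and the points with |w·x + b| = 1 sit on its curbs as the support vectors. The hyperplane and points below are chosen by hand so the numbers come out exact.

```python
import math

w, b = [1.0, 0.0], 0.0           # hyperplane x = 0 (a vertical line)
points = {(-1.0, 0.5): -1, (-2.0, 1.0): -1,   # class -1 on the left
          (1.0, -0.5): 1, (3.0, 2.0): 1}      # class +1 on the right

# Width of the "street" between the classes.
margin_width = 2 / math.sqrt(sum(wi * wi for wi in w))

# Support vectors: points exactly on the margin boundaries.
support_vectors = [p for p in points
                   if abs(sum(wi * xi for wi, xi in zip(w, p)) + b) == 1.0]
```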

Recommender Systems

From collaborative filtering to matrix factorization: how Netflix knows what you want before you do.
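
The collaborative-filtering end of that spectrum, sketched (matrix factorization not shown): predict an unseen rating as a similarity-weighted average of other users' ratings, with taste similarity measured as cosine over co-rated items. Users, films, and ratings are made up.

```python
import math

ratings = {
    "ana":  {"Dune": 5, "Heat": 4, "Up": 1},
    "ben":  {"Dune": 4, "Heat": 5},
    "cara": {"Up": 5, "Heat": 1},
}

def cosine(u, v):
    # Cosine similarity over the items both users rated.
    shared = set(ratings[u]) & set(ratings[v])
    if not shared:
        return 0.0
    dot = sum(ratings[u][m] * ratings[v][m] for m in shared)
    nu = math.sqrt(sum(ratings[u][m] ** 2 for m in shared))
    nv = math.sqrt(sum(ratings[v][m] ** 2 for m in shared))
    return dot / (nu * nv)

def predict(user, movie):
    # Similarity-weighted average of other users' ratings for `movie`.
    sims = [(cosine(user, other), ratings[other][movie])
            for other in ratings
            if other != user and movie in ratings[other]]
    if not sims:
        return None
    return sum(s * r for s, r in sims) / sum(s for s, _ in sims)
```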

Latest Research

AI Overthinking

New Research: When models think too much, they often talk themselves out of the correct answer.

Latent Reasoning (Coconut)

New Research: What if LLMs didn't have to 'think' in words? Explore reasoning directly in continuous latent space.

Qwen3 (Unified Thinking)

New Research: A single model that can dynamically switch between fast responses and deep reasoning modes.

DeepSeekMath (GRPO)

New Research: How a 7B model approached GPT-4 math performance by ditching the RL 'Critic' model.

Kimi K2 Thinking

New Research: An open-source thinking agent that interleaves reasoning with tool use (300+ steps).

DeepSeek-OCR

New Research: Compressing long documents into highly efficient 2D visual tokens instead of text.

CoT Monitoring

New Research: Can AI models learn to hide their dangerous thoughts from safety monitors?

Transformer Sensitivity

New Research: Why are Transformers so robust? They naturally learn 'low sensitivity' functions.

Coherence (Segmentation)

New Research: An unsupervised method that uses 'sticky' keywords to find topic boundaries.

SFT vs. RL Generalization

New Research: Does Supervised Fine-Tuning just memorize while RL actually learns rules?
