Course · 10 modules · 49 lessons · 202 min

LangGraph Agents

Build production AI agents with LangGraph — tools, memory, human-in-the-loop, streaming, multi-agent systems, and deployment.

LangGraph Foundations
· Edges and Routing (4 min): Edges define execution flow between nodes — static edges for fixed paths, conditional edges for dynamic routing based on state, and parallel edges for concurrent fan-out execution.
· Graph Compilation (4 min): Calling `builder.compile()` validates the graph structure, resolves all edges and nodes, and returns a frozen, executable `CompiledGraph` object that supports invoke, stream, and async execution.
· Nodes (4 min): Nodes are Python functions that receive the current graph state, perform a unit of work, and return a partial state update dict — they are the computational building blocks of every LangGraph application.
· State and State Schema (4 min): State is a typed, shared data structure that flows through every node in a LangGraph graph, with reducers controlling how concurrent or sequential updates are merged.
· The Command API (4 min): The `Command` object lets a node simultaneously update state and control routing in a single return value, replacing the need for separate conditional edges in many scenarios.
· What Is LangGraph (4 min): LangGraph is a low-level orchestration framework that models AI agent logic as a directed graph of nodes, edges, and shared state.
Tools and Models
· Binding Tools to Models (4 min): `model.bind_tools(tools)` attaches tool definitions to a chat model so the LLM can generate structured `tool_calls` instead of plain text when it determines a tool should be used.
· Community Tools (3 min): The LangChain ecosystem provides a rich library of pre-built tools — from web search to code execution — available through `langchain-community` and partner packages, so you can equip agents with real-world capabilities without writing tool logic from scratch.
· LangChain @tool Decorator (4 min): The `@tool` decorator from `langchain_core.tools` transforms ordinary Python functions into structured, LLM-callable tools by extracting names, docstrings, and type hints automatically.
· MCP Tools Integration (5 min): The Model Context Protocol (MCP) lets LangGraph agents use tools hosted on external servers — connecting to databases, APIs, and services through a standardized protocol without writing custom tool implementations.
· Tool Error Handling (5 min): Robust tool error handling in LangGraph means catching failures, storing them in state, and routing back to the LLM so it can analyze what went wrong and adapt its approach — turning errors into recovery opportunities rather than crashes.
· ToolNode (4 min): `ToolNode` is a prebuilt LangGraph node that extracts tool calls from the last AI message, executes the corresponding tool functions (in parallel when possible), and returns `ToolMessage` results to the graph state.
· Tool Runtime and Context (5 min): `ToolRuntime` is a special parameter type that gives tools access to runtime context, long-term memory (store), and user-specific data — enabling tools to read and write persistent state without polluting the LLM's tool schema.
· Tool Schemas and Validation (4 min): Pydantic `BaseModel` with `Field()` descriptors lets you define rich, validated input schemas for LangChain tools, giving LLMs detailed JSON Schema instructions for correct parameter generation.
Building Your First Agent
· Manual ReAct Agent (4 min): Building the ReAct pattern by hand with `StateGraph` gives you full control over every node, edge, and routing decision in the agent loop.
· Prebuilt ReAct Agent (4 min): `create_react_agent` from `langgraph.prebuilt` is the highest-level abstraction for building a fully functional tool-calling agent in under 10 lines of code.
· Structured Output (4 min): `model.with_structured_output(Schema)` forces LLM responses into typed Pydantic models, turning free-form text into reliable, parseable data structures.
· Tool-Calling Loop (5 min): The tool-calling loop is the fundamental cycle where an LLM reasons about a task, invokes tools, observes results, and repeats until it can answer without further tool use.
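The tool-calling loop itself is framework-independent, so here is a dependency-free sketch of the cycle: reason, call tools, observe, repeat. The `fake_model` stands in for an LLM (it scripts one tool call, then a final answer), and the `add` tool and message shapes are invented for illustration.

```python
def fake_model(messages: list[dict]) -> dict:
    # Stand-in for an LLM: request one tool call, then answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "ai", "content": "",
                "tool_calls": [{"name": "add", "args": {"a": 2, "b": 3}}]}
    return {"role": "ai", "content": "The sum is 5.", "tool_calls": []}


TOOLS = {"add": lambda a, b: a + b}


def agent_loop(question: str) -> str:
    messages = [{"role": "user", "content": question}]
    while True:
        ai = fake_model(messages)
        messages.append(ai)
        if not ai["tool_calls"]:          # no tool calls: final answer
            return ai["content"]
        for call in ai["tool_calls"]:     # execute each requested tool
            result = TOOLS[call["name"]](**call["args"])
            messages.append({"role": "tool", "content": str(result)})


print(agent_loop("What is 2 + 3?"))
```

`create_react_agent` and the manual `StateGraph` version both implement this same cycle, with the LLM deciding when to stop calling tools.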
Memory and Persistence
· Checkpointers (4 min): Checkpointers save graph state at every step, enabling persistence, human-in-the-loop workflows, memory, time travel, and fault recovery.
· Long-Term Memory Store (4 min): Cross-thread memory using a Store lets agents persist knowledge — user preferences, learned facts, and accumulated context — across entirely separate conversations.
· State Inspection and Replay (4 min): Checkpointers let you inspect the current state, walk through the full history, and replay execution from any previous checkpoint — enabling time travel for debugging and recovery.
· State Schema Design (4 min): Well-designed state schemas keep agent data flat, typed, and organized with reducers for messages, audit trails, and error tracking — making persistence, debugging, and scaling straightforward.
· Thread-Based Memory (4 min): Thread-based memory gives an agent short-term recall within a single conversation by persisting messages across invocations that share the same `thread_id`.
Human in the Loop
· Approval Gates (4 min): An approval gate is a graph pattern where the agent proposes an action, pauses for human approval via `interrupt()`, then executes or cancels based on the human's response.
· Content Review Pattern (4 min): The content review pattern uses `interrupt()` to surface agent-generated content for human review and optional editing before the content is used downstream.
· Interrupt and Resume (4 min): The `interrupt()` function from `langgraph.types` pauses graph execution, surfaces a payload to the caller, and waits for human input before resuming via `Command(resume=value)`.
· Tool-Level Approval (4 min): Tool-level approval places an `interrupt()` call inside individual tool functions, pausing execution for human review before the tool's side effect runs, with support for parameter modification.
Streaming
· Async Streaming (4 min): LangGraph's `astream()` and `ainvoke()` methods provide non-blocking async execution, essential for concurrent web applications built with FastAPI or asyncio.
· Stream Modes (4 min): LangGraph provides four streaming modes — updates, values, messages, and events — each offering a different granularity of visibility into graph execution.
· Streaming in Production (4 min): Production streaming requires Server-Sent Events or WebSocket transport, stateful thread management, interrupt handling, and resilience against timeouts, disconnections, and backpressure.
· Token Streaming (4 min): The `"messages"` stream mode delivers LLM output token-by-token as `(message_chunk, metadata)` tuples, enabling responsive real-time chat interfaces.
Multi-Agent Systems
· Agent Handoffs (4 min): Agents transfer control directly to each other using the Command API, forming a peer network where any agent can hand off to any other without a central supervisor.
· Evaluator-Optimizer Pattern (4 min): An iterative loop where one LLM generates content and another evaluates it with structured feedback, repeating until the output meets a defined quality threshold.
· Subgraph Architecture (4 min): Each agent is a fully independent StateGraph with its own state schema, compiled separately and invoked as a single node inside a parent graph for maximum encapsulation and modularity.
· Supervisor Pattern (3 min): A central supervisor agent receives every user request, decides which specialist sub-agent should handle it, routes work via conditional edges, and aggregates results before deciding the next step.
Observability
· Evaluation with Datasets (4 min): LangSmith datasets and the `evaluate()` function enable systematic, repeatable testing of agent behavior with custom evaluators and regression tracking.
· LangSmith Setup (4 min): LangSmith provides automatic observability for LangChain and LangGraph applications through simple environment variable configuration.
· Production Monitoring (4 min): LangSmith provides production dashboards, user feedback collection, annotation queues, and alerting to monitor agent health and catch degradation.
· Tracing and Debugging (4 min): LangSmith traces provide nested span visibility into every node, edge, and LLM call, with the `@traceable` decorator extending coverage to custom functions.
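The "environment variable configuration" the setup lesson mentions amounts to a few settings; a sketch with a placeholder key and an invented project name (set these in your shell or deployment config rather than in code for real use).

```python
import os

# Tracing requires no code changes: LangChain and LangGraph pick up
# these environment variables automatically at runtime.
os.environ["LANGSMITH_TRACING"] = "true"
os.environ["LANGSMITH_API_KEY"] = "your-api-key"      # placeholder
os.environ["LANGSMITH_PROJECT"] = "my-agent-project"  # optional: groups traces
```

With these set, every subsequent graph invocation in the process is traced to the named project.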
Deployment
· Cloud Provider Deployment (5 min): Deploying LangGraph agents to AWS, GCP, or Azure involves packaging the agent as a Docker container and running it on a managed container service — with each provider offering trade-offs in complexity, cost, and scaling behavior.
· Containerization (4 min): Docker packages your LangGraph agent, its dependencies, and runtime into a portable container that runs identically everywhere — from your laptop to production servers.
· FastAPI Deployment (4 min): Wrapping a LangGraph agent in FastAPI gives you a production-ready API with sync and streaming endpoints, full control over auth and rate limiting, and zero vendor lock-in.
· LangGraph Dev Server (5 min): The `langgraph dev` command launches a built-in development server with an API, a visual LangGraph Studio UI, and auto-generated docs — the fastest way to test and debug agents locally.
· LangGraph Platform (6 min): LangGraph Platform (deployed via LangSmith) is a managed hosting service purpose-built for stateful, long-running agents — handling infrastructure, scaling, persistence, and operational concerns so you can deploy directly from a GitHub repository.
· LangGraph SDK (4 min): The `langgraph-sdk` package provides Python and JavaScript clients for interacting with any LangGraph server — managing threads, streaming runs, inspecting state, and controlling agent execution through a unified API.
· Production Checklist (4 min): Twelve essential steps that transform a working LangGraph prototype into a reliable, observable, and maintainable production system.
Practical Projects
· Customer Support Agent (4 min): A manually constructed support agent that routes simple questions to FAQ lookup, order queries to an order tool, and complex or sensitive issues to a human operator via `interrupt()`.
· Multi-Agent Content Pipeline (4 min): A supervisor-orchestrated pipeline where a researcher, writer, and editor agent collaborate with an evaluator-optimizer loop for iterative content improvement.
· Research Assistant Agent (4 min): A multi-tool research agent that searches the web, synthesizes findings, and produces structured reports using `create_react_agent` with memory for iterative, multi-turn research sessions.