One-Line Summary: The capstone decision: pick a harness (interactive surface), decide whether you need an orchestration platform on top (multi-agent / autopilot), pick an SDK if you're building rather than using, and lean on MCP and configuration files to keep the choice reversible — most of the cost of getting it wrong is portability cost, which is partly mitigable.

Prerequisites: This concept synthesizes the entire course; expect to reference all eight modules.

The Decision

Build the stack from the bottom up:

1. Pick a base harness

Driving questions: where does the user live (terminal, IDE, browser), what's the team's primary model, how much customization will you do?

If... → Pick
Terminal-first, want hooks/sub-agents/plugins → Claude Code
Terminal-first, OpenAI ecosystem → Codex CLI
IDE-first, want IDE intimacy → Cursor
OSS, headless, agentic-OS-shaped → OpenHands
Web-first, lighter touch → Aider, Continue

2. Add an orchestration platform if needed

Driving question: do you need multi-agent / autopilot / federation?

If... → Add
Single user, single agent, interactive → Nothing (the base harness is enough)
Single user, want background workers / autopilot → ruflo
Multi-agent within one trust boundary → ruflo, or build with sub-agents
Federated, cross-machine, zero-trust → ruflo (federation mode)
Long-horizon agentic OS → OpenHands or ruflo

3. Decide on a framework or SDK if you're building

Driving question: are you using a harness or building agent-shaped functionality into your own application?

If... → Use
Extending an existing harness → Plugins, sub-agents, slash commands, hooks
Building a Claude-powered app → Claude Agent SDK
Building a graph-shaped flow → LangGraph
Building a dialogue-shaped flow → AutoGen / AG2
Building a role-shaped flow → CrewAI
Building a Next.js / Node app → Mastra
Building on Gemini → Google ADK
Building on OpenAI → OpenAI Agents SDK

4. Standardize on MCP for tools

This is independent of the rest. Every modern harness supports MCP; tools you build for MCP work everywhere. Don't write tools in harness-specific formats unless you have a reason.
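To make the portability concrete, here is a sketch of the same hypothetical `ticket-tools` MCP server registered via a `mcpServers` stanza. As of this writing, Claude Code reads a project-scoped `.mcp.json` and Cursor reads `.cursor/mcp.json`, and both accept this shape; the server name, command, and module path are invented for illustration, so check each harness's docs before relying on it.

```json
{
  "mcpServers": {
    "ticket-tools": {
      "command": "python",
      "args": ["-m", "ticket_tools.server"]
    }
  }
}
```

The point: the tool logic lives in the server process, so switching harnesses means moving this stanza, not rewriting the tool.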

5. Keep configuration portable

Write a CLAUDE.md (or equivalent) in straight markdown; plain markdown converts easily to AGENTS.md, .cursorrules, etc. when the day comes to switch.
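A minimal sketch of what "straight markdown" means in practice (contents invented for illustration): plain headings and bullets, no harness-specific directives, so any other harness can consume it as ordinary instructions.

```markdown
# Project conventions

- Python backend, TypeScript frontend; run `make test` before committing.
- Prefer small PRs: one logical change each.
- Never modify files under `infra/` without human review.

# Architecture notes

- API handlers live in `src/api/`; business logic in `src/core/`.
```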

What Drives the Decision

Three factors dominate:

  1. Team's existing tools and models. A team on OpenAI's platform with Azure deployment doesn't pick Claude Code first. A team with Anthropic credits and a customization mindset doesn't pick Codex CLI.
  2. Customization appetite. Hook-rich harnesses pay off when you want to customize. They're overhead when you just want it to work.
  3. Workload shape. Single-developer interactive coding → IDE/terminal harness. Long-horizon autonomous workflows → orchestration platform. Production agent service → continuous-loop platform.
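The three factors above can be sketched as a toy decision function. This is illustrative only: the category names and the mapping are a loose encoding of this section, not any tool's API, and real decisions will weigh more inputs than three strings.

```python
def pick_stack(ecosystem: str, customization: str, workload: str) -> dict:
    """Toy encoding of the three dominant factors (illustrative, not exhaustive)."""
    # Factor 1: the team's existing platform narrows the harness choice.
    harness = {
        "anthropic": "Claude Code",
        "openai": "Codex CLI",
        "mixed": "Cursor",
    }.get(ecosystem, "Cursor")

    # Factor 2: hook-rich harnesses only pay off with customization appetite;
    # with none, they're overhead.
    if customization == "low" and harness == "Claude Code":
        harness = "Cursor"  # "just want it to work"

    # Factor 3: workload shape decides what sits on top of the harness.
    orchestration = {
        "interactive": None,
        "long-horizon": "orchestration platform",
        "production-service": "continuous-loop platform",
    }[workload]

    return {"harness": harness, "orchestration": orchestration}

# Anthropic contract, high customization appetite, mostly interactive work:
print(pick_stack("anthropic", "high", "interactive"))
# → {'harness': 'Claude Code', 'orchestration': None}
```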

A Worked Example

A small startup, 8 engineers, mixed Python/TypeScript codebase, on AWS with a Claude API contract.

  • Base harness: Claude Code (terminal, fits the team's vim/tmux workflow, hooks let them ship custom permission policies).
  • Orchestration: Adopt ruflo for the team's batch workflows (testgen, security audit). Don't use ruflo for interactive sessions — too much overhead.
  • Framework: For the customer-facing app's agent feature, use the Claude Agent SDK (matches the harness, lower context-switch cost).
  • MCP: All custom tools written as MCP servers from day one.
  • Memory: CLAUDE.md files committed to repos; team members can BYO .cursorrules if they prefer Cursor for some tasks.

A different team — a solo developer building agent features into a Next.js app — would pick differently: Cursor as harness (IDE-first), Mastra as framework (TypeScript-first), and no orchestration platform (for one person it's pure overhead).

Why This Decision Is Reversible (Mostly)

Switching costs are real but bounded:

  • Configuration files convert mechanically (CLAUDE.md → AGENTS.md is mostly find-and-replace).
  • MCP tools port without modification.
  • Sub-agent definitions convert with effort but mostly mechanically.
  • Trajectories and adapters don't port.

Investing in MCP and portable configuration is the practical way to lower switching costs. The ratchets are: trajectories, parametric adapters, deeply harness-specific hooks. These bind you to the harness you started with — appropriately so once you've committed.

Connections to Other Concepts

This concept synthesizes the course. Every concept you've read so far feeds the decision.

Direct connections:

  • the-2026-harness-landscape.md — The roster.
  • harness-vs-framework-vs-sdk.md — The category boundaries.
  • claude-code-vs-codex-vs-cursor.md — Harness comparison.
  • langgraph-vs-autogen-vs-crewai.md — Framework comparison.
  • openai-agents-sdk-and-mastra-and-adk.md — SDK comparison.
  • topology-selection-decision-tree.md — The orchestration-side decision.
  • memory-portability-across-harnesses.md — Why portable config matters.

Further Reading

You've reached the end of the course. Re-read any concept that didn't quite click; the cross-links are designed to make second readings productive.