One-Line Summary: An adaptive topology switches between queen-led, mesh, hive-mind, and other shapes at runtime based on workload signals (task complexity, agent count, latency, cost) — the most sophisticated coordination pattern, exemplified by ruflo's adaptive mode, with significant complexity cost.
Prerequisites: Queen-led hierarchy, mesh topology, hive-mind pattern, topology as a design decision
What Is Adaptive Topology Switching?
A static topology — pick queen-led on day one and never change it — works fine for most workloads. An adaptive topology says: the right topology depends on what you're doing right now, so reconfigure at runtime. Start in queen-led for routine subtasks; promote to mesh when peers need to negotiate; collapse to hive mind for exploratory phases.
Ruflo's "adaptive" topology is the canonical implementation. The system observes signals — task graph fan-out, peer disagreement rate, time-since-progress, cost-burn-rate — and selects the topology that historically performed best for the current signature.
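To make the signal-to-topology mapping concrete, here is a minimal sketch of selecting by historical "signature". All names (`signature`, `best_by_signature`, the bucket thresholds) are hypothetical illustrations, not ruflo's actual internals; a real system would learn the table from past runs rather than hard-code it.

```python
# Sketch: discretize workload signals into a coarse signature, then look up
# which topology historically performed best for that signature.
# Thresholds and names are illustrative assumptions.

def signature(fan_out: int, disagreement: float, stalled: bool) -> tuple:
    """Bucket raw signals into a coarse, lookupable key."""
    return (
        "high-fanout" if fan_out > 4 else "low-fanout",
        "contested" if disagreement > 0.4 else "aligned",
        "stalled" if stalled else "progressing",
    )

# In a real system this table would be learned from historical traces;
# it is hard-coded here purely for illustration.
best_by_signature = {
    ("low-fanout", "aligned", "progressing"): "queen-led",
    ("high-fanout", "contested", "progressing"): "mesh",
    ("low-fanout", "aligned", "stalled"): "hive-mind",
}

def select_topology(fan_out: int, disagreement: float, stalled: bool,
                    default: str = "queen-led") -> str:
    """Fall back to the cheapest topology for unseen signatures."""
    return best_by_signature.get(signature(fan_out, disagreement, stalled), default)
```

The fallback-to-default branch matters: unseen signatures should land on the cheapest topology, not an arbitrary one.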
This is a sophisticated pattern. It is also expensive. The decision logic is itself an agent; the topology change has overhead; debuggability suffers. Adaptive topology is the right answer for systems running many heterogeneous tasks at high volume; it is overkill for almost everything else.
How It Works
Three components:
- Signals: features describing the current workload. Common ones: number of active agents, task graph depth, agreement rate among peers, recent token spend, recent latency, presence of stuck agents.
- Policy: a function from signals to topology choice. Can be a hand-tuned ruleset, a learned model, or an LLM that classifies the situation.
- Switching mechanism: how the harness implements the change without losing in-flight work. Usually: pause active agents, snapshot state, re-instantiate the swarm in the new topology, resume.
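The three components above can be sketched as types: a signal record, a policy function, and a switch stub. This is a hand-tuned ruleset under assumed signal names (`turns_since_progress`, `peer_agreement_rate`, etc.), not ruflo's learned policy.

```python
from dataclasses import dataclass
from enum import Enum

class Topology(Enum):
    QUEEN = "queen-led"
    MESH = "mesh"
    HIVE = "hive-mind"

@dataclass
class Signals:
    """Features describing the current workload (hypothetical field names)."""
    active_agents: int
    task_graph_depth: int
    peer_agreement_rate: float   # 0.0-1.0 agreement among peers
    turns_since_progress: int
    cost_burn_rate: float        # spend rate relative to budget

def choose_topology(s: Signals) -> Topology:
    """One possible hand-tuned policy; a learned model or an LLM
    classifier could replace this function wholesale."""
    # Exploratory phase: nothing is moving, so widen the search.
    if s.turns_since_progress > 5:
        return Topology.HIVE
    # Peers disagree and there are enough of them to negotiate directly.
    if s.peer_agreement_rate < 0.6 and s.active_agents >= 3:
        return Topology.MESH
    # Default: the cheapest coordination shape for routine work.
    return Topology.QUEEN
```

The switching mechanism itself (pause, snapshot, re-instantiate, resume) is harness-specific and omitted here; the point is that the policy is an ordinary function from signals to a topology choice.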
Ruflo's adaptive mode uses a learned policy: the SONA pattern engine has historical data about which topology worked best for which signal pattern.
Why It Matters
For systems running thousands of heterogeneous tasks daily, adaptive topology is the difference between "always paying for the most expensive topology" and "paying for the cheapest topology that handles the current task." At scale, that gap is significant.
The other reason adaptive matters: it is the natural endpoint of a system that already supports multiple topologies. Once you have queen, mesh, and hive-mind implementations, switching between them is mostly engineering — and the cost savings are real.
Key Technical Details
- Switching is not free: Each transition adds latency (snapshot + re-instantiate) and may invalidate in-flight tool calls. Don't switch too often.
- Hysteresis is mandatory: Without it the system thrashes between topologies. Require signals to persist for N turns before switching.
- Logging every switch is essential for debugging: Why did we go from queen to mesh at turn 47? Without logs, you cannot answer that.
- Default to the simplest topology: Adaptive should only escalate when signals justify it. Starting in queen-led and rarely switching is safer than starting in mesh.
- Test policies against historical workloads: Use traces of past tasks; replay them through different policies; compare cost/quality.
- Adaptive policy is itself a maintenance burden: It is code that decides operationally important things. Treat it like a production service.
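Two of the details above, hysteresis and switch logging, compose naturally into one small gate. A sketch, with assumed names (`HysteresisGate`, `persistence`); the only claims it encodes are the ones stated above: require the proposal to persist for N turns, and log every transition.

```python
import logging
from collections import deque

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("topology-switch")

class HysteresisGate:
    """Only commit a switch after the policy has proposed the same
    topology for `persistence` consecutive turns, and log every
    transition so "why did we switch at turn 47?" is answerable."""

    def __init__(self, initial: str, persistence: int = 3):
        self.current = initial
        self.persistence = persistence
        self.recent = deque(maxlen=persistence)

    def propose(self, topology: str, turn: int) -> str:
        self.recent.append(topology)
        stable = (
            len(self.recent) == self.persistence
            and all(t == topology for t in self.recent)
        )
        if stable and topology != self.current:
            # Every switch is logged: essential for post-hoc debugging.
            log.info("turn %d: switching %s -> %s", turn, self.current, topology)
            self.current = topology
            self.recent.clear()
        return self.current
```

A single-turn flicker in the policy output never triggers a switch; only a sustained signal does, which is exactly the thrash-prevention the hysteresis bullet demands.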
How Harnesses & Frameworks Implement This
| Harness / Framework | Adaptive topology |
|---|---|
| Claude Code | ✗ — single topology per session |
| Claude Agent SDK | DIY |
| ruflo | First-class — ruflo-swarm adaptive mode driven by SONA |
| LangGraph | DIY — runtime graph rewriting |
| AutoGen | Limited — speaker selection can change but topology is fixed |
| CrewAI | ✗ |
| OpenAI Agents SDK | DIY |
| Codex CLI / Cursor | ✗ |
Connections to Other Concepts
- queen-led-hierarchy.md, mesh-topology.md, hive-mind-pattern.md — The topologies switched between.
- topology-as-design-decision.md — Why static is usually fine.
- sona-self-learning-neural-patterns.md — Ruflo's policy substrate.
- harness-cost-models.md — The savings adaptive can capture.
Further Reading
- ruvnet, ruflo-swarm adaptive mode docs.