Amplifier Technique

The Multi-Provider Swarm Technique

Eliminate AI blind spots by dispatching parallel agents across providers — then let them cross-review until they converge.

Claude × GPT × Gemini
The Problem

Every model has blind spots.

  • Patterns it misses — each model's training creates systematic gaps in what it notices
  • Assumptions it makes — confident but wrong on edge cases unique to its architecture
  • Biases it inherits — training data shapes what it considers "normal" code
You wouldn't have one person review their own code.

Why have one model review its own reasoning?
Single-model pattern
Model A finds issues → Model A reviews findings → Same blind spots persist
The Technique

Parallel dispatch with provider_preferences

Fire 3–6 delegate() calls in a single message, each targeting a different AI provider. They all work in parallel. Same task. Different minds.

The orchestrator session fires three delegate() calls in parallel:
  • Claude (Anthropic / claude-*): security analysis, focus on auth patterns
  • GPT (OpenAI / gpt-*): security analysis, focus on injection risks
  • Gemini (Google / gemini-*): security analysis, focus on data leaks
In Practice

Three calls. One message. Three providers.

// All three fire simultaneously from a single orchestrator message
delegate(agent="foundation:explorer",
         instruction="Analyze auth module for security issues",
         provider_preferences=[{"provider": "anthropic", "model": "claude-*"}],
         context_depth="none")

delegate(agent="foundation:explorer",
         instruction="Analyze auth module for security issues",
         provider_preferences=[{"provider": "openai", "model": "gpt-*"}],
         context_depth="none")

delegate(agent="foundation:explorer",
         instruction="Analyze auth module for security issues",
         provider_preferences=[{"provider": "google", "model": "gemini-*"}],
         context_depth="none")
Same agent type
Same task
Different minds
Cross-Review

Let them challenge each other.

After the initial sweep, route each provider's findings to a different provider for review. This catches false positives and surfaces overconfident mistakes.

PHASE 1: ANALYZE
  • Claude finds 8 issues
  • GPT finds 6 issues
  • Gemini finds 7 issues

PHASE 2: CROSS-REVIEW (each reviews a different provider's findings)
  • Claude reviews GPT's 6 → 5 confirmed, 1 false positive
  • GPT reviews Gemini's 7 → 6 confirmed, 1 false positive
  • Gemini reviews Claude's 8 → 7 confirmed, 1 false positive
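One simple reviewer-assignment scheme is a rotation, so no provider ever reviews its own findings. A minimal sketch in plain Python — the helper below is illustrative, not part of Amplifier's API:

```python
# Hypothetical helper (not an Amplifier primitive): rotate review duty so
# each provider reviews the next provider's findings, never its own.
def review_assignments(providers):
    n = len(providers)
    return {providers[i]: providers[(i + 1) % n] for i in range(n)}

pairs = review_assignments(["claude", "gpt", "gemini"])
# claude reviews gpt's findings, gpt reviews gemini's, gemini reviews claude's
```

With three providers this reproduces the pairing in the diagram above; with more providers the same rotation still guarantees every set of findings gets exactly one outside reviewer.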
The Consensus Pattern

Agreement = done.

Converge

All providers agree on a finding? High confidence. Ship it. No further review needed.

Diverge

Disagreement on a finding? Route it for another round of targeted cross-review.

Reject

Multiple providers flag a false positive? Drop it. Cross-review caught the error.

The orchestrator (your root session) synthesizes results, noting agreements and divergences. You're the final arbiter — the swarm gives you signal, not noise.
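The converge/diverge/reject triage can be expressed as a small function over per-provider verdicts. This is a plain-Python sketch of the decision rule only, not an Amplifier API:

```python
# Hypothetical triage helper: classify one finding from the verdicts
# ("confirm" / "reject") returned by the reviewing providers.
def triage(verdicts):
    if all(v == "confirm" for v in verdicts):
        return "converge"   # unanimous agreement: high confidence, ship it
    if all(v == "reject" for v in verdicts):
        return "reject"     # unanimous rejection: drop the false positive
    return "diverge"        # mixed verdicts: route for another review round

print(triage(["confirm", "confirm", "confirm"]))  # converge
print(triage(["confirm", "reject", "confirm"]))   # diverge
```

Only the "diverge" bucket costs you another round; "converge" and "reject" both terminate immediately, which is what keeps the loop cheap.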
Full Lifecycle

Dispatch → Cross-Review → Converge

STEP 1: Parallel Dispatch
  • 3–6 agents, different providers
  • Same task, clean context each (context_depth="none")

STEP 2: Cross-Review
  • Swap findings between providers
  • Each validates another's work (context_scope="agents")

STEP 3: Synthesize
  • Orchestrator merges results, agreements + divergences noted
  • You are the final arbiter

DIVERGE? Re-route for another round. ITERATE: targeted re-review, only disputed findings, fresh eyes.
CONVERGE → Done ✓
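The iterate-until-convergence loop above can be sketched as a plain-Python driver. Everything here is illustrative: `review_round` stands in for a real cross-review pass and is not an Amplifier primitive.

```python
# Illustrative driver: each round, only the still-disputed findings are
# re-reviewed, until everything converges, is dropped, or rounds run out.
def run_until_convergence(findings, review_round, max_rounds=3):
    confirmed, disputed = [], list(findings)
    for _ in range(max_rounds):
        if not disputed:
            break                          # CONVERGE: nothing left to argue
        verdicts = review_round(disputed)  # finding -> True / False / None
        confirmed += [f for f in disputed if verdicts[f] is True]
        # False = rejected false positive (dropped); None = still disputed
        disputed = [f for f in disputed if verdicts[f] is None]
    return confirmed, disputed
```

Note the shrinking work set: each round touches only the disputed findings, so later rounds are cheaper than the first full sweep.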
Battle-Tested

Building Attractor

A real application built using multi-provider swarm validation — not a demo, not a toy.

  • 200K+ tokens in session
  • 3+ providers cross-validating
  • N rounds until convergence

What happened

  • Multiple models analyzed the same codebase
  • Each caught issues the others missed
  • Cross-review eliminated false positives
  • Iterated until the implementation was solid

What it proved

  • Technique scales to real applications
  • Sustained quality over long sessions
  • Provider diversity catches real bugs
  • Convergence signals genuine confidence
No New Code Needed

It's just Amplifier.

This isn't a special feature. It's a technique that emerges naturally from existing primitives.

delegate() + provider_preferences

Route any agent to any provider with a single parameter

Parallel agent dispatch

Multiple delegate() calls in one message — they run concurrently

Context sharing between agents

context_scope="agents" passes prior agent results forward

Orchestrator pattern

Root session synthesizes all agent results into final decision

Four existing primitives. Zero new code. Just a technique that leverages what Amplifier already gives you.
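The concurrency claim — multiple delegate() calls in one message run together — can be modeled in plain Python. This is a sketch only: the `delegate` coroutine below is a stand-in for illustration, not Amplifier's actual API.

```python
import asyncio

# Stand-in for Amplifier's delegate(); the real call is not a Python coroutine.
async def delegate(provider):
    await asyncio.sleep(0.01)        # simulated agent work
    return f"{provider}: findings"

async def dispatch():
    # Like three delegate() calls in one orchestrator message:
    # all start together and are awaited as a batch, results in call order.
    return await asyncio.gather(
        delegate("anthropic"),
        delegate("openai"),
        delegate("google"),
    )

results = asyncio.run(dispatch())
print(results)
```

The point of the model: total wall time is roughly one agent's runtime, not the sum of all three.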
Try It

Stop trusting a single mind.

Next time you're reviewing code, analyzing architecture, or validating an approach — dispatch the swarm. Let them argue. Ship the consensus.

// Your next session
delegate(..., provider_preferences=[{"provider": "anthropic", ...}])
delegate(..., provider_preferences=[{"provider": "openai", ...}])
delegate(..., provider_preferences=[{"provider": "google", ...}])
Amplifier
Multi-Provider Swarm Technique
February 2026