Amplifier Ecosystem Active
Multi-Provider
Agent Swarms
Cross-Validating AI with AI
The Insight
What if you could get a second opinion on every code change?
And a third. And a fourth.
Code review catches bugs. Pair programming catches design flaws. But what if you could run every architectural decision past four different AI models simultaneously—and only proceed when they converge on the same answer?
Origin Story
One engineer started an experiment.
“What if I ran the same prompt through Anthropic, OpenAI, Gemini, and GitHub Copilot at the same time? If three out of four models agree on the architecture, I know I'm on the right track.”
— Samuel Lee, MADE:Explorations
Samuel Lee pioneered running four providers simultaneously for cross-validation. What started as a personal workflow is now becoming standard practice across the MADE team.
Why This Works
Kernel provides mechanism.
Modules provide policy.
Amplifier's architecture was designed for flexibility—each provider is just a module you compose into your bundle. Nobody planned for cross-validation. The architecture made it inevitable.
[Architecture diagram] Your bundle composes the provider modules (provider-anthropic, provider-openai, provider-gemini, provider-copilot) on top of the kernel: Amplifier Core, which handles orchestration, tools, and sessions.
How It Works
Composition, not configuration.
Adding a provider is a three-line YAML change. Running four simultaneously is just adding four includes.
```yaml
providers:
  - module: provider-anthropic
    config:
      default_model: claude-opus-4-6
      enable_1m_context: true
  - module: provider-openai
    config:
      default_model: gpt-5
  - module: provider-gemini
    config:
      default_model: gemini-3-pro-preview
  - module: provider-github-copilot
    config:
      default_model: claude-sonnet-4

provider_preferences:
  - provider: anthropic
    model: claude-opus-4-6
  - provider: openai
    model: gpt-5*
```
The Pattern
Convergence = Confidence
Run the same question through multiple models. When they converge, you know you're right. When they diverge, you know to dig deeper.
3 of 4 models agree → High Confidence: Ship It
The dissenting model isn't wrong—it's a signal. It may have caught an edge case the others missed, or it may have a different but valid perspective worth examining.
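The convergence check can be sketched in a few lines of Python. This is a toy illustration of the voting logic, not Amplifier's actual API; the provider callables here are hypothetical stand-ins.

```python
from collections import Counter

def cross_validate(prompt, providers):
    """Run the same prompt through every provider and measure agreement.

    providers: mapping of provider name -> callable(prompt) -> answer.
    Returns (consensus_answer, confidence, dissenting_providers).
    """
    answers = {name: ask(prompt) for name, ask in providers.items()}
    tally = Counter(answers.values())
    consensus, votes = tally.most_common(1)[0]
    confidence = votes / len(providers)
    dissenters = [name for name, a in answers.items() if a != consensus]
    return consensus, confidence, dissenters

# Toy swarm: three of four "models" agree, so confidence is 0.75.
swarm = {
    "anthropic": lambda p: "event-driven",
    "openai":    lambda p: "event-driven",
    "gemini":    lambda p: "event-driven",
    "copilot":   lambda p: "layered",
}
answer, conf, dissent = cross_validate("Which architecture?", swarm)
assert (answer, conf, dissent) == ("event-driven", 0.75, ["copilot"])
```

A real swarm would compare summaries or normalized answers rather than exact strings, but the shape is the same: tally the votes, ship on convergence, investigate the dissenter.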
New Provider
GitHub Copilot
joins the swarm.
A brand-new provider module brings every model in your Copilot subscription into Amplifier—no extra API keys required.
Deny + Destroy Pattern
Each API call creates an ephemeral Copilot session. Tool calls are captured, then the session is destroyed—keeping Amplifier in full control of orchestration.
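The lifecycle described above can be sketched as a context manager. The `create`/`destroy` callables below are hypothetical stand-ins for the Copilot session API; the point is the guaranteed teardown, which keeps orchestration with the caller.

```python
from contextlib import contextmanager
from uuid import uuid4

@contextmanager
def ephemeral_session(create, destroy):
    """Create a throwaway session, hand it to the caller, and guarantee
    teardown even if the call raises: the session never outlives the call."""
    session_id = create()
    try:
        yield session_id
    finally:
        destroy(session_id)

# Toy backend standing in for the real session API.
live = set()
def create():
    sid = uuid4().hex
    live.add(sid)
    return sid
def destroy(sid):
    live.discard(sid)

with ephemeral_session(create, destroy) as sid:
    assert sid in live   # session exists only for the duration of the call
assert not live          # destroyed afterwards
```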
Zero API Keys
Authenticates through your existing GitHub Copilot subscription. Run copilot auth login and you're done. Every model in your plan is available.
Full Model Catalog
Claude Opus 4.5, Sonnet 4, GPT-5, GPT-5.1 Codex, Gemini 3 Pro—all accessible through a single provider module.
8 Commits, 1 Day
Built by Marc Goodner in a single day. 10 source modules, comprehensive test suite, 80% coverage threshold enforced.
Anthropic Provider
1M tokens.
The Anthropic provider now supports 1 million token context windows for Opus and Sonnet models.
Model-Family-Aware Defaults
The provider now detects model families and adjusts automatically:
Opus: 128K output · 64K thinking
Sonnet: 64K output · 32K thinking
Haiku: 64K output · 32K thinking
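The detection might look like this sketch. The function name and the fallback branch are assumptions; the per-family budgets are taken from the list above.

```python
def family_defaults(model: str) -> dict:
    """Pick output/thinking token budgets from the model family name."""
    families = {
        "opus":   {"max_output_tokens": 128_000, "thinking_budget": 64_000},
        "sonnet": {"max_output_tokens": 64_000,  "thinking_budget": 32_000},
        "haiku":  {"max_output_tokens": 64_000,  "thinking_budget": 32_000},
    }
    for family, defaults in families.items():
        if family in model:
            return defaults
    # Unknown family: fall back to the conservative budgets.
    return {"max_output_tokens": 64_000, "thinking_budget": 32_000}

assert family_defaults("claude-opus-4-6")["max_output_tokens"] == 128_000
assert family_defaults("claude-sonnet-4")["thinking_budget"] == 32_000
```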
Adaptive Thinking
Changed from type: "enabled" to type: "adaptive"—the model now decides when to think, yielding better performance across all task types.
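As a config fragment, the change described above might look like this (field placement assumed from the prose):

```yaml
thinking:
  type: adaptive   # was: enabled; the model now decides when to think
```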
Pioneered by Samuel Lee
Samuel's original commit added the beta headers infrastructure that made 1M context possible. Brian Krabach then extended it to Opus models with model-family-aware defaults.
Interleaved Thinking
Claude can now think between tool calls, not just at the start. Critical for complex multi-step reasoning across huge context windows.
Evolution
From experiment to standard practice.
Nov 2025
Samuel Lee adds beta headers to Anthropic provider
The foundational commit that enables 1M context windows. Introduces the infrastructure for pushing provider capabilities forward.
Dec 2025
1M context goes live for Sonnet models
Context window and max output token configuration added to provider info. Streaming enabled by default for large contexts.
Dec 2025
Interleaved thinking + Claude 4.5 optimization
Thinking budget increased from 10K to 32K tokens. Models can think between tool calls for better multi-step reasoning.
Feb 5, 2026
1M context extended to Opus · Adaptive thinking ships
Model-family-aware defaults give Opus 128K output, 64K thinking. Thinking mode switches from “enabled” to “adaptive” for opus-4-6 compatibility.
Feb 10, 2026
GitHub Copilot provider module launches
Marc Goodner builds the complete provider in a single day. 8 commits, 10 modules, full streaming + tool use support.
Feb 2026
Multi-provider swarms become team practice
What Samuel started as an experiment is now standard workflow on the MADE team. Four providers running simultaneously for cross-validation.
In Production
Where swarms are shipping value.
These aren't demos. They're daily workflows running on the MADE team right now.
MADE Support Bundle: Docs Freshness
Multiple models validate documentation pipelines, catching staleness and inconsistencies that a single model misses.
Slack Integration Debugging
The Slack bridge connects to Amplifier sessions using the configured provider swarm. Real-time debugging with cross-validated suggestions.
NL Spec Creation
Natural language specifications validated across providers. When Claude, GPT, and Gemini all parse the spec the same way, it's unambiguous.
The Slack Connection
The amplifier-distro Slack bridge is a 2,400-line integration connecting Slack workspaces to Amplifier sessions. Messages flow through whatever provider swarm is configured in the user's bundle.
Slack → SocketMode → EventHandler → SessionManager → Bridge API → Amplifier (your providers)
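The message path can be sketched with stand-in classes. These names echo the flow above but are illustrative only; the real bridge is a 2,400-line integration, and none of these signatures are its actual API.

```python
class Bridge:
    """Forwards a message to whichever provider swarm the bundle configured."""
    def __init__(self, providers):
        self.providers = providers
    def handle(self, text):
        return [f"{provider}:{text}" for provider in self.providers]

class SessionManager:
    """Maps an incoming event to an Amplifier session via the bridge."""
    def __init__(self, bridge):
        self.bridge = bridge
    def route(self, event):
        return self.bridge.handle(event["text"])

class EventHandler:
    """Receives Socket Mode events and hands them to the session layer."""
    def __init__(self, sessions):
        self.sessions = sessions
    def on_message(self, event):
        return self.sessions.route(event)

handler = EventHandler(SessionManager(Bridge(["anthropic", "openai"])))
assert handler.on_message({"text": "hello"}) == ["anthropic:hello", "openai:hello"]
```

The layering is the point: each stage knows only its neighbor, so the provider swarm can change without touching the Slack-facing code.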
6 Providers in Catalog
Anthropic, OpenAI, Gemini, xAI, Ollama, and Azure OpenAI. Mix and match. The distro's bundle composer makes it trivial.
The Pioneer
Samuel Lee's footprint
across the ecosystem.
A senior engineer and high-energy floater across the ecosystem's repos, and the person who proved multi-provider swarms work in practice.
amplifier-core
1 commit. Contributing to the kernel itself.
amplifier-foundation
8 commits. Building and refining the bundle system.
provider-anthropic
1 foundational commit. Beta headers that enabled 1M context for the entire team.
amplifier-distro
8 commits. Config validation, server fixes, Slack Socket Mode dedup window.
amplifier-app-cli
7 commits. Improving the command-line experience.
made-support bundle
2 commits. The support bundle where cross-validation first proved its value.
By the Numbers
4
Providers Running
Simultaneously
6
Providers in the
Distro Catalog
1M
Token Context
Window (Anthropic)
128K
Opus Max
Output Tokens
8
Copilot Provider
Commits (1 Day)
3/4
Model Agreement
= Ship It
The Takeaway
The best features are the ones you didn't plan.
Multi-provider was designed for flexibility and vendor independence. Cross-validation was an emergent behavior—discovered by a team member who asked “what if?” and had the architecture to try it immediately.
Designed For
Provider flexibility. No vendor lock-in. User chooses their models.
Discovered Use
AI cross-validation. Swarm intelligence. Convergence-based confidence.
Sources & Methodology
How this story was researched.
- Data as of: February 20, 2026
- Feature Status: Active — multi-provider is a core architecture feature, actively used in daily workflows
- Primary Contributors:
  - Samuel Lee — multi-provider swarm pioneer, beta headers infrastructure
  - Marc Goodner (robotdad) — GitHub Copilot provider module
  - Brian Krabach — 1M context / Opus extensions, model-family-aware defaults
- Research Performed:
  - amplifier-core coordinator.py: grep for “provider” mount points and configuration
  - amplifier-foundation cache: agents behavior bundle inspection
  - Provider module repositories: anthropic, openai, gemini, github-copilot providers
- Known Gaps: exact commit counts for individual contributors not verified in this regeneration
Get Started
Try it yourself.
Add a second provider to your bundle. Run the same task through both. See what happens when they agree—and pay close attention when they don't.
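A minimal two-provider bundle, trimmed from the full configuration shown earlier:

```yaml
providers:
  - module: provider-anthropic
    config:
      default_model: claude-opus-4-6
  - module: provider-openai
    config:
      default_model: gpt-5
```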
GitHub Copilot Provider
microsoft/amplifier-module-provider-github-copilot
Zero API keys. Just your Copilot subscription.
Anthropic Provider
microsoft/amplifier-module-provider-anthropic
1M context. Adaptive thinking. Model-family defaults.
Amplifier Distro
microsoft/amplifier-distro
6 providers. Bundle composer. Multi-provider assumed.