Amplifier Philosophy

The Still Center

How ~4,400 lines of discipline changed what AI agents could become

Active
February 2026
The Problem

Every framework becomes the ceiling

Every AI agent framework follows the same pattern — start ambitious, grow features, become the application. LangChain, CrewAI, AutoGen — they all accumulate opinions. The framework becomes the ceiling.

Accumulate
Start with a clean API. Add chains, add agents, add memory, add retrieval. Each addition is reasonable. The total is not.
Absorb
Your chains become their objects. Your agents inherit their abstractions. Your prompts live inside their templates. Your logic runs through their middleware.
Constrain
When the next paradigm arrives — a new model, a new pattern, a new way of thinking — you can't move. The framework is the ceiling. You rewrite.
The Insight

What if the kernel had no opinions?

What if you built an AI agent kernel the way Linus Torvalds built Linux? A tiny, stable center that provides mechanisms only. No opinion about which LLM to use. No opinion about how to orchestrate. No opinion about which tools to offer.

load
Discover and load modules at runtime
manage
Create and maintain session state
dispatch
Route events through the system
enforce
Hold modules to protocol contracts

That's it. Everything else is a module.
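The four mechanisms can be pictured as one small surface. This is a minimal sketch with invented names (`Kernel`, `Module`, the `session:start` event), not amplifier-core's actual API:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Protocol


class Module(Protocol):
    """Anything the kernel loads must satisfy this contract (enforce)."""
    name: str

    def mount(self, kernel: "Kernel") -> None: ...


@dataclass
class Kernel:
    modules: dict[str, Module] = field(default_factory=dict)
    sessions: dict[str, dict[str, Any]] = field(default_factory=dict)
    hooks: dict[str, list[Callable[[dict], None]]] = field(default_factory=dict)

    def load(self, module: Module) -> None:
        # enforce: reject anything that doesn't meet the contract
        if not hasattr(module, "mount"):
            raise TypeError(f"{module!r} does not satisfy the Module contract")
        self.modules[module.name] = module
        module.mount(self)

    def create_session(self, session_id: str) -> dict[str, Any]:
        # manage: the kernel owns session state, nothing else does
        return self.sessions.setdefault(session_id, {})

    def on(self, event: str, handler: Callable[[dict], None]) -> None:
        self.hooks.setdefault(event, []).append(handler)

    def dispatch(self, event: str, payload: dict) -> None:
        # dispatch: route events; the kernel has no opinion on what they mean
        for handler in self.hooks.get(event, []):
            handler(payload)
```

Nothing in this sketch knows about LLMs, tools, or agent loops; all of that would arrive as modules through load().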

The Discipline

One question governs everything

"Could two teams want different behavior?"
If yes → Policy → Module
  • Which LLM provider to call
  • How to orchestrate agent loops
  • What tools to provide
  • How to approve actions
  • What context to persist
If no → Mechanism → Kernel
  • Load and validate modules
  • Create and track sessions
  • Dispatch events to hooks
  • Enforce protocol contracts
  • Provide extension points
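The test can be made concrete in code: the kernel ships only the contract (mechanism), and each team's answer lives in its own module (policy). The Provider shape below is a hypothetical illustration, not Amplifier's actual protocol:

```python
from typing import Protocol


class Provider(Protocol):
    """Mechanism: the kernel defines only the shape of a provider."""

    def complete(self, prompt: str) -> str: ...


# Policy: two teams, two modules, zero kernel changes.
class TeamAProvider:
    def complete(self, prompt: str) -> str:
        return f"[model-a] {prompt}"


class TeamBProvider:
    def complete(self, prompt: str) -> str:
        return f"[model-b] {prompt}"


def run(provider: Provider, prompt: str) -> str:
    # Callers depend on the contract, never on a concrete provider.
    return provider.complete(prompt)
```

Swapping providers is a wiring change in a module, never an edit to the kernel.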

This isn't aspirational. On Day 2, a MockProvider was removed from the kernel (commit bc63e37). The kernel has never contained a single provider, tool, or product decision since.

The Evidence

Small enough to audit in an afternoon

~4,400
lines of essential kernel
README claims ~2,600*
6
protocol types
the entire contract surface
50
canonical events
across 16 categories
5
runtime dependencies
click · pydantic · pyyaml · tomli · typing-extensions

*The README's ~2,600-line claim likely reflects a narrower scope or earlier version. The full essential kernel — excluding tests and docs — measures ~4,356 lines across 30 source files. Compare this to frameworks with tens of thousands of files and implicit behaviors you'll never fully understand.
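The numbers above can be reproduced with a few standard commands. The snippet is illustrative: it builds a throwaway tree so it runs anywhere; pointed at a real amplifier-core checkout, the same commands reproduce the audit.

```shell
# Illustrative only: a throwaway tree stands in for a local amplifier-core checkout.
mkdir -p amplifier-core/src && printf 'VERSION = "0.1"\n' > amplifier-core/src/kernel.py
cd amplifier-core

# Count source files and lines (the article reports 30 files, ~4,356 lines)
find src -name '*.py' | wc -l
find src -name '*.py' -print0 | xargs -0 wc -l

# Zero-provider verification: no provider implementations in the kernel
grep -rn 'class .*Provider' src || echo "no provider implementations in kernel"
```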

The Metaphor

Like a crystal seed — vanishingly small, structurally perfect, absolutely still. It defines the geometry that allows something vast to grow around it.

It doesn't tell each atom where to go — it establishes the rules of attachment. The crystal that forms is emergent, diverse, enormous — but every facet traces back to that tiny, stable center.

The Radical Decision

The execution loop is a module

In every other framework, the agent loop is hardcoded: observe, think, act, repeat. In Amplifier, the orchestration strategy itself is pluggable.

Every other framework
# Hardcoded. Unchangeable. Theirs.
while True:
    observation = observe(env)
    thought = think(observation)
    action = act(thought)
    if action.done:
        break
Amplifier
# Your strategy. Your loop. Yours.
orchestrator.run(session)
# ReAct today
# Tree-of-thought tomorrow
# Plan-then-execute next week
# Swap it without touching the kernel

This is the most radical decision in the architecture. The Orchestrator protocol means Amplifier can adopt execution strategies that haven't been invented yet.
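A hedged sketch of what a pluggable loop looks like, with hypothetical session and strategy shapes (the real Orchestrator protocol may differ):

```python
from typing import Protocol


class Orchestrator(Protocol):
    """The execution loop is a module: the kernel knows only this contract."""

    def run(self, session: dict) -> dict: ...


class ReActLoop:
    """observe -> think -> act, packaged as a swappable module."""

    def run(self, session: dict) -> dict:
        for step in session["plan"]:
            session.setdefault("trace", []).append(("act", step))
        return session


class PlanThenExecute:
    """A different strategy, dropped in without touching the kernel."""

    def run(self, session: dict) -> dict:
        session["trace"] = [("plan", list(session["plan"]))]
        session["trace"] += [("act", step) for step in session["plan"]]
        return session


def kernel_run(orchestrator: Orchestrator, session: dict) -> dict:
    # The kernel delegates the entire loop; swapping strategies is one line.
    return orchestrator.run(session)
```

The kernel never inspects what the strategy does, so a strategy invented next year plugs in the same way.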

The Philosophy

Text all the way down

A philosophical position about transparency and human agency — not a feature.

.md
Bundles
.md
Agents
.yaml
Config
.jsonl
Logs
# You can diff your entire AI system
$ git diff agents/explorer.md
$ grep "orchestrator" config/*.yaml
$ cat logs/events.jsonl | jq .event
# Review an agent's personality in a PR
# grep your architecture
# Every decision is auditable text
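Because the logs are plain JSONL, the same audit takes a few lines of standard-library Python; the event names and fields below are invented for illustration, not Amplifier's actual schema:

```python
import json
from collections import Counter

# A few log lines as they might appear in logs/events.jsonl
# (field names are illustrative, not the real event schema)
raw = """\
{"event": "session:start", "session": "s1"}
{"event": "tool:call", "session": "s1", "tool": "grep"}
{"event": "session:end", "session": "s1"}
"""

events = [json.loads(line) for line in raw.splitlines()]
by_type = Counter(e["event"] for e in events)
print(by_type)
```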
The Bet

Structure over answers

What we don't know
  • The best orchestration strategy
  • Which LLM providers will matter in two years
  • What tools agents will need
  • How approval workflows should work
  • What context persistence looks like at scale
What we know
  • There will be providers
  • There will be tools
  • There will be orchestrators
  • There will be context
  • There will be observers

Amplifier bets on the structure of these questions — that there will be providers, tools, orchestrators, context, and observers — while leaving every answer swappable.

The center stays still so the edges can move fast.

Sources

Research Methodology

Data as of: February 26, 2026

Feature status: Active

Primary contributor: Brian Krabach (96% of 143 commits)

Research performed:

  • Source file analysis: 30 Python files in kernel
  • Line counts: ~4,356 essential kernel lines; ~7,701 total
  • Protocol enumeration: 6 types via class inspection
  • Event catalog: 50 events across 16 categories
  • Dependency audit via pyproject.toml: 5 runtime deps
  • Git log: 143 commits since 2025-10-08 (~4.6 months)
  • Zero-provider verification: grep confirmed no provider/tool implementations in kernel
  • Bundle format verification: 19+ .md agent files in foundation

Gaps and notes:

  • README claims "~2,600 lines" — actual essential kernel measures ~4,356 lines. The README figure likely reflects a narrower scope or earlier version.
  • Competitor framework sizes (LangChain ~20k-35k, CrewAI ~8k-15k, AutoGen ~30k-50k) are community estimates, not verified line counts.
  • 5 contributors total; the project is effectively single-maintainer (96% of commits by one author) and is described as such, not presented as a team effort.
  • MockProvider removal claim verified via commit bc63e37 on Day 2 of development.

Six protocol types:

Orchestrator · Provider · Tool · ContextManager · HookHandler · ApprovalProvider
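As Protocol stubs, the six contracts might look like the following; every signature here is a guess for illustration, not the real amplifier-core definitions:

```python
from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class Orchestrator(Protocol):
    def run(self, session: Any) -> Any: ...


@runtime_checkable
class Provider(Protocol):
    def complete(self, prompt: str) -> str: ...


@runtime_checkable
class Tool(Protocol):
    name: str
    def invoke(self, args: dict) -> Any: ...


@runtime_checkable
class ContextManager(Protocol):
    def persist(self, session_id: str, context: Any) -> None: ...
    def restore(self, session_id: str) -> Any: ...


@runtime_checkable
class HookHandler(Protocol):
    def handle(self, event: str, payload: dict) -> None: ...


@runtime_checkable
class ApprovalProvider(Protocol):
    def approve(self, action: dict) -> bool: ...
```

Six small shapes are the entire contract surface; everything concrete implements one of them from outside the kernel.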

"The frameworks that survive paradigm shifts are the ones that encode the right abstractions at the right level of generality."

And infinite possibility at the edges.

amplifier-core · amplifier-foundation · github.com/microsoft
