More Amplifier Stories
Amplifier

Infrastructure & Automation

The Self-Maintaining
Docs System

How Amplifier used itself to build a pipeline that
keeps documentation truthful — automatically, every day.

February 20, 2026  ·  Production Deployment

The Hook

Documentation
lies.

The moment code ships, the docs start drifting.

Model names go stale. Config keys get renamed. Default values change. Nobody has time to keep up. The result: engineers trust the code, not the docs. Docs become decoration.

Sound familiar?

API docs reference a renamed endpoint

Config example uses a deleted key

Model name in the quickstart is two versions old

Filed issue sits open for 3 months

Act 1  —  The Old Way

Manual updates don't scale.
They don't even keep up.

The manual process

  • 1 Engineer notices a doc is wrong
  • 2 Files an issue (if they have time)
  • 3 Issue sits in the backlog
  • 4 Someone eventually updates it
  • 5 Other sections stay wrong

Manual updates: 30–60 min per change set, done infrequently, always incomplete. Engineers route around docs. Trust erodes.

Amplifier's scale

34
source repos
83
documented sections

Keeping 83 sections accurate across 34 moving repos manually would require constant vigilance by a dedicated person. Nobody does it consistently. Nobody can.

There had to be a better way.

We decided to build it — using Amplifier itself.

Act 2  —  The Meta Moment

We used Amplifier to automate
Amplifier's own documentation.

In a single session, Amplifier designed the system, built the plan, executed every task, diagnosed its own failures, fixed them, deployed the pipeline, and ran it.

Brainstorm mode
Write-plan mode
Execute-plan mode
Self-diagnosis
Deployed & running

One Session — Start to Finish

Amplifier built the system
that maintains Amplifier.

  • 1 Designed the architecture
    Brainstorm mode → validated design doc
  • 2 Built the implementation plan
    Write-plan mode → 12 scoped tasks
  • 3 Executed all 12 tasks
    Execute-plan mode ran autonomously
  • 4 Diagnosed its own failures
    Session analyst agent found 4 interference points
  • 5 Fixed the interference points
    Recipe author agent applied targeted fixes
  • 6 Deployed the CI pipeline
    GitHub Actions workflow committed and live
  • 7 Ran it — and it worked
    342 claims verified. 1 error found. Fixed. Committed.

"The system ate its own cooking."  —  Amplifier used the same agentic loop to build the automation that now runs daily.

Act 3  —  How It Works

Five stages. Zero human intervention.

🔍
Detect
Clone 34 repos. Compute content hashes. Diff against baseline. Only process what changed.
✏️
Regenerate
AI surgically rewrites only drifted sections. Everything accurate stays untouched.
🔬
Verify
Second AI pass checks every factual claim against source code. Every commit is evidence-grounded.
🚦
Filter
Cosmetic prose rewrites rejected. Only meaningful content changes get through.
✅
Commit & Alert
Changes commit atomically with updated hashes. Missing sources become tracked GitHub issues.

Detect

Content hashing means only genuinely changed sections trigger downstream work. No wasted compute.
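A minimal sketch of what hash-based drift detection could look like, assuming sections are keyed by name; the hashing scheme and function names are illustrative, not Amplifier's actual implementation:

```python
import hashlib

def section_hash(text: str) -> str:
    """Stable content hash for one documentation section."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def detect_drift(sections: dict[str, str], baseline: dict[str, str]) -> list[str]:
    """Return names of sections whose hash differs from the stored baseline.
    Sections with an unchanged hash trigger no downstream work."""
    return [name for name, text in sections.items()
            if section_hash(text) != baseline.get(name)]
```

Only the names returned here flow into the regenerate stage; everything else is skipped outright.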

Verify

Not just AI-generated — AI-verified. Wrong model name? Fix it. Stale config key? Remove it.
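The real verification step is a second AI pass, but the core idea can be sketched as grounding each claimed value in the source it describes; the data shapes below are assumptions for illustration only:

```python
def verify_claims(claims: dict[str, str], source: str) -> dict[str, bool]:
    """Map each factual claim (e.g. a model name or default value) to
    whether it literally occurs in the source code it describes."""
    return {name: value in source for name, value in claims.items()}
```

A claim that fails this check, like a stale model name, is exactly what the pipeline rewrites before committing.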

Filter bypass

Factual corrections skip the semantic diff threshold. A one-character model name fix ships regardless.
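One way to sketch the filter, assuming a simple textual-similarity threshold; the real semantic diff is likely more sophisticated, and the 0.95 cutoff is invented for illustration:

```python
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.95  # assumed cutoff, not Amplifier's real value

def should_commit(old: str, new: str, is_factual_fix: bool) -> bool:
    """Reject cosmetic rewordings, but let factual corrections through
    regardless of how small the textual change is."""
    if is_factual_fix:
        return True  # a one-character model-name fix ships no matter what
    similarity = SequenceMatcher(None, old, new).ratio()
    return similarity < SIMILARITY_THRESHOLD
```

Note the asymmetry: a tiny factual fix always ships, while a large but meaning-preserving rewording is dropped.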

Act 4  —  First Real Run

On day one, it caught something
nobody asked it to find.

Before — what was committed
# providers/anthropic.md
## Recommended model

model: claude-opus-4-1
context_window: 200k
supports_vision: true

Wrong model name. Silent for weeks.

Auto-fixed
After — pipeline commit
# providers/anthropic.md
## Recommended model

model: claude-opus-4-6
context_window: 200k
supports_vision: true

Verified against source. Committed automatically.

342
claims evidence-checked
341
verified correct
1
factual error found & fixed

Act 5  —  The Outcome

The numbers that matter.

34
source repos monitored daily
83
sections tracked by hash
<24h
until any drift is corrected
0
human hours for routine updates
12
tasks executed autonomously to build it
4
interference points self-diagnosed and fixed
Runs every day at
6:00 AM UTC
· No human required · Scales automatically as the codebase grows
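The daily trigger in GitHub Actions would look something like this; the workflow name, job layout, and CLI invocation below are assumptions, since the story doesn't show the actual file:

```yaml
name: docs-sync
on:
  schedule:
    - cron: "0 6 * * *"   # every day at 6:00 AM UTC
  workflow_dispatch: {}    # allow manual runs too
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: amplifier run docs-sync   # hypothetical CLI entry point
```

Because the schedule lives in the repo alongside the recipe, adding a new source repo changes nothing here; the detect stage simply picks it up on the next run.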

Time Saved

From hours of manual effort
to zero.

Before automation

⏱ 30–60 min per change set

Manual updates done infrequently, always incomplete

👁 Human vigilance required

Someone has to notice, file, triage, and assign each update

🕳 Errors ship silently

No systematic verification — wrong info stays until someone notices

After automation

⚡ Zero human time

Routine updates handled entirely by the daily pipeline

🔁 Daily, automatic, complete

Every drift caught within 24 hours across all 34 repos

🔬 Evidence-backed commits

10+ factual errors caught in the first session. Nothing ships unverified.

Trust

Docs you can actually trust.

🔗

Evidence-backed

Every committed doc is verified against source code — not just AI-generated, but AI-verified. No claim ships without a source to back it.

🚦

Factual first

The cosmetic filter won't stop a one-character model name fix. Factual corrections bypass the threshold. Accuracy always wins over brevity.

🔔

Nothing disappears silently

Missing sources surface as tracked GitHub issues. Every warning has an owner and a paper trail. The system tells you what it couldn't fix.

📊

From decoration to infrastructure

Documentation isn't a side project anymore. It's a first-class artifact with the same integrity guarantees as the code itself. When the pipeline says it's correct, it's correct.

The Closer

This isn't a docs tool.
It's a pattern.

Any artifact that should stay in sync with code can be maintained the same way.

The system Amplifier built for itself can be replicated across any project that has the same problem: things that should reflect the code, but don't.

What else fits the pattern?

📖

API references

Endpoints, parameters, response schemas

📋

Runbooks

Deployment steps that reflect actual infra config

📝

Changelogs

Auto-generated from commits, verified against releases

🏗

Architecture diagrams

Service maps that reflect the actual running system

The Question Worth Asking

The question isn't whether AI
can maintain your documentation.
It already can.

The question is: what else should it be maintaining?

Built with

Amplifier — agentic automation for engineering teams

Status

Production · Running daily

Deployed

February 20, 2026

Try It

Where to go next.

For builders

The pipeline is fully open

The docs-sync recipe, CI workflow, evidence validator, and semantic diff filter are all part of the Amplifier ecosystem. Fork, adapt, run it on your own repos.

docs-sync recipe
evidence-validator
semantic-diff
📑

Design document

The brainstorm-mode output that seeded the implementation. Read the full architecture decision.

🔁

Recipe: docs-sync

The Amplifier recipe that runs daily. Configurable per-repo with custom source mappings.

📈

DOC_GENERATION.md

The living spec for how Amplifier's own documentation is generated and maintained.

💬

Ask Amplifier

Start a session: "Set up docs-sync for my project"

More Amplifier Stories