Amplifier Case Study

Teaching AI
to Have Taste

How a brainstorm session turned a passive design archive into a self-improving intelligence system — in one afternoon.

February 2026 · Design Intelligence Enhanced
The Starting Point

An honest question. A harder answer.

The Design Intelligence Enhanced bundle already had an impressive pipeline: weekly scrapes of design showcase sites, vision AI analysis of screenshots, and a growing archive of trend data.

Then someone asked the obvious question:

"How does it actually self-improve?"

The honest answer was uncomfortable:

"It gets more informed over time, but it doesn't get smarter over time."
— End-of-session reflection
The Problem

Six gaps between data and intelligence

The bundle accumulated data. It never learned from it. A forensic look at the architecture revealed six structural gaps:

1. No outcome tracking: couldn't tell if design decisions actually worked.
2. No preference persistence: every session started from zero, your taste forgotten.
3. Static knowledge base: the cognitive task catalog and observation files were hand-curated.
4. No self-modifying agents: agent instructions never evolved from experience.
5. No quality scoring: all archive entries were weighted equally, treating noise and signal the same.
6. No cross-session learning: no feedback loop between sessions whatsoever.
The Brainstorm

Two layers of learning

Through a structured brainstorm, a design emerged. Not a single monolithic "learning engine," but two complementary layers — each meaningless without the other.

Layer B — Personal Taste Profile: remembers what you like. Per-user and per-project preferences that evolve through inferred-then-confirmed suggestions. The system watches your choices, notices patterns, and proposes updates, but nothing is written without your say.

Layer B is built on top of:

Layer A — Collective Design Intelligence: knows what's happening in design. The bundle's knowledge base grows organically from real project work. When agents encounter patterns in archive data that aren't captured yet, they propose additions inline.

The key insight: B without A is a static preference file. A without B is a knowledge base nobody's taste is mapped against. Together they create the loop.
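That loop can be sketched in a few lines. Everything below is illustrative: the class names, field shapes, and threshold are assumptions for the sketch, not the bundle's actual file format.

```python
from dataclasses import dataclass, field

@dataclass
class TasteProfile:
    """Layer B: what this user likes, evolved only with confirmation."""
    # design dimension -> preferred value, e.g. "headlines" -> "serif"
    preferences: dict = field(default_factory=dict)

@dataclass
class CollectiveIntelligence:
    """Layer A: what's happening in design, grown from archive data."""
    # design dimension -> (trending value, prevalence 0.0-1.0)
    trends: dict = field(default_factory=dict)

def evolution_suggestions(profile, collective, threshold=0.7):
    """Where a strong trend diverges from a stored preference, suggest
    an evolution. B alone is static; A alone maps to nobody's taste."""
    out = []
    for dim, preferred in profile.preferences.items():
        trend = collective.trends.get(dim)
        if trend and trend[1] >= threshold and trend[0] != preferred:
            out.append((dim, preferred, trend[0]))
    return out
```

Neither layer produces a suggestion on its own; only the intersection of a stored preference and a strong trend does.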

Design Decisions

Every choice was the user's

Five questions. Five answers. Each one shaped the architecture through collaborative Q&A:

Scope
Both layers, together. Collective intelligence informs personal taste: as trends shift, the system can recommend how the user's taste should evolve to stay current.
Storage
Layered profiles. Global user baseline + per-project overrides. Your e-commerce site can be bold while your blog stays minimal. Project wins on conflicts.
Updates
Inferred, then confirmed. The system watches choices silently, surfaces patterns as proposals, but nothing gets written to your profile without your explicit approval.
Growth
Agent-driven, on demand. Knowledge base enrichment happens during real project work — not on a schedule. Aligned with what the user is actually doing.
Surface
Woven into reasoning. Taste awareness isn't a sidebar or notification. It's part of how every design agent thinks — baked into their reasoning, surfaced transparently in output.
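The storage decision above reduces to a small merge rule: start from the global baseline, let the project win on conflicts. A minimal sketch, with illustrative dictionary shapes:

```python
def resolve_profile(global_baseline: dict, project_overrides: dict) -> dict:
    """Layered resolution: the user's global baseline, with
    per-project preferences winning on any conflict."""
    return {**global_baseline, **project_overrides}

# Your e-commerce site can be bold while your blog stays minimal:
baseline = {"tone": "minimal", "headlines": "serif"}
ecommerce = resolve_profile(baseline, {"tone": "bold"})
blog = resolve_profile(baseline, {})
```

Only the overridden dimension changes; everything else falls through to the baseline.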
The Protocol

Decide. Flag. Recommend.

Every design agent now reasons through taste using three modes, each triggered by different confidence levels:

Decide
"Using a muted earth tone palette here. Your profile strongly favors this direction and it fits the editorial tone of this project."

High confidence + context aligns. The agent applies the preference and explains why. Transparent, not blocking.
Flag
"Your profile leans toward generous whitespace, but this dashboard has 12 data points that need to be scannable. Tighter spacing with clear grouping, or keep the whitespace and use progressive disclosure?"

Medium confidence, or tension between preference and context. The agent asks for a call.
Recommend
"You favor serif headlines, and there's a strong current trend toward condensed bold sans-serifs for data-dense headers. Worth considering for this project where you need density?"

Archive trend intersects with a preference. The agent suggests evolution.
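The three modes above amount to a small decision rule keyed on confidence and context. A sketch; the threshold and signal names are illustrative assumptions, not the shipped instructions:

```python
def taste_mode(confidence: float, context_aligned: bool, trend_intersects: bool) -> str:
    """Pick a reasoning mode for a taste-relevant decision."""
    if trend_intersects:
        return "recommend"  # an archive trend meets a preference: suggest evolution
    if confidence >= 0.8 and context_aligned:
        return "decide"     # apply the preference and explain why
    return "flag"           # medium confidence, or tension with context: ask
```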
The Shift

Before and after

Before
  • Every session starts from zero
  • Archive data sits passively
  • Knowledge base is hand-curated
  • Agents explain choices generically
  • No memory of user preferences
  • Gets more informed, not smarter
After
  • Profiles persist across sessions
  • Trends inform active design reasoning
  • Knowledge base grows from real work
  • Agents reason with your taste explicitly
  • Preferences evolve with confirmation
  • Gets more informed AND smarter
The Build

From design to shipped in one session

7 tasks dispatched to subagents. 5 clean commits. All markdown and YAML — no code, no new modules.

Task 1
Core instructions created — 185 lines teaching agents the decide/flag/recommend protocol, profile update proposals, and knowledge enrichment flow
Task 2
Behavior YAML authored — thin bundle definition wiring instructions into the composition system
Task 3
Registered in bundle.md — taste-awareness added as a composable sub-bundle
Tasks 4-6
8 design agents updated — art director, system architect, component designer, layout architect, responsive strategist, animation choreographer, voice strategist, research analyst
Task 7
End-to-end validation — all 8 agents wired correctly, all checks pass, 4 utility agents correctly excluded
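The Task 7 validation is essentially a set check: every design agent wired, every utility agent excluded. A sketch of that check, with hypothetical agent and behavior names:

```python
TASTE = "taste-awareness"

def validate_wiring(wiring: dict, design_agents: set, utility_agents: set) -> list:
    """Return a list of wiring errors; an empty list means all checks pass."""
    errors = []
    for agent in design_agents:
        if TASTE not in wiring.get(agent, set()):
            errors.append(f"{agent}: missing {TASTE}")
    for agent in utility_agents:
        if TASTE in wiring.get(agent, set()):
            errors.append(f"{agent}: should be excluded")
    return errors
```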
By the Numbers

What shipped

  • 2 files created
  • 9 files modified
  • 8 agents updated
  • 5 clean commits
  • 185 lines of instructions
  • 0 lines of code

Pure instruction-driven behavior. All markdown and YAML. No new modules, no new tools, no code changes.

The Process

Designed by conversation

The entire architecture emerged through structured brainstorming. Five questions, five answers. The system presented options; the user made every call.

A pattern emerged in the choices: the user consistently selected the "both, layered" option. Not the simplest path. Not the most complex. The nuanced one.

"This should be per... ugh, every time you give me a 'both' option I want to select both. It really is layered and there are per-project considerations."

That instinct — that real design systems need nuance, not binary switches — shaped every decision. Layered profiles, not flat. Inferred-then-confirmed, not silent or manual. Woven into reasoning, not bolted on as a sidebar.

The brainstorm moved from "what should self-improvement mean?" to "here's the exact file format for taste profiles" in about 20 minutes of back-and-forth. Designed in conversation, then implemented by dispatching subagents straight from the plan.

Takeaways

What others can learn

01. Instructions are infrastructure
185 lines of well-structured markdown gave 8 agents a new capability. No code required. In the Amplifier ecosystem, behavior is composition: you teach agents by writing clear instructions, not by writing code.

02. Design the learning loop, not the data store
The bundle already had data collection. What it lacked was the loop: observe, propose, confirm, remember. The self-improvement wasn't about more data. It was about closing the feedback cycle.

03. Brainstorm before you build
The brainstorm mode worked. Present options, let the user decide, validate each section before moving on. Twenty minutes of conversation produced a design that would have taken days to spec in a document-first approach.

04. Respect user agency
The system infers preferences but never writes them silently. Every profile update is proposed, explained with evidence, and only persisted on confirmation. Transparency builds trust. Trust enables the learning loop.
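That propose-then-confirm loop can be sketched in a few lines. Function names, the evidence format, and the repetition threshold are assumptions for illustration:

```python
from collections import Counter

def propose_update(observed_choices, min_count=3):
    """Infer a pattern from repeated choices; return a proposal with
    evidence, never a silent write."""
    if not observed_choices:
        return None
    value, n = Counter(observed_choices).most_common(1)[0]
    if n >= min_count:
        return {"value": value, "evidence": f"chosen {n} times recently"}
    return None

def persist_if_approved(profile, dimension, proposal, approved):
    """Only an explicit approval mutates the profile."""
    if approved and proposal:
        return {**profile, dimension: proposal["value"]}
    return profile
```

The key property: the observation step can only ever produce a proposal object; the single write path is gated on `approved`.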
Sources

Methodology

Session date: February 25-26, 2026

Bundle: design-intelligence-enhanced (Amplifier ecosystem)

Feature status: Shipped to origin, awaiting real-session validation

Artifacts produced: 5 commits (bbd8291 through 52bdb8c)

Approach: Brainstorm mode (design) → Plan-write mode (spec) → Subagent-driven execution (build) → End-to-end validation (verify)

What's next: Validate the core loop by running real design sessions. If the reasoning feels natural and profile proposals emerge organically, the foundation is sound. If not, the instructions get tuned — which is exactly the kind of iteration this system is designed for.
