A story about human judgment in the age of AI-generated pull requests
A product manager looks at a pull request. One line of code. A colleague wants to add a memory system to a research pipeline built on one principle: every piece of research shows its work.
He's not technical. He knows that makes him
exactly the person most likely to approve
something he shouldn't.
A silent memory system that would watch every session, learn from every interaction, and never tell the user it was doing any of it.
Chris mentioned, almost offhand, that he constantly runs test sessions through his specialists. Competitive analyses. Storyteller experiments. Junk data by design.
A silent memory system doesn't know the difference. It would ingest test noise as real knowledge and feed it back into future work.
The tool's memory would be poisoned from day one.
Context no automated review could have — he knew how the product actually gets used. The test sessions, the junk data, the daily reality.
Lenses he didn't naturally have — CTO-level dependency analysis, CPO-level philosophy checks. Structured ways to see what one line hides.
Together, they caught something
neither would have caught alone.
“The name doesn't reflect the user's job.”
Nobody's job is reviewing PRs.
The job is making good decisions
about what enters your product.
lines. A vision document.
Before writing a single line of code.
His own methodology — doc-driven dev.
Define the problem, the positioning, the principles.
Then build.
“AI amplifies velocity
but fragments alignment.
People are doing more
and talking less.”
Contributors arrive with polished, AI-generated PRs that look professional but conflict with your product philosophy, because their AI optimized for their task, not your coherence.
The recipe asked for APPROVE/REJECT verdicts, but the vision document explicitly said the tool never decides.
Without the vision doc, that ships.
“I'm a PM and not as technical.
Are we overcomplicating this?”
Fixed the contradiction.
Documented the rest as backlog.
Shipped it.
A non-technical PM who wrote no code produced a working review recipe and a vision document.
He needed to review a pull request.
The process turned out to be the product.
Data as of: March 5, 2026
Type: First-person session narrative
Author: Chris Park (product manager, sole session participant)
All numbers cited in this deck come from the session account:
Feature status: New — Change Advisor recipe and vision doc created in this session
Repository: amplifier-change-advisor (Chris Park / personal project)
Note: No automated research was performed for this deck. This is a case study based on a provided session narrative, not a data-mined feature story. PR analysis numbers (68 files, etc.) originate from AI analysis within the described session and have not been independently re-verified for this presentation.
The most valuable moment — the one that killed
the integration — came from a conversation, not a scan.
The AI didn't replace judgment.
It created the conditions where
judgment had somewhere to land.