While Gaming with My Kids
I dropped Amplifier on a barely-set-up Raspberry Pi 5 with an unknown video conference speaker/microphone puck.
As you can guess, I had it configure the rest of the Pi setup for me...
Including probing and figuring out the hardware through conversation.
Then it did the rest.
The result: a complete voice interface built through conversation.
The Pi was sitting next to me, idle.
During a ~15 second pause in action, I gave it a task:
“Go build the Amplifier service that I can run on each of my boxes so that they can talk to each other.”
— Me, during a door hack in GTFO

And got back to the game.
When deployed to the DGX Spark, the agent recognized what it was working with:
“This has a killer GPU and excessive memory.”
“We should offload the voice processing to this device to use local models.”
— Bonus observation from the agent

The AI wasn’t just following instructions. It was thinking about architecture.
It asked about deploying to another box. I clarified:
“Yeah, the general idea is that I want you to be able to run a lightweight version on this one with the voice features, but then distribute work to the others to do — you are more just a coordinator box, then.”
So then it was off...
It told the first box it set up to go deploy to the second box!
Autonomous agent-to-agent delegation.
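The narrative never shows Amplifier's actual code, but the coordinator/worker split it describes — voice stays on the lightweight Pi, heavy work goes to boxes that advertise the right capability — can be sketched roughly. All names here (`Box`, `Coordinator`, the role strings) are invented for illustration, not Amplifier's API:

```python
from dataclasses import dataclass, field

@dataclass
class Box:
    """A machine in the fleet, tagged with the roles it can handle."""
    name: str
    roles: set = field(default_factory=set)

    def handles(self, role: str) -> bool:
        return role in self.roles

class Coordinator:
    """Lightweight coordinator: keeps voice local, farms other work out."""
    def __init__(self, local: Box, workers: list):
        self.local = local
        self.workers = workers

    def dispatch(self, task_role: str) -> str:
        # Tasks the coordinator box itself can handle (e.g. voice) stay local.
        if self.local.handles(task_role):
            return self.local.name
        # Otherwise, delegate to the first worker advertising that capability.
        for w in self.workers:
            if w.handles(task_role):
                return w.name
        raise LookupError(f"no box handles {task_role!r}")

# Hypothetical fleet matching the story: a Pi 5 coordinator and a DGX Spark worker.
pi = Box("pi5", {"voice"})
spark = Box("dgx-spark", {"gpu-inference", "dev"})
coord = Coordinator(pi, [spark])
```

Under this sketch, `coord.dispatch("voice")` resolves to the Pi and `coord.dispatch("gpu-inference")` to the Spark — the "GPU and excessive memory" observation becomes just another role advertised by the bigger box.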
Then I came back later and had it split some dev work between them.
Craziness!
It figured out the multi-agent coordination pattern on its own.
The casual future of AI infrastructure
Data as of: February 2026
Feature status: Experimental
Research performed:
find ~/dev/ANext -maxdepth 2 -name "*distributed*" -o -name "*network*" — returned network-monitor.py only

Gaps: No git log data, commit counts, or PR history available. All claims are from the original first-person narrative account. No independent verification of timeline or technical details was possible.
Primary contributors: Brian Krabach (robotdad) — sole author of the narrative and project