Blueprints are out, wind tunnels are in
LLMs collapse the product loop
Two weeks, two side projects, two very different partners: one deep in code, the other steeped in business ops. Building with both made something obvious: large language models aren’t just engineering accelerants. They collapse the entire product loop.
For the past decade, most teams have organized around a trinity – PM, UX, Eng. Because engineering was scarce and expensive, we optimized for engineering leverage. The culture drifted toward documents and handoffs: PRDs, mocks, design docs, reviews. “Handoff” became the milestone; building became the after-party. As a result, learning from customers slid months to the right.
Somewhere in there, “throwaway work” became a slur. Prototyping without a path to production was treated as waste. The best engineers quietly ignored that taboo, hacked something together, and came back with actual signal. Everyone else waited for the next review.
LLMs change the math. They parallelize the trinity and radically reduce the cost of being wrong.
- PM can draft a crisp one-pager, expand risks and edge cases, and generate three experiment plans in an afternoon.
- UX can turn that one-pager into clickable flows and alternative microcopy the same day.
- Engineering can scaffold a fake backend, wire in guardrails, and instrument a demo by day two.
The point isn’t that AI writes production code; it’s that it shrinks time-to-signal. Treat prototypes as wind-tunnel models: not for flying, for learning how air hits your wing.
A faster loop, by role
- PM: Use LLMs to create the first PRD draft, enumerate unknowns, and produce testable hypotheses. Generate interview guides and summary memos so insights move.
- UX: Translate hypotheses into mid-fi flows, produce microcopy variants, and run task simulations. Build a click/tap-through that captures the core choice you need users to make.
- Eng: Ask the model for scaffolding, contract tests, and a stubbed data layer. Auto-generate a design-doc outline from code comments as you go. Instrument from day zero.
Run a 72-hour Learning Sprint: build just enough to put in front of five users. Kill fast or iterate. “Throwaway” code is paid research.
Change the scoreboard
If we continue to (over)optimize for engineering utilization, we will keep writing documents. If we optimize for learning, we will ship smaller bets – faster. A few metrics that reorient the scoreboard:
- cycle time for a validation signal
- learning velocity (meaningful insights / week)
- percent of bets killed quickly (and the cycle time to kill them)
- prototype half-life (time until learnings hit prod)
- alignment drift (features without a link to a top-level outcome)
Make OKRs smaller and more numerous. Measure the loop, not the launch. Celebrate the team that invalidates a shiny idea in three days.
Regulated shouldn’t mean slow
Finance and health have longer cycles, but the pattern should hold.
- Try compliance sandboxes with anonymized / synthetic data; log prompts and outputs
- Learn from the crypto gang: let’s enable policy-as-code gatekeeping for prototypes
- Generate pre-read packs for the legal and compliance gang to bring them along
- Invite compliance into witness tests so they can try to break the prototype
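Policy-as-code gatekeeping can start as a checklist that runs in CI rather than in a meeting. One hypothetical sketch — the manifest shape and rule names are invented for illustration, not a real compliance framework:

```python
# Hypothetical prototype manifest a team might commit alongside a demo.
prototype = {
    "data_source": "synthetic",   # "synthetic" | "anonymized" | "production"
    "logs_prompts": True,
    "external_sharing": False,
}

# Each rule: (description, predicate over the manifest). Invented examples.
RULES = [
    ("no production data in prototypes",
     lambda p: p["data_source"] in ("synthetic", "anonymized")),
    ("prompts and outputs must be logged",
     lambda p: p["logs_prompts"]),
    ("no external sharing before review",
     lambda p: not p["external_sharing"]),
]

def gate(p: dict) -> list[str]:
    """Return violated rules; an empty list means cleared to demo."""
    return [desc for desc, ok in RULES if not ok(p)]

violations = gate(prototype)
print("cleared" if not violations else violations)  # → cleared
```

A failing gate blocks the demo automatically, which turns the compliance conversation from "may we?" into "here is the rule we'd need to change."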
For the long term, we have to bring the regulators along too, and we need them to move from blockers to collaborators.
Make product fun again
The joy of product is the rapid loop: talk to users, build a little, learn a lot. LLMs give that loop back. Use them to parallelize the trinity, to lower the cost of being wrong, and to move learning left. Optimize for time-to-signal, and the rest of the process starts behaving.
Ship the loop, not the doc.
#ai #aiInProductManagement #design #eng #engineering #llmPrototyping #llms #okrs #pm #pmUxEngineering #prd #product #productLifecycle #productManagement #prototyping #startups #ux #uxr #velocity #visualDesign