# Predictions over plans
A plan says what we'll do. A prediction commits to what will change — and arranges to find out.
A roadmap is a list of features and dates. It is comfortable. It survives every quarter mostly intact, regardless of whether the features changed anyone's life. A prediction is uncomfortable. It names — in writing, before the cycle runs — what specifically will change in a named person's day, with a date someone has committed to running the check.
The corpus prefers the uncomfortable artefact.
## What a prediction commits to
Five fields, all required, none optional.
- Baseline. Number. Sample size. Date. Witnessed.
- Target. Specific number, range, or threshold.
- Check date. A calendar commitment by a named person.
- Check method. Witnessed-not-described. Same shape as the baseline.
- Owner. A named person.
A claim missing one of these is not a prediction. It is a plan in costume.
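The five fields can be sketched as a record type whose constructor refuses a claim with a missing field. This is an illustrative sketch, not part of the corpus; the class and field names are assumptions.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Prediction:
    """One prediction: five fields, all required, none optional."""
    baseline: str      # number, sample size, date; witnessed
    target: str        # specific number, range, or threshold
    check_date: date   # a calendar commitment
    check_method: str  # witnessed-not-described, same shape as the baseline
    owner: str         # a named person

    def __post_init__(self) -> None:
        # A claim missing any of these is not a prediction;
        # it is a plan in costume.
        for field in ("baseline", "target", "check_method", "owner"):
            if not getattr(self, field).strip():
                raise ValueError(f"missing {field}: this is a plan, not a prediction")

# The Hebrew-name-support example from this page, as a record:
p = Prediction(
    baseline="47-min mean grading cycle, n=12 (witnessed 2026-04-22)",
    target="<15 min mean",
    check_date=date(2026, 6, 15),
    check_method="three observation sessions",
    owner="Alex (PO)",
)
```

The point of the constructor check is the same as the prose: the schema does not accept a partial claim.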
## Why a plan is not enough
A plan answers *what will we do?* That is not the chain's question. The chain's question is *what will be different?* These look similar from a distance. They produce different features.
A planning culture asks *did we ship the things on the roadmap?* — and answers itself with a list. A prediction culture asks *did the world change in the way the brief named?* — and answers itself with a measurement against a baseline. The two cultures look the same in the meeting room and run wildly differently in the cycle.
## Plans become predictions when checks are scheduled
A plan can be made falsifiable simply by adding the four missing fields. *We will ship Hebrew name support by June* is a plan. The same statement extended:
> We will ship Hebrew name support by 2026-06-01.
> Baseline: 47-min mean grading cycle, n=12 (witnessed 2026-04-22).
> Target: <15 min mean.
> Check: three observation sessions on 2026-06-15.
> Owner: Alex (PO).

…is a prediction. The team did not plan less. The team committed more.
## When predictions go wrong
The two failure modes are equally dangerous, in opposite directions.
Soft predictions. The team writes targets they can meet without changing anything substantive. *Improve user satisfaction* is a soft prediction; the team will always be able to claim some improvement. The corpus's antidote: ranges and thresholds, with what would not be improvement explicitly named.
Vanity baselines. The baseline is the metric the team wanted to see, not the metric the activity actually produces. Time-on-task in the LMS dashboard is a vanity baseline if graders alt-tab; focused-grading minutes observed in the field is the real number. The corpus's antidote: capture baselines by observation, not by query.
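The ranges-and-thresholds antidote amounts to defining "met" before the cycle runs, so that a result at or above the threshold is explicitly not improvement, however it is narrated afterwards. A minimal sketch, with illustrative names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ThresholdTarget:
    """A target as a hard threshold, fixed before the cycle runs."""
    metric: str       # name the witnessed metric, not the dashboard one
    threshold: float  # e.g. 15.0 for a "<15 min mean" target

    def met(self, witnessed_value: float) -> bool:
        # Anything at or above the threshold is, by prior agreement,
        # not improvement -- there is no room to claim "some" improvement.
        return witnessed_value < self.threshold

# The example from this page: focused-grading minutes observed in the
# field, against a <15 min target (baseline was a 47-min mean).
target = ThresholdTarget(metric="focused-grading minutes, observed", threshold=15.0)
```

A soft prediction fails this shape twice over: it has no `threshold`, and its `metric` is usually a vanity number rather than a witnessed one.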
## See also
- Canon — Before We Build · Prediction Writing
- Practice — Writing predictions
- Principle — Not checked is the only worthless outcome
- Clinic — A brief that didn't witness