
Ilities Selection

Which non-functional requirements matter for this Epic, and to what level.

Ilities — performance, accessibility, security, learnability, durability, internationalisation, reliability — are the requirements that don't show up in the user story but that determine whether the feature lives or dies once shipped. The corpus pattern: ilities are selected, not assumed. The selection is recorded in the brief and (where adjusted) in an ADR.

The standard list

```text
Performance          (latency, throughput, resource use)
Reliability          (availability, fault tolerance)
Security             (auth, authz, data protection, audit)
Privacy              (PII handling, retention, deletion)
Accessibility        (WCAG, keyboard, screen reader, contrast)
Internationalisation (locale, RTL, unicode forms)
Learnability         (first-use, no-training adoption)
Maintainability      (code clarity, deletability)
Operability          (monitorability, runbook coverage)
Durability           (data loss tolerance)
Scalability          (load shape, growth path)
```

The default and the deviation

Each project has a defaults table. We do not write it down once and forget it. It lives in a top-level ADR, and every brief either confirms the defaults or names the deviation.

```text
Project: Grading Flow v2

Default ilities:
  Performance          <200ms p95 on read endpoints
  Reliability          99.9% monthly availability per ADR-12
  Security             Auth required; no PII in logs; audit logs retained 90d
  Privacy              Hebrew names, English names — no other PII collected
  Accessibility        WCAG 2.2 AA
  Internationalisation UTF-8 throughout; Hebrew + English
  Learnability         Graders adopt without training (target: <10 min first use)
  Maintainability      Standard project conventions
  Operability          Standard observability per ADR-08
  Durability           No data loss; standard backup policy
  Scalability          Up to 50 concurrent graders per customer
```

When a brief deviates ("we are accepting <500ms p95 for this batch endpoint"), the deviation is recorded in the brief and, if structurally significant, gets its own ADR.
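The defaults-plus-deviation pattern can be sketched as data. This is a hypothetical illustration, not project tooling; the field names and `select_ilities` helper are invented for the sketch:

```python
# Project defaults (abridged from the table above); a brief either
# confirms these or supplies explicit deviations.
DEFAULTS = {
    "performance": "<200ms p95 on read endpoints",
    "reliability": "99.9% monthly availability per ADR-12",
    "accessibility": "WCAG 2.2 AA",
    "scalability": "up to 50 concurrent graders per customer",
}

def select_ilities(deviations: dict[str, str]) -> tuple[dict[str, str], list[str]]:
    """Merge per-Epic deviations over the defaults; return the effective
    selection plus the deviation lines to record in the brief/ADR."""
    selected = {**DEFAULTS, **deviations}
    recorded = [
        f"DEVIATION {name}: {value} (default: {DEFAULTS[name]})"
        for name, value in deviations.items()
        if name in DEFAULTS and value != DEFAULTS[name]
    ]
    return selected, recorded

# An Epic that relaxes only the performance default:
selected, recorded = select_ilities(
    {"performance": "<500ms p95 on the batch endpoint"}
)
```

The point of the shape: a deviation is a first-class record, not a silent override, so the brief always shows both the chosen value and the default it replaced.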

How to read each ility for an Epic

Each ility translates into questions the trio answers before code begins.

Performance

  • What is the latency target? p50, p95, p99.
  • What is the throughput shape? Burst vs steady?
  • What load profile do we expect this Epic to add?
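The p50/p95/p99 targets above are checkable against real samples. A minimal nearest-rank sketch (deliberately simple; not the project's metrics pipeline):

```python
import math

def percentile(samples: list[float], q: float) -> float:
    """Nearest-rank percentile: the smallest sample such that at least
    q% of the samples are <= it. Monotone, no interpolation."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(q / 100 * len(ordered)))
    return ordered[rank - 1]

# One slow outlier barely moves p50 but dominates p95/p99 —
# which is why the target is stated per-percentile, not as a mean.
latencies_ms = [12, 15, 18, 22, 30, 45, 80, 120, 180, 950]
p50, p95, p99 = (percentile(latencies_ms, q) for q in (50, 95, 99))
```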

Reliability

  • What is acceptable downtime?
  • What dependencies introduce failure modes?
  • What is the failure-degradation pattern? Fail closed, fail open, fail soft?
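An availability target is easiest to reason about as the downtime budget it implies. A small sketch of the arithmetic:

```python
def downtime_budget_minutes(availability: float, days: int = 30) -> float:
    """Minutes of allowed downtime per period for an availability target."""
    return days * 24 * 60 * (1 - availability)

# 99.9% over a 30-day month = 43200 * 0.001 = 43.2 minutes of downtime.
budget = downtime_budget_minutes(0.999)
```

Seen this way, "what is acceptable downtime?" becomes concrete: 99.9% buys roughly 43 minutes a month, which either does or does not cover the failure modes the dependencies introduce.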

Security

  • What auth and authz changes does this introduce?
  • Does this Epic touch PII or new sensitive data?
  • What audit logging is required?
  • Has the threat model been re-read?
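One way to make "no PII in logs" structural rather than aspirational is an allowlist of log fields, so sensitive values never reach a log line. A hypothetical sketch; the field names are illustrative, not the project's schema:

```python
# Only fields on this allowlist may appear in a log record.
ALLOWED_FIELDS = {"event", "epic", "grader_id_hash", "latency_ms", "status"}

def scrub(record: dict) -> dict:
    """Drop any field not on the allowlist before the record is logged."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

# A grader's name (PII) is silently dropped; allowlisted fields pass through.
safe = scrub({"event": "grade.saved", "grader_name": "דנה לוי", "latency_ms": 87})
```

An allowlist fails closed: a new field is excluded from logs until someone deliberately adds it, which matches the fail-closed default for security decisions.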

Accessibility

  • Keyboard-only navigation across the new flows?
  • Screen reader output verified?
  • Contrast ratios?
  • RTL behaviour validated?
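The contrast question has a precise answer: WCAG 2.x defines relative luminance and a contrast ratio, and AA requires at least 4.5:1 for normal text. A sketch of the computation:

```python
def _channel(c: float) -> float:
    """Linearize one sRGB channel (0-255) per the WCAG formula."""
    c /= 255
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_channel(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio, 1:1 to 21:1; AA needs >= 4.5 for normal text."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast_ratio((0, 0, 0), (255, 255, 255))  # black on white: 21:1
```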

Internationalisation

  • Unicode forms fully supported (NFC/NFD/NFKC/NFKD)?
  • Locale-aware sorting and search?
  • RTL layout?
  • Date/number format handling?
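Normalization bites exactly in mixed Hebrew/English data: the same visible text can arrive with combining marks in different orders or forms, and a byte comparison then fails. A minimal sketch of a normalization-insensitive search key, using Python's standard `unicodedata`:

```python
import unicodedata

def search_key(s: str) -> str:
    """NFC-normalize and casefold so strings that render identically
    compare equal regardless of Unicode form or letter case."""
    return unicodedata.normalize("NFC", s).casefold()

# The same shin + shin dot + qamats, with the combining marks typed
# in different orders — different bytes, identical rendering:
a = "\u05e9\u05c1\u05b8"  # shin, shin dot, qamats
b = "\u05e9\u05b8\u05c1"  # shin, qamats, shin dot
# a != b byte-wise, but canonical reordering makes search_key(a) == search_key(b)
```

Locale-aware sorting is a separate problem (collation, not normalization) and needs `locale` or ICU; the key above only fixes equality and search.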

Learnability

  • First-use onboarding?
  • Empty states?
  • Help text in domain language?

Maintainability

  • Standard project patterns or new ones?
  • Test coverage adequate?
  • Dependencies introduced reviewed?

Operability

  • New metrics, traces, log fields?
  • New runbook entry?
  • Monitor/alert rules updated?

Durability

  • Data loss tolerance?
  • Backup and restore tested?

Scalability

  • Growth headroom for the next 12 months?
  • Hot-spot risk?

The UX/product ilities

Some ilities live more on the product side than the system side. The corpus calls these UX/product ilities and the Designer + PO own them.

  • Learnability — can a new person use this without training?
  • Content clarity — does the on-screen language match the person's domain language?
  • Responsiveness — does the experience hold across the devices the person uses?
  • Comprehension — does the person know what state they are in at any moment?

These are real ilities. They are checked in design review and in QA. A feature that ships with broken learnability is a feature that produces support tickets, regardless of how reliable the backend is.

What gets recorded where

Selection               Where it lives
Project defaults        A top-level ADR in /docs/architecture/adr/ADR-001-default-ilities.md
Per-Epic confirmation   The Feature Brief or Technical Design Brief
Per-Epic deviation      A new or referenced ADR
Per-story specifics     The story's acceptance criteria, often as Gherkin scenarios
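A per-story ility typically lands as an acceptance criterion. A hypothetical Gherkin scenario (names and numbers are illustrative) encoding a relaxed batch-latency deviation:

```gherkin
Feature: Batch grade submission

  Scenario: Batch endpoint meets the relaxed latency deviation
    Given 50 concurrent graders are submitting batches
    When a grader submits a batch of 30 grades
    Then the p95 response time is under 500ms
    And no submitted grade is lost
```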

When ilities are skipped

The pattern is predictable. A team ships a feature, it works in staging, it fails on the third day in production because of a load profile the brief did not name. The postmortem traces the gap to the ilities table that was left as default when it should have been adjusted.

The corpus pattern: if a feature touches a load path that hasn't been seen at scale, the Tech Lead writes a one-line ility deviation in the brief, even if no number is yet known. Naming the unknown is the cheapest insurance against the missed unknown.

Part 9 — Slicing & Prioritization →

200apps · How We Work · NWIRE