
Overview

Audience: UX and Front-End Engineers (with Product Owner collaboration)
Scope: Implementing Storybook stories from documented Use Cases for any GEN::ENT-DA entity domain
Reference project: Business Affiliate UX (PR #52 on Arda-cards/ux-prototype)


This guide describes the end-to-end workflow for turning a set of documented Use Cases and Scenarios (in the documentation repository) into a complete Storybook story suite (in ux-prototype). You drive the process through four phases, collaborating with the Product Owner at approval gates.

| Phase | Name | Your Role | PO Role | Duration Estimate |
| --- | --- | --- | --- | --- |
| A | Scope Analysis & Story Specification | Analyze docs, draft specs | Decides scope, reviews specs | 1 session |
| B | Wave Planning & Plan Generation | Build dependency graph, plan waves | Approves wave structure | 1 session |
| C | Agent Team Implementation | Orchestrate agents, verify waves | Monitors, resolves blockers | 1-3 sessions |
| D | Stabilization, PR & Documentation | Stabilize CI, open PR | Reviews PR, approves merge | 1 session |

Approval gates between phases are on by default. The PO may waive specific gates at project start by declaring: “Phases A-B are gated, C-D are autonomous” (or any combination). Waived gates still produce their artifacts — they just don’t block progression.


Before starting, ensure:

  1. Use Case documentation exists in documentation/src/content/docs/product/use-cases/ for the target entity domain. Each Use Case should have numbered scenarios (e.g., REF::BA::0001::0001.UC).
  2. General behaviors are documented — the entity domain implements GEN::ENT-DA (or GEN::ENTSUB for subordinate entities), and those generic behaviors are already specified.
  3. The ux-prototype repository has the Storybook infrastructure in place (Vite, React, Tailwind, MSW, fullAppProviders decorator).
  4. A product release name is known (e.g., “MVP2”) for priority annotation of in-scope scenarios.

Phase A: Scope Analysis & Story Specification


Goal: Define what’s in scope, what’s deferred, and produce detailed specifications for each story.

Read all Use Case documents for the target domain, cross-referencing with:

  • The generic entity behaviors (GEN::ENT-DA, GEN::ENTSUB)
  • Any existing Storybook stories in ux-prototype
  • The backend API shape (for MSW handler design)

For each Use Case and scenario, decide: in scope, deferred, or simplified. Collaborate with the Product Owner on these decisions.

Document each decision using the Scope Decision format:

```
### SD-N: <Title>
<One-sentence decision statement>
**Rationale**: <Why this decision was made — user value, technical complexity, dependency reasoning>
```

Typical decisions for an entity-domain project:

| Category | Question | Example from BA project |
| --- | --- | --- |
| Layout | Detail panel structure (tabs vs. scroll)? | SD-1: Continuous scroll with collapsible sections |
| Tabs | Which detail tabs are in scope? | SD-2: Single Details tab only |
| Subordinates | Subordinate entity management? | SD-3: Silent VENDOR assignment, no role UI |
| Delete | Referential integrity warnings? | SD-4: Simple confirmation, no warnings |
| History | Version history and restore? | SD-5: Deferred entirely |
| API | Which MSW handlers are needed? | SD-6: Lookup handler for name search |

Template: Scope Analysis | Example: Scope Decisions

List every scenario from the documentation, marking each as in-scope or deferred. This becomes the source of truth for story count and wave sizing.

For each in-scope scenario (or group of related scenarios), write a story specification covering:

  1. Story variants: Interactive, Stepwise, Automated (for multi-step workflows) or single-variant with play function (for single-interaction stories)
  2. Shared infrastructure: Types, mock data, MSW handlers, shared components
  3. Play function assertions: Step-by-step interaction sequence with specific queries and expected outcomes
  4. File organization: Directory structure under src/use-cases/reference/<domain>/
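
The four parts of a story specification can be captured in a small record. This is a minimal sketch; the `StorySpec` field names and the `storyDirectory` helper are illustrative, not a required schema.

```typescript
// Hypothetical shape for one story specification record.
interface StorySpec {
  scenarioId: string;                                  // e.g. "REF::BA::0001::0001.UC"
  variants: Array<"Interactive" | "Stepwise" | "Automated">;
  sharedInfra: string[];                               // types, mock data, MSW handlers, components
  playAssertions: string[];                            // ordered interaction/assertion steps
  directory: string;                                   // under src/use-cases/reference/<domain>/
}

// Derive the on-disk directory from the domain and story name
// (assumed naming; adjust to the project's conventions).
function storyDirectory(domain: string, storyName: string): string {
  return `src/use-cases/reference/${domain}/${storyName}`;
}

const spec: StorySpec = {
  scenarioId: "REF::BA::0001::0001.UC",
  variants: ["Interactive", "Stepwise", "Automated"],
  sharedInfra: ["mock affiliate data", "MSW lookup handler"],
  playAssertions: ["open detail drawer", "expect name field visible"],
  directory: storyDirectory("business-affiliate", "browse-affiliates"),
};
// spec.directory === "src/use-cases/reference/business-affiliate/browse-affiliates"
```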

Template: Story Specification | Example: Story Spec — Pagination

Document domain-specific decisions that apply across all stories:

  • MSW handler registry (routes, latencies, error shapes)
  • Shared component registry (what’s reused, what’s new)
  • Mock data requirements (count, variety, edge cases)
  • Naming conventions for files and Storybook titles
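
A naming convention entry in the conventions document can be made executable so agents apply it uniformly. A minimal sketch, assuming a "Use Cases/" sidebar root and a numeric spec-ID prefix; both are illustrative, not fixed project conventions.

```typescript
// Hypothetical helper: derive a Storybook sidebar title from the domain,
// a spec-ID prefix, and a human-readable scenario name.
function storybookTitle(domain: string, specId: string, name: string): string {
  return `Use Cases/${domain}/${specId} ${name}`;
}

const title = storybookTitle("Business Affiliate", "0003", "Edit Affiliate");
// "Use Cases/Business Affiliate/0003 Edit Affiliate"
```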

Template: Application Conventions

Gate A: Get PO sign-off on scope decisions and story specifications.


Exit criteria:

  • All scope decisions documented with rationale
  • Scenario inventory complete with in/out classification
  • Story specifications written for all in-scope stories
  • Application conventions document captures shared infrastructure plan

Phase B: Wave Planning & Plan Generation

Goal: Organize stories into dependency-ordered waves and generate individual implementation plans.

Identify which stories depend on shared infrastructure built by other stories. Common patterns:

  • Foundation story: Builds all _shared/ infrastructure (types, mock data, MSW handlers, page component). Always Wave 0.
  • Critical unlock: A single story that builds a shared component (e.g., detail drawer) that multiple later stories depend on. Usually a dedicated wave.
  • Parallel-safe stories: Stories that create new files without modifying shared infrastructure. Can run in parallel with worktrees.
  • Sequential stories: Stories that modify the same shared files (e.g., the page component). Must run in sequence.

Organize into waves following the dependency graph:

Wave 0 (foundation)
├── Wave 1 (parallel: browse/search stories)
└── Wave 2 (sequential: critical unlock)
    ├── Wave 3 (sequential: create + edit happy paths)
    └── Wave 4 (parallel: error variants)
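
Wave assignment from a dependency graph is a topological layering: each wave contains every story whose dependencies are all satisfied by earlier waves. A minimal sketch; the story names and dependency edges are illustrative.

```typescript
// Group stories into dependency-ordered waves. `deps` maps each story to the
// stories it depends on; a cycle means the plan is invalid.
function planWaves(deps: Record<string, string[]>): string[][] {
  const waves: string[][] = [];
  const done = new Set<string>();
  let remaining = Object.keys(deps);
  while (remaining.length > 0) {
    const wave = remaining.filter((s) => deps[s].every((d) => done.has(d)));
    if (wave.length === 0) throw new Error("dependency cycle in story plan");
    wave.forEach((s) => done.add(s));
    remaining = remaining.filter((s) => !done.has(s));
    waves.push(wave);
  }
  return waves;
}

const waves = planWaves({
  foundation: [],                 // builds _shared/ infrastructure
  browse: ["foundation"],
  search: ["foundation"],
  drawer: ["browse"],             // critical unlock
  create: ["drawer"],
  errors: ["create"],
});
// waves === [["foundation"], ["browse", "search"], ["drawer"], ["create"], ["errors"]]
```

Stories that land in the same wave with no shared files (here, browse and search) are the parallel-safe candidates for worktrees.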

For each wave, specify:

  • Stories included
  • Parallelism: How many agents can work simultaneously
  • Worktree strategy: Whether agents need isolated worktrees or work on the integration branch
  • Exit criteria: What must pass before the next wave starts

Template: Implementation Plan | Example: Wave Structure

For each story, generate a detailed implementation plan covering:

  1. Preconditions: What infrastructure must exist
  2. Element classification: Reused as-is, modified, created
  3. File inventory: Create/modify/delete
  4. Implementation steps: Numbered steps with verification
  5. Risk notes: What could go wrong and mitigations

Template: Story Plan | Example: Individual Plan — Pagination

After generating plans in batches, run audit passes to verify:

  • No infrastructure gaps (every handler/component referenced is actually built by a prior story)
  • No file conflicts between parallel stories in the same wave
  • Consistent naming and import paths
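
The file-conflict audit can be mechanical: flag any file that two stories in the same parallel wave both plan to touch. A minimal sketch; the input shape (story name to planned file list) and the paths are illustrative.

```typescript
// Return every file claimed by more than one story in a parallel wave.
function findFileConflicts(wave: Record<string, string[]>): string[] {
  const owners = new Map<string, string[]>();
  for (const [story, files] of Object.entries(wave)) {
    for (const file of files) {
      owners.set(file, [...(owners.get(file) ?? []), story]);
    }
  }
  return Array.from(owners.entries())
    .filter(([, stories]) => stories.length > 1)
    .map(([file]) => file);
}

const conflicts = findFileConflicts({
  browse: ["src/use-cases/reference/ba/browse.stories.tsx"],
  search: [
    "src/use-cases/reference/ba/search.stories.tsx",
    "src/use-cases/reference/ba/_shared/page.tsx",
  ],
  filter: ["src/use-cases/reference/ba/_shared/page.tsx"],
});
// conflicts === ["src/use-cases/reference/ba/_shared/page.tsx"]
```

A non-empty result means either moving a story to a sequential wave or splitting the shared file.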

Gate B: Get PO sign-off on wave structure and implementation plan.


Exit criteria:

  • Dependency graph documented
  • All stories assigned to waves with parallelism strategy
  • Individual plans generated for all stories (except pre-existing Wave 0)
  • Audit reports show no gaps or conflicts

Phase C: Agent Team Implementation

Goal: Execute the plans wave by wave, producing working Storybook stories.

  1. Create the integration branch: <username>/<domain>-use-cases from main
  2. Configure permissions: Add worktree paths to .claude/settings.json if using parallel agents
  3. Set up progress tracking: Initialize the Progress Tracker

For each wave:

  1. Launch agents with the wave’s plans
    • Parallel waves: Use worktrees (<domain>-impl-worktrees/<story-name>)
    • Sequential waves: Work directly on the integration branch
  2. Verify wave completion: npm run lint, npx tsc --noEmit, npm run test
  3. Merge worktree branches into the integration branch
  4. Commit: feat(<domain>): implement Wave N — <summary>

This workflow uses the following agent personas and skills. Agent definitions and skills are maintained in the workspace repository; the shared location of these definitions may change in the future.

Personas:

| Persona | Purpose |
| --- | --- |
| front-end-engineer | Implements React components, stories, MSW handlers, play functions |
| quality-reviewer | Reviews code for standards compliance (read-only) |
| team-lead | Orchestrates multi-agent waves, manages worktrees |

Skills:

| Skill | Purpose |
| --- | --- |
| launch-team | Launches agent teams against task plans with worktree isolation |
| ui-component | React component creation conventions (separated concerns, Storybook integration) |
| unit-tests-frontend | Frontend testing patterns (Jest, RTL, coverage) |

After all waves complete:

  • Add Table of Contents MDX page linking all stories
  • Add Storybook sidebar navigation (spec-ID prefixes, sort ordering)
  • Add addon-links for cross-story navigation
  • Verify sidebar structure matches the story specification
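
Sidebar sort ordering by spec-ID prefix can be expressed as a comparator. A minimal sketch of the idea (not Storybook's `storySort` API, which compares story index entries); the title format is an assumption.

```typescript
// Order titles by a leading numeric spec-ID, falling back to string order.
// A numeric parse keeps unpadded IDs correct ("2" before "10").
function bySpecId(a: string, b: string): number {
  const num = (t: string) => parseInt(t, 10) || 0;
  return num(a) - num(b) || a.localeCompare(b);
}

const titles = ["10 Delete", "2 Search", "1 Browse"];
titles.sort(bySpecId);
// ["1 Browse", "2 Search", "10 Delete"]
```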

Gate C: Get PO sign-off on implementation output (stories render correctly, play functions pass).


Exit criteria:

  • All stories implemented per specifications
  • All waves pass verification (lint, tsc, test)
  • Progress tracker shows 100% completion
  • Sidebar structure matches specification

Phase D: Stabilization, PR & Documentation


Goal: Get CI green, merge the PR, and annotate the documentation with scope decisions.

Storybook play functions that pass locally often fail in CI due to environment differences. The table below summarizes the most common issues; see CI Stabilization Patterns for the full catalog with code examples.

| Issue | Symptom | Fix |
| --- | --- | --- |
| AG Grid virtualization | Rows in DOM but not "visible" | Don't assert toBeVisible() after findByText on grid cells |
| AG Grid buffer rows | Row count off by 1-2 | Never assert exact row counts |
| AG Grid re-renders | Elements disappear after mutations | Wrap post-mutation assertions in waitFor() |
| Sonner toast animation | findByText succeeds, toBeVisible() fails | Two-step: find the element, then waitFor(() => expect(el).toBeVisible()) |
| Radix dropdown portals | canvas.findByRole('menuitem') fails | Use screen.findByRole('menuitem') — Radix portals to document.body |
| Duplicate text matches | "Found multiple elements" | Scope with within(drawer) after re-querying the container |
| userEvent.tripleClick | Method doesn't exist | Use userEvent.clear() instead |

  1. Push the integration branch
  2. Open PR against main with:
    • Story inventory table
    • Architecture decisions summary
    • Testing notes (what’s covered, known limitations)
  3. Run mandatory pre-push gate: npm run lint, npx tsc --noEmit, npm run test
  4. Monitor CI checks, fix failures, re-push

For each reviewer comment:

  1. Read and understand the concern
  2. Reply with explanation or commit a fix
  3. Resolve the thread

After the PR is merged, update the documentation repository:

  1. Add scenario-level Priority annotations to each Use Case scenario:

    • In-scope scenarios: **Priority**: <release-name>
    • Deferred scenarios: **Priority**: Undefined
    • Simplified scenarios: **Priority**: <release-name> -- simplified: <note>
  2. Add an overview page summarizing all scenarios in two tables (in-scope vs. deferred) with links to each scenario anchor

  3. Update CHANGELOG.md with a new version entry
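
The three Priority formats above can be generated from the scope classification so annotations stay uniform across scenarios. A minimal sketch; the `Scope` type and function name are illustrative.

```typescript
// Hypothetical classification of a scenario's scope decision.
type Scope =
  | { kind: "in-scope" }
  | { kind: "deferred" }
  | { kind: "simplified"; note: string };

// Render the scenario-level Priority annotation line.
function priorityAnnotation(release: string, scope: Scope): string {
  switch (scope.kind) {
    case "in-scope":
      return `**Priority**: ${release}`;
    case "deferred":
      return `**Priority**: Undefined`;
    case "simplified":
      return `**Priority**: ${release} -- simplified: ${scope.note}`;
  }
}

priorityAnnotation("MVP2", { kind: "simplified", note: "no role UI" });
// "**Priority**: MVP2 -- simplified: no role UI"
```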

Capture the complete project lifecycle in a retrospective document:

  • Phases executed with key decisions
  • Technical insights discovered during CI stabilization
  • Process improvements for future projects

Gate D: Get PO sign-off on PR merge and documentation updates.


Exit criteria:

  • CI green on all checks
  • PR reviewed and approved
  • Documentation annotated with scenario priorities
  • CHANGELOG updated
  • Retrospective written

When planning parallel waves, use these heuristics:

  1. Split by file, not by statement count. If one agent gets 3 unique pages and another gets 7 identical components, the pages agent will bottleneck.

  2. Weight mock complexity heavily. Files requiring unique MSW handler strategies take longer than files reusing existing handlers.

  3. Discount template-replicable work. Seven identical typeahead stories finish much faster than seven unique page stories with the same line count.

  4. Give giant files their own agent. Files over 1000 lines should get a dedicated agent to avoid serialization.

  5. Formula: number of unique source files × mock complexity factor. This is a better proxy than estimated statement count.
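
The formula in point 5 can be refined per file: score each planned file by its mock-complexity factor and sum, so a story with one unique-handler page outweighs several template-replicable stories. A minimal sketch; the `PlannedFile` shape and factor values are assumptions.

```typescript
// Hypothetical per-file workload entry.
interface PlannedFile {
  path: string;
  mockComplexity: number; // 1 = reuses existing handlers, higher = unique MSW strategy
}

// Agent workload = sum of per-file mock-complexity factors.
function workloadScore(files: PlannedFile[]): number {
  return files.reduce((sum, f) => sum + f.mockComplexity, 0);
}

const score = workloadScore([
  { path: "pages/affiliate-detail.tsx", mockComplexity: 3 },    // unique handler strategy
  { path: "stories/typeahead.stories.tsx", mockComplexity: 1 }, // template-replicable
]);
// score === 4
```

Balancing waves by comparing these scores, rather than line counts, follows heuristics 2 and 3 above.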


| Artifact | Template | Produced In |
| --- | --- | --- |
| Scope Analysis | Scope Analysis | Phase A |
| Story Specifications | Story Specification | Phase A |
| Application Conventions | Application Conventions | Phase A |
| Implementation Plan | Implementation Plan | Phase B |
| Individual Story Plans | Story Plan | Phase B |
| Progress Tracker | Progress Tracker | Phase C |
| Retrospective | (free-form) | Phase D |

When you create a project using this workflow, the working directory should follow this structure:

<project-dir>/
├── scope-analysis.md              # Phase A output
├── story-specifications.md        # Phase A output
├── application-conventions.md     # Phase A output
├── implementation-project-plan.md # Phase B output
├── implementation-progress.md     # Phase C tracker
├── planning-progress.md           # Phase B tracker
├── plans/                         # Phase B output
│   ├── <ID>_<Story_Name>.md       # One per story
│   └── run-N-audit-report.md      # Audit reports
└── retrospective.md               # Phase D output