Frontend Pipeline — Development Blueprint

Step-by-step development blueprint for the frontend deployment pipeline. Each step corresponds to a phase in the Design Analysis Development Sequence, expanded with specific file changes, acceptance criteria, and verification mechanisms.

  • The Alpha001 account (009765408297) has a fully provisioned demo partition (CDK infrastructure: Cognito, Aurora DB, API Gateway, DNS at demo.app.arda.cards, Secrets Manager entries for Alpha001-demo-*). This was deployed by a previous amm.sh Alpha001 demo run.
  • The arda-frontend-app repository exists in the Arda-cards GitHub organization.
  • The AMPLIFY_GITHUB_ACCESSTOKEN GitHub org secret (or 1Password entry) grants read access to the arda-frontend-app repository.
  • The ARDA_API_KEY_DEMO GitHub org secret exists (provisioned by infrastructure/tools/sync-secrets-from-1password.sh).
  • The infrastructure repository main branch is the deployment source for the amm.yml workflow.

Create an isolated Amplify application in the demo partition (Alpha001) connected to a demo branch, with auto-build disabled. This provides a safe testing ground for the new deployment pipeline without touching any existing Amplify apps.

  • arda-frontend-app (create branch)
  • infrastructure (modify CloudFormation template)
  • Alpha001 AWS account (deploy CloudFormation stacks)

Create the branch off main. No code changes needed — the branch exists solely so the new Amplify app has something to connect to.

Modify: infrastructure/src/main/cfn/amplifyBranch.cfn.yaml

Two changes:

1. Add an EnableAutoBuild parameter so it can be controlled at deployment time:

```yaml
Parameters:
  # ... existing parameters ...
  EnableAutoBuild:
    Type: String
    Default: "true"
    AllowedValues: ["true", "false"]
    Description: "Whether Amplify auto-builds on push to the connected branch"
```

Update the AWS::Amplify::Branch resource to use the parameter:

```yaml
EnableAutoBuild: !Ref EnableAutoBuild
```

The default is true, which preserves the current behavior for all existing deployments. The SandboxKyle002 (kyle) deployment does not pass this parameter, so it gets the default — no behavior change.

2. Add an AmplifyBranchName export so the GitHub Actions workflow can read the branch name from CloudFormation:

```yaml
Outputs:
  AmplifyBranchName:
    Description: The Amplify branch resource name
    Value: !Ref Branch
    Export:
      Name: !Sub "${Infrastructure}-${Partition}-I-AmplifyBranchName"
```

This export is consumed by the reusable deployment workflow alongside the existing AmplifyAppId export from amplify.cfn.yaml.
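The workflow's lookup against these exports can be sketched in bash. This is a hedged sketch: `lookup_export` is a stub standing in for the real `aws cloudformation list-exports` call, and the app ID it returns is a placeholder, not a real value.

```shell
# Stub standing in for the real lookup:
#   aws cloudformation list-exports \
#     --query "Exports[?Name=='${name}'].Value" --output text
lookup_export() {
  case "$1" in
    Alpha001-demo-I-AmplifyAppId)      echo "d1hypothetical0" ;;  # placeholder app ID
    Alpha001-demo-I-AmplifyBranchName) echo "demo" ;;
  esac
}

infrastructure="Alpha001" partition="demo"
app_id="$(lookup_export "${infrastructure}-${partition}-I-AmplifyAppId")"
branch="$(lookup_export "${infrastructure}-${partition}-I-AmplifyBranchName")"
echo "app=${app_id} branch=${branch}"
```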

Modify: infrastructure/amm.sh — update the Amplify gate, branch mapping, and export stacks

Gate condition. The current gate on line 465 excludes Alpha001 and Alpha002 from the Amplify deployment steps:

```shell
# Current — blocks Alpha001/Alpha002
if [[ "${infrastructure}" != "Alpha001" && "${infrastructure}" != "Alpha002" ]]; then
```

This must be updated to check a list of allowed (Infrastructure, Partition) pairs. The list is a constant in amm.sh:

```shell
AMPLIFY_DEPLOY_TARGETS=("SandboxKyle002:kyle" "Alpha001:demo")
```
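A minimal sketch of the pair-based gate check, assuming a helper name (`amplify_target_allowed`) that is illustrative rather than taken from amm.sh:

```shell
AMPLIFY_DEPLOY_TARGETS=("SandboxKyle002:kyle" "Alpha001:demo")

# Succeeds only when the (Infrastructure, Partition) pair is allow-listed.
amplify_target_allowed() {
  local pair="$1:$2" target
  for target in "${AMPLIFY_DEPLOY_TARGETS[@]}"; do
    [[ "${target}" == "${pair}" ]] && return 0
  done
  return 1
}

amplify_target_allowed "Alpha001" "demo" && echo "deploy Amplify stacks"
amplify_target_allowed "Alpha002" "dev"  || echo "skip Amplify stacks"
```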

Partition-to-branch-name mapping. The Amplify branch resource names are a legacy of the original branch-sync model — the prod app’s branch resource is named main (not prod). Rather than special-casing this, amm.sh defines a uniform mapping for all partitions:

```shell
declare -A AMPLIFY_BRANCH_NAMES=(
  [dev]="dev"
  [stage]="stage"
  [demo]="demo"
  [prod]="main"
  [kyle]="main"
)
```

The mapping is used when deploying the AmplifyBranch stack (as the Branch parameter) and when deploying lightweight export stacks for existing apps. This is the single source of truth for branch names — the workflow never derives or assumes them.

Repo/AppName parameterization. The repository and app name are currently hardcoded to kyle-frontend-app. They become an explicit mapping in amm.sh (alongside the existing AMPLIFY_BRANCH_NAMES and AMPLIFY_DEPLOY_TARGETS constants), keyed by (Infrastructure, Partition). No new properties are added to purpose-configuration.
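One way this mapping could look — the variable name `AMPLIFY_REPO_NAMES` and the `Infrastructure:Partition` key shape are illustrative, not taken from amm.sh:

```shell
# Hypothetical mapping constant, keyed by "Infrastructure:Partition".
declare -A AMPLIFY_REPO_NAMES=(
  [SandboxKyle002:kyle]="kyle-frontend-app"
  [Alpha001:demo]="arda-frontend-app"
)

repo_name="${AMPLIFY_REPO_NAMES[Alpha001:demo]}"
echo "${repo_name}"   # arda-frontend-app
```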

EnableAutoBuild parameter. Pass EnableAutoBuild=false for the demo partition’s AmplifyBranch stack:

```shell
branch_name="${AMPLIFY_BRANCH_NAMES[${partition}]}"
aws cloudformation deploy \
  --stack-name "${infrastructure}-${partition}-AmplifyBranch" \
  --template-file "src/main/cfn/amplifyBranch.cfn.yaml" \
  --parameter-overrides \
    "Infrastructure=${infrastructure}" "Partition=${partition}" \
    "Branch=${branch_name}" "EnableAutoBuild=false"
```

Create: infrastructure/src/main/cfn/amplifyExports.cfn.yaml

A lightweight CloudFormation template that publishes AmplifyAppId and AmplifyBranchName as CloudFormation exports. Used for existing manually-created Amplify apps (dev, stage, prod) that have no CloudFormation stacks and therefore no exports. This gives the GitHub Actions workflow a uniform lookup mechanism.

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: "Publish Amplify App ID and Branch Name as CloudFormation exports"
Parameters:
  Infrastructure:
    Type: String
  Partition:
    Type: String
  AmplifyAppId:
    Type: String
    Description: "The existing Amplify App ID"
  AmplifyBranchName:
    Type: String
    Description: "The Amplify branch resource name"
Outputs:
  AmplifyAppId:
    Value: !Ref AmplifyAppId
    Export:
      Name: !Sub "${Infrastructure}-${Partition}-I-AmplifyAppId"
  AmplifyBranchName:
    Value: !Ref AmplifyBranchName
    Export:
      Name: !Sub "${Infrastructure}-${Partition}-I-AmplifyBranchName"
```

The amm.sh script deploys this stack for partitions that have existing Amplify apps but are not in the AMPLIFY_DEPLOY_TARGETS list. At cutover, the dev/stage/prod entries are added to trigger these lightweight stacks. The stack name follows the standard pattern: ${infrastructure}-${partition}-AmplifyExports.
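One plausible reading of this selection rule, sketched as pure bash — the function name `stack_kind` and the `AMPLIFY_EXISTING_APP_IDS` constant are illustrative, though the app IDs are the known dev/stage/prod IDs:

```shell
AMPLIFY_DEPLOY_TARGETS=("SandboxKyle002:kyle" "Alpha001:demo")

# Hypothetical constant for the manually created apps (known app IDs).
declare -A AMPLIFY_EXISTING_APP_IDS=(
  [Alpha002:dev]="d38w5m1ngjza76"
  [Alpha002:stage]="d1kbrvra79y8sc"
  [Alpha001:prod]="duhexavnwh88g"
)

# Full Amplify stacks for allow-listed pairs, lightweight export stacks
# for partitions with an existing app, nothing otherwise.
stack_kind() {
  local pair="$1:$2" target
  for target in "${AMPLIFY_DEPLOY_TARGETS[@]}"; do
    [[ "${target}" == "${pair}" ]] && { echo "full"; return; }
  done
  if [[ -n "${AMPLIFY_EXISTING_APP_IDS[${pair}]:-}" ]]; then
    echo "exports"
  else
    echo "none"
  fi
}

stack_kind Alpha001 demo    # full
stack_kind Alpha002 dev     # exports
stack_kind Alpha001 other   # none
```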

Modify: infrastructure/.github/workflows/amm.yml — fix secret selection

The workflow currently hardcodes Kyle/Stage-specific secrets for the ARDA_API_KEY. Per-environment API key secrets already exist as GitHub org secrets (provisioned by infrastructure/tools/sync-secrets-from-1password.sh): ARDA_API_KEY_DEV, ARDA_API_KEY_STAGE, ARDA_API_KEY_DEMO, ARDA_API_KEY_PROD, ARDA_API_KEY_KYLE.

Update the ARDA_API_KEY env var to use dynamic secret selection:

ARDA_API_KEY: "${{ secrets[format('ARDA_API_KEY_{0}', steps.partition.outputs.partition)] }}"

The other four secrets (ARDA_SIGNUP_KEY, HUBSPOT_CLIENT_KEY, HUBSPOT_PAT, PYLON_WIDGET_KEY) are shared across environments and can remain as-is. The partition names (demo, dev, stage, prod, kyle) match the secret name suffixes exactly.

Deploy via the amm GitHub Actions workflow

The infrastructure repo has an amm.yml workflow (workflow_dispatch) that runs amm.sh. The deployment is triggered by selecting Alpha001/demo from the environment dropdown — this option already exists in the workflow.

The workflow:

  1. Fetches purpose-configuration for the demo partition
  2. Assumes the Alpha001-I-GitHubActionInfrastructure role via OIDC
  3. Runs ./amm.sh Alpha001 demo

This runs the full amm.sh sequence: CDK infrastructure, EKS logging, load balancer, external secrets, nginx, target groups, partition secrets (with the correct ARDA_API_KEY_DEMO), and (after the gate is updated) the Amplify stacks.

The stacks can also be deployed locally via amm.sh Alpha001 demo with the right AWS profile and 1Password secrets. The local path reads ARDA_API_KEY from 1Password (Arda-SandboxKyle/ARDA-API-KEY), which is Kyle-specific — if deploying locally, override it with the correct demo value: export ARDA_API_KEY="$(op read 'op://Arda-DemoOAM/ARDA-API-KEY/password')" before running amm.sh.

  1. The demo branch exists in arda-frontend-app
  2. amplifyBranch.cfn.yaml accepts EnableAutoBuild as a parameter with default true
  3. amplifyBranch.cfn.yaml exports ${Infrastructure}-${Partition}-I-AmplifyBranchName
  4. amplifyExports.cfn.yaml exists as the lightweight export template
  5. amm.sh has a partition-to-branch-name mapping constant and an AMPLIFY_DEPLOY_TARGETS allow-list
  6. The amm.sh gate condition allows the demo partition on Alpha001 to execute the Amplify steps
  7. The amm.sh Amplify steps use an explicit Repo/AppName mapping constant (uses arda-frontend-app for Alpha001/demo)
  8. The amm.yml workflow uses secrets[format('ARDA_API_KEY_{0}', partition)] for per-environment secret selection
  9. The existing Kyle deployment (SandboxKyle002) is unaffected by all template, script, and workflow changes
  10. Alpha001-demo-Amplify stack is deployed with the demo partition’s environment variables and correct secrets
  11. Alpha001-demo-AmplifyBranch stack is deployed with EnableAutoBuild=false
  12. CloudFormation exports Alpha001-demo-I-AmplifyAppId and Alpha001-demo-I-AmplifyBranchName are available
  13. The Amplify app exists and is connected to the demo branch
  14. The initial deployment completes successfully (triggered by start-job as part of the amm.sh execution — see Step 2.5.4 in Amplify App Provisioning)
  15. The site at demo.app.arda.cards loads and is functional

Before deployment — template and script validation:

  • Validate the template: aws cloudformation validate-template --template-body file://src/main/cfn/amplifyBranch.cfn.yaml
  • Review the amm.sh changes: confirm the gate condition allows Alpha001/demo and the Repo/AppName are correct

Kyle regression check:

  • Deploy to Kyle first: trigger the amm workflow with SandboxKyle002/kyle (or run amm.sh SandboxKyle002 kyle locally)
  • Verify the Kyle AmplifyBranch stack updates successfully (or reports “No changes to deploy”)
  • Verify Kyle auto-build is still enabled: aws amplify get-branch --app-id {kyle-app-id} --branch-name main --query "branch.enableAutoBuild" returns true
  • Push a commit to kyle-frontend-app main and confirm auto-build triggers

Demo deployment:

  • Trigger the amm workflow with Alpha001/demo (or run amm.sh Alpha001 demo locally)
  • Verify the branch exists: git ls-remote --heads origin demo
  • Verify the stacks: aws cloudformation describe-stacks --stack-name Alpha001-demo-Amplify and Alpha001-demo-AmplifyBranch
  • Verify auto-build is disabled: aws amplify get-branch --app-id {id} --branch-name demo --query "branch.enableAutoBuild" returns false
  • Verify the site loads at demo.app.arda.cards after the initial start-job
  • Step 1 is complete: the Alpha001-demo-Amplify and Alpha001-demo-AmplifyBranch CloudFormation stacks are deployed and the demo Amplify app is functional.
  • The infrastructure repository changes from Step 1 (amplifyBranch.cfn.yaml, amm.sh, amm.yml) are merged to main.
  • AWS CLI access to both Alpha001 (009765408297) and Alpha002 (139852620346) accounts is available (via amm.sh locally or the amm.yml workflow).

Create the dedicated frontend IAM role in both AWS accounts (Alpha001 and Alpha002). The role grants GitHub Actions permission to trigger Amplify deployments and poll their status.

  • infrastructure

Modify: src/main/cdk/constructs/oam/gh-oidc-provider.ts

Add a new frontendDeploymentRole() private method alongside the existing role methods. The new role:

  • Name: ${prefix}-API-GitHubActionFrontEnd
  • OIDC trust: repo:Arda-cards/arda-frontend-app scoped to refs/heads/main, refs/heads/patch, and refs/heads/demo
  • Permissions:
    • amplify:StartJob — trigger builds
    • amplify:GetJob — poll deployment status
    • amplify:GetApp — resolve app metadata
    • amplify:GetBranch — resolve branch metadata
    • cloudformation:ListExports — read Amplify App ID and Branch Name from CloudFormation exports
  • Resource scope: Amplify: arn:aws:amplify:${region}:${account}:apps/*. CloudFormation: * (ListExports does not support resource-level restrictions).

Expose the new role via the GhOidcProviderBuilt interface.

  • amm.sh Alpha001 demo prod — deploys to Alpha001 (demo + prod partitions)
  • amm.sh Alpha002 dev stage — deploys to Alpha002 (dev + stage partitions)

The role is part of the infrastructure-level CDK app, so it deploys alongside the existing OIDC constructs with no separate step.

  1. The GhOidcProvider construct creates a fourth role (${prefix}-API-GitHubActionFrontEnd) alongside the existing three
  2. npx cdk synth succeeds for both Alpha001 and Alpha002 infrastructure apps
  3. The synthesized CloudFormation template contains the new role with the correct OIDC trust conditions and Amplify permissions
  4. amm.sh deploys successfully to both accounts
  5. The role exists in both accounts after deployment
  6. A GitHub Actions workflow on the demo branch of arda-frontend-app can assume the role in Alpha001 (verified via a temporary minimal test workflow — see Verification below)
  • Run npx cdk synth and inspect the generated template for the role
  • Deploy via amm.sh to both accounts
  • Verify the role: aws iam get-role --role-name Alpha001-API-GitHubActionFrontEnd (Alpha001) and aws iam get-role --role-name Alpha002-API-GitHubActionFrontEnd (Alpha002)
  • Test OIDC assume: create a temporary minimal workflow (.github/workflows/test-oidc.yaml) on the demo branch that assumes the role and runs aws sts get-caller-identity. This workflow is deleted in Step 3 when the real workflows are created.
  • Step 1 is complete: the demo branch exists in arda-frontend-app and the demo Amplify app is deployed and functional at demo.app.arda.cards. CloudFormation exports Alpha001-demo-I-AmplifyAppId and Alpha001-demo-I-AmplifyBranchName are available.
  • Step 2 is complete: the Alpha001-API-GitHubActionFrontEnd IAM role exists and can be assumed via OIDC from the demo branch.
  • The Arda-cards/purpose-configuration-action GitHub Action is available (used by the backend; no changes needed).
  • The ARDA_PURPOSE_LOCATOR_READER_TOKEN and ARDA_PURPOSE_LOCATOR_BASE_URL org-level secrets/vars exist (used by the backend workflows).

Create the GitHub Actions workflows on the demo branch. The workflows use StartJob to trigger Amplify builds and GetJob to poll status. All build and environment variable logic remains within Amplify.

  • arda-frontend-app (on demo branch)
  • arda-frontend-app (GitHub settings — environments)

Main deployment workflow:

  • Trigger: workflow_dispatch during development; switches to workflow_run on ci.yaml success on main at cutover
  • Matrix over partitions with max-parallel: 1 for sequential deployment
    • During development on demo branch: matrix is [demo]
    • After cutover on main: matrix is [dev, stage, demo, prod]
  • Each matrix entry:
    • Sets environment: ${{ matrix.purpose }} for authorization gates
    • Calls reusable_deployment.yaml with the partition as input

Manual redeploy workflow:

  • Trigger: workflow_dispatch with inputs partition (choice: dev/stage/demo/prod) and commit_sha (string)
  • Single partition, no matrix
  • Verifies CI passed for the SHA (commit status API); runs CI inline if not
  • Calls reusable_deployment.yaml with the partition and commit SHA
  • Uses environment: ${{ inputs.partition }} for authorization gates
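The CI check step can be sketched as follows. This is a hedged sketch: `ci_state` is a stub standing in for a combined-status lookup (for example, `gh api "repos/Arda-cards/arda-frontend-app/commits/<sha>/status" --jq .state`), and the SHA is a placeholder:

```shell
# Stub for the GitHub commit status API (combined status: success/pending/failure).
ci_state() { echo "success"; }

commit_sha="0000000"   # placeholder SHA
case "$(ci_state "${commit_sha}")" in
  success)         echo "CI passed for ${commit_sha} — deploying" ;;
  pending|failure) echo "CI not green — run CI inline or abort" ;;
  *)               echo "no CI status — run CI inline or abort" ;;
esac
```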

Create: .github/workflows/reusable_deployment.yaml

Reusable workflow called by both deploy.yaml and redeploy.yaml:

  • Inputs: partition (string), commit_sha (string, optional — defaults to HEAD of connected branch)
  • Sets environment: ${{ inputs.partition }}
  • Permissions: contents: read, id-token: write
  • Steps:
    1. Fetch purpose-configuration for the partition using Arda-cards/purpose-configuration-action → get aws_role, aws_region
    2. Derive the frontend role ARN: append FrontEnd to the role name in aws_role (e.g., Alpha001-API-GitHubAction becomes Alpha001-API-GitHubActionFrontEnd)
    3. Parse the infrastructure prefix from the role name (text before -API-GitHubAction, e.g., Alpha001)
    4. Configure AWS credentials via OIDC using the derived frontend role ARN
    5. Read CloudFormation exports: ${Infrastructure}-${Partition}-I-AmplifyAppId and ${Infrastructure}-${Partition}-I-AmplifyBranchName
    6. Call aws amplify start-job --app-id {AmplifyAppId} --branch-name {AmplifyBranchName} --job-type RELEASE (with --commit-id if commit_sha is provided)
    7. Poll aws amplify get-job until status is SUCCEED or FAILED
    8. Fail the workflow if the deployment fails
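Steps 7–8 can be sketched as a self-contained polling loop. In this sketch, `get_job_status` is a stub standing in for `aws amplify get-job` (a real run would read the job status from its JSON output, e.g. `job.summary.status`) and succeeds on the third poll:

```shell
# Stub standing in for `aws amplify get-job`; increments a poll counter
# and reports SUCCEED from the third poll onward.
get_job_status() {
  POLLS=$((POLLS + 1))
  if (( POLLS >= 3 )); then JOB_STATUS="SUCCEED"; else JOB_STATUS="RUNNING"; fi
}

# Step 7: poll until the job reaches a terminal status.
poll_deployment() {
  while true; do
    get_job_status
    case "${JOB_STATUS}" in
      SUCCEED) return 0 ;;
      FAILED|CANCELLED) return 1 ;;   # step 8: fail the workflow
    esac
    # a real loop would sleep here (e.g. 15s) before polling again
  done
}

POLLS=0
poll_deployment && echo "deployment SUCCEED after ${POLLS} polls"
```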
  • dev: no protection rules
  • stage: required_reviewers — denisa, jmpicnic, danmerb, davequinta
  • prod: required_reviewers — denisa, jmpicnic, danmerb, davequinta
  1. All three workflow files exist on the demo branch and pass YAML lint
  2. Manually triggering deploy.yaml on the demo branch:
    • Fetches demo.properties from purpose-configuration
    • Assumes the Alpha001-API-GitHubActionFrontEnd role via OIDC
    • Calls StartJob against the demo Amplify app
    • Polls GetJob until the deployment completes
    • The Amplify build succeeds (Amplify runs npm ci, resolves env vars, runs npm run build, deploys)
  3. The site at demo.app.arda.cards is updated with the latest code from the demo branch
  4. GitHub environments are configured with the correct protection rules
  5. Triggering with a partition that has required_reviewers pauses for approval
  • Trigger deploy.yaml via GitHub Actions UI on the demo branch
  • In the workflow logs, confirm:
    • Purpose-configuration fetch succeeded (check logged properties — not secret values)
    • OIDC role assumption succeeded (aws sts get-caller-identity)
    • StartJob returned a jobId
    • GetJob polling shows status progressing to SUCCEED
  • Visit demo.app.arda.cards and verify the page loads
  • Check the Amplify Console for the demo app: confirm the build was triggered by the StartJob call (not auto-build)
  • Step 3 is complete: all three workflow files (deploy.yaml, redeploy.yaml, reusable_deployment.yaml) exist on the demo branch and deploy.yaml has successfully triggered at least one Amplify build.
  • The demo site at demo.app.arda.cards is accessible.
  • GitHub environments (dev, stage, prod) are configured on arda-frontend-app with the correct protection rules.
  • Test user credentials are available for signing in to the demo site (user performs the sign-in manually).
  • The demo branch has at least two distinct commits on it so redeploy.yaml can be tested with an older SHA.

Validate the full pipeline by deploying to the demo partition and verifying functional equivalence with the existing deployments. Test the redeploy workflow and CI check gate.

  • arda-frontend-app (on demo branch — no code changes, workflow triggers only)

No code changes. This step is purely execution and verification.

  1. deploy.yaml successfully deploys to the demo partition
  2. The application at demo.app.arda.cards is functional:
    • Page loads without errors
    • User can sign in with test credentials (user logs in manually)
    • Navigation to Items, Order Queue, and other pages works
    • No NEXT_PUBLIC_* configuration errors in the browser console
  3. redeploy.yaml successfully deploys a specific commit SHA to the demo partition:
    • The deployed version matches the requested SHA (verify in the Amplify Console build log)
  4. The CI check gate works:
    • Attempting to redeploy a SHA that has no CI status is handled correctly (CI runs inline, or the workflow aborts)
  5. The demo deployment does not affect any other partition (dev, stage, prod remain on their existing branch-sync deployments)
  • Trigger deploy.yaml on the demo branch, verify deployment succeeds
  • Visit demo.app.arda.cards:
    • User signs in with test credentials
    • Agent verifies page navigation via Playwright MCP or reusing existing E2E test patterns
  • Trigger redeploy.yaml with a specific earlier commit SHA from demo:
    • Verify the Amplify Console shows the build used the specified commit
    • Verify the site reflects the older version
  • Redeploy the latest commit to restore current state
  • Confirm dev/stage/prod sites are unchanged by checking their Amplify Console (no new builds triggered)
  • Step 4 is complete: the demo pipeline is validated — deploy.yaml and redeploy.yaml work correctly against the demo partition, the CI check gate is functional, and the demo site is confirmed functional.
  • The Alpha002-API-GitHubActionFrontEnd IAM role exists in the Alpha002 account (deployed upfront in Step 2) and has been verified to be assumable from the arda-frontend-app repository (test via a minimal workflow on main, similar to the Step 2 OIDC test).
  • The existing Amplify app IDs for dev (d38w5m1ngjza76), stage (d1kbrvra79y8sc), and prod (duhexavnwh88g) have been verified against the live AWS accounts via aws amplify get-app --app-id {id}.
  • AWS CLI access to both Alpha001 and Alpha002 accounts is available for reconfiguring existing Amplify apps.
  • The rollback-plan.md is written and reviewed before any changes to existing Amplify apps.
  • All stakeholders (denisa, jmpicnic, danmerb, davequinta) are aware of the cutover and available as reviewers for the stage and prod environment gates.

Migrate the existing Amplify apps (dev, stage, prod) to the new GitHub Actions pipeline. Disable the existing branch-sync auto-build, connect all apps to main, and perform the first full sequential deployment with authorization gates.

  • arda-frontend-app (merge to main)
  • infrastructure (deploy lightweight export stacks; remove demo from OIDC scope — follow-up)
  • Amplify app settings (AWS CLI — disable auto-build on existing apps)

1. Create: rollback-plan.md (in the project working directory)

Document manual rollback procedures before beginning the cutover:

  • How to re-enable auto-build on each Amplify app: aws amplify update-branch --app-id {id} --branch-name {branch} --enable-auto-build
  • How to trigger a manual build from the original branch: aws amplify start-job --app-id {id} --branch-name {branch} --job-type RELEASE
  • How to redeploy a previous commit via redeploy.yaml with a known-good SHA
  • Contact list and escalation path

2. Deploy lightweight export stacks for existing apps

The existing dev, stage, and prod Amplify apps were created manually and have no CloudFormation exports. Deploy the amplifyExports.cfn.yaml stack for each, using the known app IDs and the branch name mapping from amm.sh:

```shell
# Alpha002 — dev
aws cloudformation deploy \
  --stack-name "Alpha002-dev-AmplifyExports" \
  --template-file "src/main/cfn/amplifyExports.cfn.yaml" \
  --parameter-overrides \
    "Infrastructure=Alpha002" "Partition=dev" \
    "AmplifyAppId=d38w5m1ngjza76" "AmplifyBranchName=dev"

# Alpha002 — stage
aws cloudformation deploy \
  --stack-name "Alpha002-stage-AmplifyExports" \
  --template-file "src/main/cfn/amplifyExports.cfn.yaml" \
  --parameter-overrides \
    "Infrastructure=Alpha002" "Partition=stage" \
    "AmplifyAppId=d1kbrvra79y8sc" "AmplifyBranchName=stage"

# Alpha001 — prod
aws cloudformation deploy \
  --stack-name "Alpha001-prod-AmplifyExports" \
  --template-file "src/main/cfn/amplifyExports.cfn.yaml" \
  --parameter-overrides \
    "Infrastructure=Alpha001" "Partition=prod" \
    "AmplifyAppId=duhexavnwh88g" "AmplifyBranchName=main"
```

These are deployed via amm.sh (which reads the app IDs and branch names from its constants). The stacks can also be deployed locally with the correct AWS profile.

3. Migrate dev from auto-build to matrix deployment

Disable auto-build on the dev Amplify app and add it to the workflow matrix. This is the first partition migrated — verify it works before proceeding.

  1. Disable auto-build:
    aws amplify update-branch --app-id d38w5m1ngjza76 --branch-name dev --no-enable-auto-build
  2. Modify .github/workflows/deploy.yaml:
    • Switch trigger from workflow_dispatch to workflow_run on ci.yaml success on main (retain workflow_dispatch as secondary trigger)
    • Update matrix from [demo] to [dev, demo]
  3. Verify: trigger deploy.yaml via workflow_dispatch and confirm dev deploys successfully at dev.alpha002.app.arda.cards

4. Migrate stage from auto-build to matrix deployment

  1. Disable auto-build:
    aws amplify update-branch --app-id d1kbrvra79y8sc --branch-name stage --no-enable-auto-build
  2. Modify .github/workflows/deploy.yaml:
    • Update matrix from [dev, demo] to [dev, stage, demo]
  3. Verify: trigger deploy.yaml via workflow_dispatch and confirm stage deploys successfully (requires reviewer approval) at stage.alpha002.app.arda.cards

5. Migrate prod from auto-build to matrix deployment

  1. Disable auto-build:
    aws amplify update-branch --app-id duhexavnwh88g --branch-name main --no-enable-auto-build
  2. Modify .github/workflows/deploy.yaml:
    • Update matrix from [dev, stage, demo] to [dev, stage, demo, prod]

6. Enable PR preview deployments on the dev app

Enable Amplify’s built-in pull request preview feature on the dev Amplify app so developers get fast feedback on PRs before merge:

  1. Enable PR previews:
    aws amplify update-branch --app-id d38w5m1ngjza76 --branch-name dev --enable-pull-request-preview
  2. Test: open a test PR against main, verify Amplify builds and deploys to a preview URL, verify the preview URL is posted as a comment on the PR, verify sign-in works.

Note: No Cognito callback URL changes are needed. The application uses direct password authentication (USER_PASSWORD_AUTH), not an OAuth/OIDC authorization code flow — sign-in works via API calls regardless of the serving domain.

PR previews are Amplify-managed and build independently of GitHub Actions CI — they trigger immediately on PR open/push via a GitHub webhook. The Amplify build spec includes npm run test (Jest unit tests) before npm run build as a quality gate: if tests fail, the build fails and the preview is not deployed. Lint and e2e tests remain GitHub Actions-only and gate the merge to main, not the preview. Each push to the PR branch redeploys to the same preview URL. The preview is automatically deleted when the PR is closed or merged.

Modify: Amplify build spec — add unit tests

Update the Amplify build spec (inline on the dev app, or via amplify.yml in the repository) to run Jest unit tests before the build:

```yaml
build:
  commands:
    - npm run test
    - npm run build
```

This applies to both PR preview builds and official StartJob deployments, since Amplify uses the same build spec for both. Unit tests running twice (in GitHub Actions CI and in Amplify) is acceptable — Jest tests add only seconds to the 3-5 minute Amplify build.

7. Modify: infrastructure/src/main/cdk/constructs/oam/gh-oidc-provider.ts (follow-up)

Remove refs/heads/demo from the OIDC trust conditions. Deploy via amm.sh.

8. Create: “How to Develop in the Front End” guide

Create documentation/src/content/docs/process/craft/implementation/frontend-development.md — a developer-facing guide that documents the new workflow. This is the primary reference for developers working on the frontend after the pipeline transition. It should cover:

  • Development workflow: open a PR against main → Amplify PR preview deploys automatically → verify at preview URL → push updates → merge when ready
  • PR preview details: how to find the preview URL (Amplify comment on the PR), what backend it talks to (dev partition), what quality gate applies (unit tests in Amplify build spec), automatic cleanup on PR close/merge
  • Production deployment pipeline: what happens after merge to main — CI runs, then sequential deployment to dev → stage → demo → prod with authorization gates
  • Manual redeploy / rollback: how to use redeploy.yaml to deploy a specific SHA to a single partition
  • Local development: existing npm run dev workflow, environment variable setup, .env.local configuration
  • Environment map: which URLs correspond to which partitions, accounts, and purposes

9. Create: post-cutover-instructions.md (in the project working directory)

Document deferred procedures for the user to follow:

  • Production verification procedure: Step-by-step instructions for the user to verify the prod migration is working correctly at live.app.arda.cards — sign-in, page navigation, key workflows. This verification must be coordinated with notice to business stakeholders and customers, as it confirms the production site is now served by the new pipeline.
  • validate-pr-source.yml relaxation sequence (three options: remove, two-step, invert)
  • dev, stage, and demo branch deletion in arda-frontend-app
  • Alpha001-demo-Amplify and Alpha001-demo-AmplifyBranch stack deletion (if demo becomes permanent, skip this)
  1. rollback-plan.md exists and documents rollback procedures for each partition
  2. Lightweight export stacks are deployed for dev, stage, and prod with correct AmplifyAppId and AmplifyBranchName values
  3. CloudFormation exports are available for all four partitions (demo from amplify.cfn.yaml/amplifyBranch.cfn.yaml; dev/stage/prod from amplifyExports.cfn.yaml)
  4. Each partition was migrated incrementally and verified before proceeding to the next:
    • dev: auto-build disabled, matrix [dev, demo], deployment verified at dev.alpha002.app.arda.cards
    • stage: auto-build disabled, matrix [dev, stage, demo], deployment verified at stage.alpha002.app.arda.cards (with reviewer approval)
    • prod: auto-build disabled, matrix [dev, stage, demo, prod]
  5. deploy.yaml triggers automatically when CI succeeds on main (and retains workflow_dispatch as secondary trigger)
  6. The first full sequential deployment from main deploys successfully: dev (no gate) → stage (reviewer approval) → demo (no gate) → prod (reviewer approval)
  7. All four sites are functional after deployment
  8. Rollback mechanism verified: redeploy.yaml successfully deploys a previous SHA to dev as a dry-run
  9. PR preview deployments work on the dev app: opening a PR triggers a preview build, the preview URL is posted on the PR, sign-in works, and the preview is deleted on PR close
  10. “How to Develop in the Front End” guide exists at documentation/src/content/docs/process/craft/implementation/frontend-development.md
  11. post-cutover-instructions.md exists with deferred cleanup procedures

Relevant decisions: C2 (Alpha002 role upfront), C3 (no CloudFormation management of existing apps), C4 (matrix includes demo), C5 (branches deleted after stable), C6 (reusable workflows only), D5 (matrix hardcoded, changed at merge), D6 (Amplify branch names unchanged).

  • Verify rollback plan: review `rollback-plan.md`; test `aws amplify update-branch --enable-auto-build` on a non-production partition
  • Verify exports: for each partition, `aws cloudformation list-exports --query "Exports[?Name=='${Infra}-${Part}-I-AmplifyAppId'].Value"` returns the correct app ID
  • Verify auto-build disabled: for each app, `aws amplify get-branch --app-id {id} --branch-name {branch} --query "branch.enableAutoBuild"` returns `false`
  • Merge a PR to main and confirm `ci.yaml` runs, then `deploy.yaml` triggers via `workflow_run`
  • Monitor the sequential deployment: dev auto-deploys, stage waits for approval, demo auto-deploys, prod waits for approval
  • After prod deployment, visit live.app.arda.cards and verify:
    • Page loads, authentication works, API calls succeed
    • The deployed version matches the merged commit SHA
  • Rollback dry-run: use `redeploy.yaml` to deploy the previous production SHA to dev as a test of the rollback mechanism
  • PR preview: open a test PR against main, verify:
    • Amplify builds the PR branch and deploys to a preview URL
    • The preview URL is posted as a comment on the GitHub PR
    • The preview site loads and sign-in works (direct password auth, no callback URL needed)
    • Pushing a new commit to the PR branch redeploys to the same URL
    • Closing the PR deletes the preview deployment
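The export and auto-build checks above can be scripted. A minimal sketch — the `Alpha002` infrastructure name is an assumption taken from the partition hostnames, and the aws calls are shown as comments because they require credentials:

```shell
# Compose the CloudFormation export name used by the verification queries above.
export_name() {
  printf '%s-%s-I-AmplifyAppId' "$1" "$2"
}

echo "$(export_name Alpha002 dev)"
# With AWS credentials, the app ID and auto-build flag could then be checked:
#   aws cloudformation list-exports \
#     --query "Exports[?Name=='$(export_name Alpha002 dev)'].Value" --output text
#   aws amplify get-branch --app-id "$APP_ID" --branch-name "$BRANCH" \
#     --query branch.enableAutoBuild   # expect: false
```

Looping `for part in dev stage demo prod` over `export_name` covers all four partitions in one pass.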
| Id | Step | Question | Decision |
| --- | --- | --- | --- |
| D1 | 1 | `amm.sh` gate update strategy: the gate currently blocks both Alpha001 and Alpha002. For Step 1 we only need Alpha001/demo. Should we (a) allow all Alpha001 partitions (risks accidentally deploying Amplify to prod), (b) allow only demo on Alpha001 (more conditional logic), or (c) remove the gate entirely and parameterize Repo/AppName per infrastructure/partition (cleanest long-term, larger change)? | The gate should check a list of (Infrastructure, Partition) pairs to deploy, containing only the Kyle Infra + Partition and (Alpha001, demo). The list should be a constant in `amm.sh`. |
| D2 | 1 | `amm.sh` Repo/AppName parameterization: currently hardcoded to `kyle-frontend-app`. For Alpha001/demo it must be `arda-frontend-app`. How should this be parameterized — a case statement on `${infrastructure}`, an environment variable, or a parameter read from purpose-configuration? | An explicit mapping constant in `amm.sh`, keyed by (Infrastructure, Partition). No new properties in purpose-configuration — all additional mappings (Repo, AppName, branch names, app IDs, regions) are maintained as constants in `amm.sh`. |
| D3 | 1 | `amm.yml` workflow secrets: the workflow passes Kyle/Stage-specific secrets (`ARDA_API_KEY_KYLE`, `HUBSPOT_CLIENT_KEY_STAGE`, etc.) for the partitionSecrets CloudFormation step. For Alpha001/demo, these values may be wrong. Should the workflow be updated with per-environment secret selection (similar to the backend's `secrets[format('KEY_{0}', matrix.purpose)]` pattern), or should Step 1 be deployed locally to sidestep this? | Update the workflow to use `secrets[format('ARDA_API_KEY_{0}', partition)]` — per-environment API key secrets already exist as GitHub org secrets (see `infrastructure/tools/sync-secrets-from-1password.sh`). The other four secrets (signup, hubspot, pylon) are shared across environments and can remain as-is. |
| D4 | 1 | `EnableAutoBuild` parameter type: CloudFormation `AWS::Amplify::Branch` expects `EnableAutoBuild` as a boolean, but CloudFormation parameters don't support boolean types. Should we use String with `AllowedValues: ["true", "false"]` and an `!Equals` condition, or is there a simpler pattern used elsewhere in this codebase? | Use a String with those allowed values. |
| D5 | 3 | Workflow on demo branch with [demo] matrix: the `deploy.yaml` workflow on the demo branch needs a [demo] matrix. After cutover on main it becomes [dev, stage, demo, prod]. Should this be (a) a `workflow_dispatch` input with a default, (b) hardcoded and changed at merge time, or (c) derived from the branch name? | (b) |
| D6 | 5 | Branch connection change for existing apps: at cutover, existing apps (connected to dev/stage/main branches) must be reconnected to main. StartJob uses the `branchName` parameter to identify the Amplify branch resource, not the git branch. The workflow reads the branch name from CloudFormation exports (`AmplifyBranchName`), so it always uses the correct Amplify branch resource name regardless of which git branch is connected. | Amplify branch names remain unchanged throughout this migration. The branch-name mapping in `amm.sh` and CloudFormation exports handle the partition→branch-name indirection uniformly. |
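Decisions D1 and D2 can be sketched together. In this sketch the function names and the Kyle pair are hypothetical placeholders (the real Kyle (Infrastructure, Partition) value is whatever `amm.sh` currently deploys); only the `kyle-frontend-app`/`arda-frontend-app` mapping and the gate-as-constant idea come from the table:

```shell
# Hypothetical amm.sh constants implementing D1 (deploy gate) and D2 (repo mapping).
# "Kyle:dev" below is a placeholder pair, not confirmed by the blueprint.
DEPLOYABLE_PAIRS="Kyle:dev Alpha001:demo"

gate_allows() {               # gate_allows <infrastructure> <partition>
  for pair in $DEPLOYABLE_PAIRS; do
    [ "$pair" = "$1:$2" ] && return 0
  done
  return 1
}

repo_for() {                  # repo_for <infrastructure> <partition>
  case "$1:$2" in
    Kyle:*)        echo "kyle-frontend-app" ;;
    Alpha001:demo) echo "arda-frontend-app" ;;
    *)             echo "no repo mapping for $1/$2" >&2; return 1 ;;
  esac
}
```

Per D2, AppName, branch names, app IDs, and regions would be added as further constants alongside these, rather than as purpose-configuration properties.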