# Running API Tests
This guide explains how to build, deploy, and run Bruno API tests for any component against a local Kubernetes cluster, and how to interpret the results. The workflow applies to any component that has properly labelled tests in the `api-test` project. The operations component is used as the primary example throughout.
For Bruno test authoring conventions — how to structure collections, write workflows, handle authentication, and use variables — see api-testing-bruno.md.
## Prerequisites

Before running tests, ensure:
- Docker and a local Kubernetes cluster (e.g., `minikube` or `k3d`) are running.
- Your `kubectl` context is pointing at the local cluster.
- The `api-test` project is checked out and its dependencies are installed (`npm install` in the `api-test` directory).
- Environment credential files are populated under `api-test/1Password/` (see the `api-test` README for 1Password CLI setup).
- The component you are testing has `make build` and `make localInstall` targets.
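A quick sanity pass over these prerequisites, as a sketch using standard Docker and kubectl commands:

```sh
# Confirm Docker is reachable, and that kubectl points at the local cluster.
docker info > /dev/null && echo "docker: ok"
kubectl config current-context
kubectl cluster-info
```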
## Workflow

### Step 1: Build the Component

From the component’s repository root:
```sh
make build
```

Verify the build exits successfully before continuing. A failed build will produce a stale or absent image, and the install step will either fail or run with old code.
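When this step runs inside a script, make the exit status fatal so a stale image never reaches the install step; a minimal sketch:

```sh
# Stop immediately on a failed build rather than installing stale code.
make build || { echo "build failed; aborting" >&2; exit 1; }
```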
For operations:

```sh
# In the operations/ directory
make build
```

### Step 2: Install to the Local Cluster
```sh
make localInstall
```

After the install completes, confirm the pod is running and has reached Ready status:
```sh
kubectl get pods -n <component-namespace>
```

For operations:
```sh
kubectl get pods -n operations
```

Wait until the pod shows `Running` and all containers are ready (e.g., `1/1`). If the pod is in `ContainerCreating` or `Pending`, give it a few seconds and re-check.
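If you prefer to block until readiness rather than polling by hand, `kubectl wait` does the check in one step; a minimal sketch assuming everything in the `operations` namespace should become Ready:

```sh
# Block until every pod in the namespace reports Ready, or fail after 2 minutes.
kubectl wait --for=condition=Ready pods --all -n operations --timeout=120s
```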
### Step 3: Run the API Tests

The `api-test` Makefile provides targets scoped to each component and environment. For local development, use the `local` environment.
General pattern:

```sh
# In the api-test/ directory
make test-<component>-<env>
```

For operations against the local cluster:
```sh
# In the api-test/ directory
make test-operations-local
```

If the first run fails immediately after pod startup, this is usually a timing issue: the pod is healthy but its request handlers are not yet fully initialized. Wait five seconds and retry.
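If you are scripting this step, a single automatic retry absorbs the startup window; a minimal sketch:

```sh
# Retry once after a short delay to cover the post-startup window.
make test-operations-local || { sleep 5; make test-operations-local; }
```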
### Available Environments

| Environment | When to use |
|---|---|
| `local` | Local Kubernetes cluster on your machine |
| `dev` | Shared development environment |
| `ci` | CI pipeline (typically driven by automated tooling, not manually) |
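The environment name slots directly into the target pattern. For example, running the operations suite against the shared development environment (assuming your `dev` credentials under `api-test/1Password/` are populated):

```sh
make test-operations-dev
```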
### Step 4: Verify Test Results

The test runner writes a JSON results file to `api-test/out/`. Parse it with the results checker to get a structured pass/fail summary:
```sh
# In the api-test/ directory
npx tsx /workspace/instructions/claude/scripts/check-api-results.ts \
  out/<results-file>.json \
  --ignore Upload
```

For operations-local:
```sh
npx tsx /workspace/instructions/claude/scripts/check-api-results.ts \
  out/operations-local.json \
  --ignore Upload
```

The `--ignore Upload` flag excludes tests that require S3/LocalStack infrastructure, which is not present in the standard local setup. See api-testing-bruno.md for full details on reading output, adding ignore patterns, and distinguishing regressions from pre-existing failures.
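If you only need raw failure counts and cannot run the checker, the JSON file can be queried directly. This is a sketch assuming the runner emits Bruno's standard JSON reporter layout, where each entry carries a `summary` object; verify the shape with `jq '.[0].summary'` before relying on it:

```sh
# Hypothetical quick tally; adjust the paths if your report shape differs.
jq '[.[].summary] | {failedTests: (map(.failedTests) | add),
                     failedAssertions: (map(.failedAssertions) | add)}' \
  out/operations-local.json
```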
### Step 5: Act on Results

**All non-ignored tests pass:** The component is ready. Produce a summary of the test counts (requests, tests, assertions) and proceed.

**Non-ignored failures present:**
- Review each failure in the script output. It shows the request path, HTTP status, and individual test/assertion errors.
- If a failure is clearly pre-existing and unrelated to your current change, note it and add an `--ignore` pattern for the affected path (a sketch follows this list).
- If a failure is a regression introduced by your change, investigate and fix it before continuing.
- If a failure looks flaky (e.g., an inconsistent HTTP 503 or a timeout on a polling endpoint), wait five seconds and re-run the full test suite.
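For the pre-existing-failure case, the checker invocation grows by one pattern per affected path. The sketch below assumes the checker accepts repeated `--ignore` flags and uses `<known-failing-path>` as a placeholder; see api-testing-bruno.md for the authoritative syntax:

```sh
# Hypothetical: ignore the Upload suite plus one known pre-existing failure.
npx tsx /workspace/instructions/claude/scripts/check-api-results.ts \
  out/operations-local.json \
  --ignore Upload \
  --ignore <known-failing-path>
```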
## Output File Naming Convention

The Makefile targets write output files according to this convention:
| Make Target | JSON Output | HTML Report |
|---|---|---|
| `test-<component>-<env>` | `out/<component>-<env>.json` | `out/<component>-index-<env>.html` |
| `test-<env>` | `out/allresults-<env>.json` | `out/index-<env>.html` |
| `test/<path>-<env>` | `out/<path>-results.json` | `out/<path>-<env>-index.html` |
The HTML report is useful for a visual summary when reviewing results in a browser.
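For example, to open the operations-local report from the `api-test` directory (`open` on macOS, `xdg-open` on most Linux desktops):

```sh
open out/operations-index-local.html        # macOS
# xdg-open out/operations-index-local.html  # Linux
```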
## Generalizing to Other Components

The workflow above applies to any component in the Arda backend stack, provided:
- The component has `make build` and `make localInstall` targets that produce and deploy a container image to the local cluster.
- The `api-test` project contains a collection folder for that component with workflow tests.
- There is a corresponding `test-<component>-<env>` target in the `api-test` Makefile.
If you are adding API tests for a new component for the first time, see api-testing-bruno.md for how to structure the collection, and add the corresponding Makefile target in `api-test/Makefile`.
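As a rough sketch, a new target follows the naming and output conventions above. The recipe body here is illustrative only: the `bru` flags and the `billing` collection path are assumptions, so copy a neighboring target in `api-test/Makefile` for the exact invocation:

```make
# Illustrative only: mirrors the test-<component>-<env> pattern and the
# out/ naming convention documented above. Verify the flags against an
# existing target before using. (Makefile recipes must be tab-indented.)
test-billing-local:
	npx bru run billing --env local \
		--reporter-json out/billing-local.json \
		--reporter-html out/billing-index-local.html
```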
## Related Pages

- api-testing-bruno.md — Bruno test authoring: collection structure, authentication, variables, workflows, and interpreting test results.
- release-lifecycle.md — When to run API tests as part of a multi-repository release sequence.
- Arda workspace repository — Project-specific test configurations and CI pipeline definitions.
Copyright: © Arda Systems 2025-2026, All rights reserved