
Running API Tests

This guide explains how to build, deploy, and run Bruno API tests for any component against a local Kubernetes cluster, then interpret the results. The workflow applies to any component that has properly labelled tests in the api-test project. The operations component is used as the primary example throughout.

For Bruno test authoring conventions — how to structure collections, write workflows, handle authentication, and use variables — see api-testing-bruno.md.

Before running tests, ensure:

  • Docker and a local Kubernetes cluster (e.g., minikube or k3d) are running.
  • Your kubectl context is pointing at the local cluster.
  • The api-test project is checked out and its dependencies are installed (npm install in the api-test directory).
  • Environment credential files are populated under api-test/1Password/ (see the api-test README for 1Password CLI setup).
  • The component you are testing has make build and make localInstall targets.
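The tooling prerequisites above can be spot-checked with a small shell snippet (a sketch; the tool list is taken from the bullets, adjust it to your setup):

```shell
#!/bin/sh
# Quick preflight: confirm the CLI tools from the checklist above are on PATH.
count=0
for tool in docker kubectl npm; do
  count=$((count + 1))
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "MISSING: $tool"
  fi
done
echo "checked $count tools"

# Show the active kubectl context so you can confirm it points at the local cluster.
command -v kubectl >/dev/null 2>&1 && kubectl config current-context || true
```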

From the component’s repository root:

```shell
make build
```

Verify the build exits successfully before continuing. A failed build will produce a stale or absent image and the install step will either fail or run with old code.
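Since make build exits nonzero on failure, you can chain the two steps so the install never runs against a stale image:

```shell
# Stops before localInstall if the build fails.
make build && make localInstall
```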

For operations:

```shell
# In the operations/ directory
make build
```

```shell
make localInstall
```

After the install completes, confirm the pod is running and has reached Ready status:

```shell
kubectl get pods -n <component-namespace>
```

For operations:

```shell
kubectl get pods -n operations
```

Wait until the pod shows Running and all containers are ready (e.g., 1/1). If the pod is in ContainerCreating or Pending, give it a few seconds and re-check.
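Instead of polling by hand, kubectl wait can block until the pod is Ready. This is a sketch; the label selector app=operations is an assumption, so check the component's actual labels first:

```shell
# Inspect labels: kubectl get pods -n operations --show-labels
kubectl wait --for=condition=Ready pod \
  -l app=operations -n operations --timeout=120s
```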

The api-test Makefile provides targets scoped to each component and environment. For local development, use the local environment.

General pattern:

```shell
# In the api-test/ directory
make test-<component>-<env>
```

For operations against the local cluster:

```shell
# In the api-test/ directory
make test-operations-local
```

If the first run fails immediately after pod startup, this is usually a timing issue — the pod is healthy but has not yet finished initializing its request handlers. Wait five seconds and retry.
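A five-second pause before the retry is usually enough:

```shell
# In the api-test/ directory
sleep 5 && make test-operations-local
```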

| Environment | When to use |
| --- | --- |
| local | Local Kubernetes cluster on your machine |
| dev | Shared development environment |
| ci | CI pipeline (typically driven by automated tooling, not manually) |

The test runner writes a JSON results file to api-test/out/. Parse it with the results checker to get a structured pass/fail summary:

```shell
# In the api-test/ directory
npx tsx /workspace/instructions/claude/scripts/check-api-results.ts \
  out/<results-file>.json \
  --ignore Upload
```

For operations-local:

```shell
# In the api-test/ directory
npx tsx /workspace/instructions/claude/scripts/check-api-results.ts \
  out/operations-local.json \
  --ignore Upload
```

The --ignore Upload flag excludes tests that require S3/LocalStack infrastructure, which is not present in the standard local setup. See api-testing-bruno.md for full details on reading output, adding ignore patterns, and distinguishing regressions from pre-existing failures.

All non-ignored tests pass: The component is ready. Produce a summary of the test counts (requests, tests, assertions) and proceed.

Non-ignored failures present:

  1. Review each failure in the script output. It will show the request path, HTTP status, and individual test/assertion errors.
  2. If a failure is clearly pre-existing and unrelated to your current change, note it and add an --ignore pattern for the affected path.
  3. If a failure is a regression introduced by your change, investigate and fix it before continuing.
  4. If a failure looks flaky (e.g., inconsistent HTTP 503 or timeout on a polling endpoint), wait five seconds and re-run the full test suite.
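For flaky failures, the wait-and-retry step can be wrapped in a small helper. This is a minimal sketch (the target name is the one used throughout this guide); cap the attempts so a real regression still surfaces:

```shell
#!/bin/sh
# Retry a command a fixed number of times with a 5-second pause between attempts.
run_with_retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    "$@" && return 0
    echo "attempt $i failed; retrying in 5s" >&2
    sleep 5
    i=$((i + 1))
  done
  echo "giving up after $attempts attempts" >&2
  return 1
}

# Usage (inside the api-test/ directory):
# run_with_retry 3 make test-operations-local
```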

The Makefile targets write output files according to this convention:

| Make Target | JSON Output | HTML Report |
| --- | --- | --- |
| test-<component>-<env> | out/<component>-<env>.json | out/<component>-index-<env>.html |
| test-<env> | out/allresults-<env>.json | out/index-<env>.html |
| test/<path>-<env> | out/<path>-results.json | out/<path>-<env>-index.html |

The HTML report is useful for a visual summary when reviewing results in a browser.
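The per-component naming convention can be sketched in shell, which is handy when scripting around the results checker (operations/local here are just example values):

```shell
#!/bin/sh
# Derive the expected output paths for the test-<component>-<env> pattern.
component=operations
env=local
json_out="out/${component}-${env}.json"
html_out="out/${component}-index-${env}.html"
echo "JSON report: $json_out"
echo "HTML report: $html_out"
```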

The workflow above applies to any component in the Arda backend stack, provided:

  1. The component has make build and make localInstall targets that produce and deploy a container image to the local cluster.
  2. The api-test project contains a collection folder for that component with workflow tests.
  3. There is a corresponding test-<component>-<env> target in the api-test Makefile.

If you are adding API tests for a new component for the first time, see api-testing-bruno.md for how to structure the collection, and add the corresponding Makefile target in api-test/Makefile.

  • api-testing-bruno.md — Bruno test authoring: collection structure, authentication, variables, workflows, and interpreting test results.
  • release-lifecycle.md — When to run API tests as part of a multi-repository release sequence.
  • Arda workspace repository — Project-specific test configurations and CI pipeline definitions.

Copyright: (c) Arda Systems 2025-2026, All rights reserved