
API Testing with Bruno

Bruno is the API testing tool used at Arda for both manual exploration and automated integration tests in CI/CD pipelines.

GUI:

brew install bruno

CLI:

# Global
npm install -g @usebruno/cli
# Local (project dependency)
npm install --save-dev @usebruno/cli

The api-test repository is laid out as follows:

api-test/
├─ 1Password/          # Environment files populated by 1Password CLI
│  ├─ local.env
│  ├─ dev.env
│  ├─ stage.env
│  └─ prod.env
├─ bruno.json
├─ environments/
│  └─ ci.bru
├─ .env                # Other local env values (gitignored)
├─ collections/        # Root for all collections
│  ├─ partitionCheck
│  └─ <module>
│     ├─ routes/       # Example requests, tagged as disabled
│     └─ workflows/    # Executable integration tests
├─ scripts/            # TypeScript helpers for tests
├─ types/
│  └─ bruno.d.ts
└─ resources/          # Shared resources

Collections are folders. Routes are non-executable examples. Workflows are executable tests.

A single environment file in environments/ holds values that depend on regular environment variables. The environment variables are defined in *.env files in the 1Password/ folder and populated by the 1Password CLI. See the README file in the api-test repository.

Define shared variables at the highest folder level possible and let lower-level requests inherit them.

Every module (API endpoint) should provide an example request for each defined route. These requests:

  • Are tagged as disabled to prevent execution by CI pipelines.
  • Serve to document the expected behavior.
  • Can be created by importing the OpenAPI spec for the relevant API endpoint.

Workflows are sequences of requests that implement features and act as integration tests.

  • Each workflow should be executable as a unit.
  • Requests are parameterized and connected through variables set in pre and post scripts using bru.setVar(<name>, <value>).

Use the Auth tab (collection level preferred), or set the Authorization header explicitly:

headers {
  Authorization: Bearer {{accessToken}}
}

Since Bruno v2, OAuth2 is built in and supports collection, folder, or request scoping. The GUI approach is the easiest starting point when working interactively.

Setup (collection-level recommended):

  1. Open your collection’s Auth tab and choose OAuth 2.0 (v2).
  2. Select your grant type (e.g., Client Credentials). Fill in the token URL, client_id, and client_secret — use environment variable references such as {{client_id}}.
  3. Enable Auto-fetch (and Auto-refresh if your provider supports refresh tokens).
  4. Set a Token ID (e.g., idp).

Bruno injects the access token into requests automatically via the Inherit auth mode on child folders and requests. You can also reference the token directly in scripts or headers:

# In a header
Authorization: Bearer {{$oauth2.idp.access_token}}

# In a script
const token = bru.getOauth2CredentialVar('$oauth2.idp.access_token');

JWT with OAuth2: Script Setup (Recommended for CLI/CI)

Create a request that calls the token endpoint and stores the token:

users/00-get-oauth-token.bru

meta {
  name: OAuth token (client_credentials)
  type: http
  seq: 0
}

post {
  url: {{oauth_token_url}}
  body: form-urlencoded
}

headers {
  content-type: application/x-www-form-urlencoded
}

body:form-urlencoded {
  grant_type: client_credentials
  client_id: {{client_id}}
  client_secret: {{client_secret}}
  scope: {{scope}}
}

script:post-response {
  bru.setVar("access_token", res.body.access_token);
}

Reference {{access_token}} in subsequent requests.

JWT with OAuth2: Native Config in .bru File

auth {
  mode: oauth2
}

auth:oauth2 {
  grant_type: client_credentials
  access_token_url: {{oauth_token_url}}
  client_id: {{client_id}}
  client_secret: {{client_secret}}
  scope: {{scope}}
  add_token_to: header
  header_prefix: Bearer
  auto_fetch_token: true
  auto_refresh_token: true
}

Pre-request scripts can drive several requests in a loop with bru.runRequest():

script:pre-request {
  const ids = [1, 2, 3, 4];
  for (const id of ids) {
    bru.setVar("userId", id);
    await bru.runRequest("api/users/GET user by id");
  }
}

Tests can chain requests dynamically with bru.setNextRequest():

tests {
  test("follow until done", function () {
    if (res.body.status === "PENDING") {
      bru.setNextRequest("api/workflows/POLL order status");
    } else {
      bru.setNextRequest(null);
      expect(["COMPLETED", "FAILED"]).to.include(res.body.status);
    }
  });
}

Avoid infinite loops when using dynamic request hopping.

In api-test/bruno.json:

{
  "scripts": {
    "additionalContextRoots": ["./scripts"]
  }
}

In api-test/scripts/utils.js:

exports.pickRandom = (arr) => arr[Math.floor(Math.random() * arr.length)];

Usage in a request:

script:pre-request {
  const { pickRandom } = require("utils.js");
  bru.setVar("userId", pickRandom([10, 11, 12]));
}

From the root of the collection (api-test/):

# Run all requests
bru run
# Run a subfolder or single request
bru run api/users
bru run api/users/"GET user by id.bru"
# Pick an environment and inject additional env vars
bru run --env local --env-var userId=123
# Fail fast, add delay between requests
bru run --fail-fast --delay-request 200
# Generate CI reports
bru run --reporter-json out/results.json \
  --reporter-junit out/results.xml \
  --reporter-html out/results.html

Environment selection:

--env <name>                 Pick an environment
--env-var <k=v>              Override a variable (repeatable)
--env-file <path>            Use a specific .bru env file
--sandbox <developer|safe>   Choose JS sandbox (default: developer)

Data-driven iteration:

--csv-file-path <file>    Drive iteration from a CSV file
--json-file-path <file>   Drive iteration from a JSON file
--iteration-count <n>     Number of times to repeat the collection
-r                        Recurse into subfolders

Filtering and execution:

--grep <pattern>          Run only requests whose names match the pattern
--tests-only              Run only requests that have tests
--bail                    Stop on first failure
--tags <t1,t2>            Run requests with all these tags
--exclude-tags <t1,t2>    Skip requests with any of these tags
--delay <ms>              Delay in milliseconds between requests
--parallel                Run requests in parallel

TLS, cookies, and proxies:

--cacert <file>           Add a CA certificate
--ignore-truststore       Use only your custom CA(s), ignore the system trust store
--client-cert-config      Client certificate config (mTLS)
--insecure                Allow insecure server connections
--disable-cookies         Do not persist or send cookies
--noproxy                 Disable Bruno and system proxies

Exit Code   Meaning
0           Success
1           A request/test/assertion failed
2           Output dir does not exist
3           Request chain looped endlessly
4           Called outside a collection root
5           Input file does not exist
6           Environment does not exist
7           --env-var value is not a string or object
8           --env-var value is malformed (cannot be parsed)
9           Invalid output format specified
255         Other error

To generate requests from an OpenAPI spec:

bru import openapi --source <spec.yaml|URL> \
  --output <dir> \
  --collection-name "My Collection"

After running a test suite (see running-api-tests.md for the full workflow), the Bruno CLI writes JSON output to api-test/out/. Use the results parser to produce a structured pass/fail summary.

Make Target             JSON Output                   HTML Report
test-operations-<env>   out/operations-<env>.json     out/operations-index-<env>.html
test-<env>              out/allresults-<env>.json     out/index-<env>.html
test/<path>-<env>       out/<path>-results.json       out/<path>-<env>-index.html

Common environments: local, dev, ci.

# In the api-test/ directory
npx tsx /workspace/instructions/claude/scripts/check-api-results.ts \
  out/<results-file>.json \
  [--ignore <pattern>] [--ignore <pattern>]

The script exits with code 0 if all non-ignored tests passed, or 1 if there are real failures. It produces a structured summary with:

  • Overall request, test, and assertion counts.
  • Detailed failure report: request path, HTTP response status, and individual test/assertion errors.
  • List of ignored failures matched by --ignore patterns.
  • Final PASS or FAIL verdict.

The --ignore flag takes a case-insensitive substring matched against the full request path. Pass multiple --ignore flags to exclude more than one path:

npx tsx /workspace/instructions/claude/scripts/check-api-results.ts \
  out/operations-local.json \
  --ignore Upload \
  --ignore Drafts

Pattern   Reason
Upload    Requires S3/LocalStack infrastructure not always present in local environments

When non-ignored failures appear in the output:

  1. Pre-existing, unrelated failure: Note it and add an --ignore pattern for the affected path. Document why it is being skipped.
  2. Regression from your current change: Investigate and fix it before marking the task done.
  3. Flaky failure (e.g., inconsistent HTTP 503, pod startup timing): Wait five seconds and re-run the full suite.