# API Testing with Bruno
Bruno is the API testing tool used at Arda for both manual exploration and automated integration tests in CI/CD pipelines.
## Installation

GUI:

```shell
brew install bruno
```

CLI:

```shell
# Global
npm install -g @usebruno/cli

# Local (project dependency)
npm install --save-dev @usebruno/cli
```

## Folder Structure

```
api-test/
├─ 1Password/          # Environment files populated by the 1Password CLI
│  ├─ local.env
│  ├─ dev.env
│  ├─ stage.env
│  └─ prod.env
├─ bruno.json
├─ environments/
│  └─ ci.bru
├─ .env                # Other local env values (gitignored)
├─ collections/        # Root for all collections
│  ├─ partitionCheck
│  └─ <module>
│     ├─ routes/       # Example requests, tagged as disabled
│     └─ workflows/    # Executable integration tests
├─ scripts/            # TypeScript helpers for tests
├─ types/
│  └─ bruno.d.ts
└─ resources/          # Shared resources
```

Collections are folders. Routes are non-executable examples. Workflows are executable tests.
## Environments

A single environment file in `environments/` holds values that depend on regular environment variables. Those environment variables are defined in `*.env` files in the `1Password/` folder and populated by the 1Password CLI. See the README in the api-test repository.
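As a minimal sketch of what such an environment file can look like, assuming Bruno's `{{process.env.NAME}}` syntax for reading process environment variables (the variable names here are illustrative, not the actual contents of `ci.bru`):

```
vars {
  baseUrl: {{process.env.API_BASE_URL}}
  client_id: {{process.env.OAUTH_CLIENT_ID}}
  client_secret: {{process.env.OAUTH_CLIENT_SECRET}}
}
```

Requests can then reference `{{baseUrl}}` and friends without ever committing secrets to the repository.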
## Headers and Request Variables

Define shared variables at the highest folder level possible and let lower-level requests inherit them.
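For example, a folder-level `folder.bru` file (a sketch; the header value is illustrative) can declare a header once so every request beneath that folder inherits it:

```
headers {
  content-type: application/json
}
```

Individual requests only need to declare values that differ from the inherited defaults.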
## Example Requests

Every module (API endpoint) should provide an example request for each defined route. These requests:

- Are tagged as `disabled` to prevent execution by CI pipelines.
- Serve to document the expected behavior.
- Can be created by importing the OpenAPI spec for the relevant API endpoint.
## Workflows

Workflows are sequences of requests that implement features and act as integration tests.

- Each workflow should be executable as a unit.
- Requests are parameterized and connected through variables set in pre- and post-request scripts using `bru.setVar(<name>, <value>)`.
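As a sketch of this chaining (the request and variable names are hypothetical), a creating request can store an ID in its post-response script:

```
script:post-response {
  // Make the created order's ID available to later requests in the workflow
  bru.setVar("orderId", res.body.id);
}
```

A later request in the same workflow can then reference it, e.g. `url: {{baseUrl}}/orders/{{orderId}}`.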
## Authentication

### Bearer Token

Use the Auth tab (collection level preferred), or set the header directly:

```
headers {
  Authorization: Bearer {{accessToken}}
}
```

### JWT with OAuth2: GUI Setup

Since Bruno v2, OAuth2 is built in and supports collection, folder, or request scoping. The GUI approach is the easiest starting point when working interactively.
Setup (collection-level recommended):

- Open your collection's Auth tab and choose OAuth 2.0 (v2).
- Select your grant type (e.g., Client Credentials). Fill in the token URL, `client_id`, and `client_secret`, using environment variable references such as `{{client_id}}`.
- Enable Auto-fetch (and Auto-refresh if your provider supports refresh tokens).
- Set a Token ID (e.g., `idp`).

Bruno injects the access token into requests automatically via the Inherit auth mode on child folders and requests. You can also reference the token directly in headers or scripts:

```
# In a header
Authorization: Bearer {{$oauth2.idp.access_token}}
```

```
// In a script
const token = bru.getOauth2CredentialVar('$oauth2.idp.access_token');
```

### JWT with OAuth2: Script Setup (Recommended for CLI/CI)
Create a request that calls the token endpoint and stores the token:

```
meta {
  name: OAuth token (client_credentials)
  type: http
  seq: 0
}

post {
  url: {{oauth_token_url}}
}

headers {
  content-type: application/x-www-form-urlencoded
}

body:form-urlencoded {
  grant_type: client_credentials
  client_id: {{client_id}}
  client_secret: {{client_secret}}
  scope: {{scope}}
}

script:post-response {
  bru.setVar("access_token", res.body.access_token);
}
```

Reference `{{access_token}}` in subsequent requests.
### JWT with OAuth2: Native Config in .bru File

```
auth {
  mode: oauth2
}

auth:oauth2 {
  grant_type: client_credentials
  access_token_url: {{oauth_token_url}}
  client_id: {{client_id}}
  client_secret: {{client_secret}}
  scope: {{scope}}
  add_token_to: header
  header_prefix: Bearer
  auto_fetch_token: true
  auto_refresh_token: true
}
```

## Conditionals and Loops
### Loop Over IDs

```
script:pre-request {
  const ids = [1, 2, 3, 4];
  for (const id of ids) {
    bru.setVar("userId", id);
    await bru.runRequest("api/users/GET user by id");
  }
}
```

### Conditional Flow with setNextRequest

```
tests {
  test("follow until done", function () {
    if (res.body.status === "PENDING") {
      bru.setNextRequest("api/workflows/POLL order status");
    } else {
      bru.setNextRequest(null);
      expect(["COMPLETED", "FAILED"]).to.include(res.body.status);
    }
  });
}
```

Avoid infinite loops when using dynamic request hopping.
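One way to guard against endless hops (a sketch; the cap and variable name are illustrative) is to count attempts in a variable and stop after a limit:

```
tests {
  test("poll with a retry cap", function () {
    // Stop re-queueing the poll request after 10 attempts
    const attempts = Number(bru.getVar("pollAttempts") || 0);
    if (res.body.status === "PENDING" && attempts < 10) {
      bru.setVar("pollAttempts", attempts + 1);
      bru.setNextRequest("api/workflows/POLL order status");
    } else {
      bru.setNextRequest(null);
    }
  });
}
```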
## Shared Scripts

In `bruno/bruno.json`:

```json
{
  "scripts": {
    "additionalContextRoots": ["./scripts"]
  }
}
```

In `bruno/scripts/utils.js`:

```javascript
exports.pickRandom = (arr) => arr[Math.floor(Math.random() * arr.length)];
```

Usage in a request:

```
script:pre-request {
  const { pickRandom } = require("utils.js");
  bru.setVar("userId", pickRandom([10, 11, 12]));
}
```

## Command Line
From the root of the collection (`api-test/`):

```shell
# Run all requests
bru run

# Run a subfolder or single request
bru run api/users
bru run api/users/"GET user by id.bru"

# Pick an environment and inject additional env vars
bru run --env local --env-var userId=123

# Stop on first failure, add delay between requests
bru run --bail --delay 200

# Generate CI reports
bru run --reporter-json out/results.json \
  --reporter-junit out/results.xml \
  --reporter-html out/results.html
```

## Key CLI Options
### Core and setup

| Option | Description |
|---|---|
| `--env <name>` | Pick an environment |
| `--env-var <k=v>` | Override a variable (repeatable) |
| `--env-file <path>` | Use a specific `.bru` env file |
| `--sandbox <developer\|safe>` | Choose JS sandbox (default: `developer`) |

### Data and iteration

| Option | Description |
|---|---|
| `--csv-file-path <file>` | Drive iteration from a CSV file |
| `--json-file-path <file>` | Drive iteration from a JSON file |
| `--iteration-count <n>` | Number of times to repeat the collection |
| `-r` | Recurse into subfolders |

### Request selection and flow

| Option | Description |
|---|---|
| `--grep <pattern>` | Run only requests whose names match the pattern |
| `--tests-only` | Run only requests that have tests |
| `--bail` | Stop on first failure |
| `--tags <t1,t2>` | Run requests with all of these tags |
| `--exclude-tags <t1,t2>` | Skip requests with any of these tags |
| `--delay <ms>` | Delay in milliseconds between requests |
| `--parallel` | Run requests in parallel |

### SSL, security, and networking

| Option | Description |
|---|---|
| `--cacert <file>` | Add a CA certificate |
| `--ignore-truststore` | Use only your custom CA(s); ignore the system trust store |
| `--client-cert-config` | Client certificate config (mTLS) |
| `--insecure` | Allow insecure server connections |
| `--disable-cookies` | Do not persist or send cookies |
| `--noproxy` | Disable Bruno and system proxies |

## Exit Codes
Section titled “Exit Codes”| Exit Code | Meaning |
|---|---|
| 0 | Success |
| 1 | A request/test/assertion failed |
| 2 | Output dir does not exist |
| 3 | Request chain looped endlessly |
| 4 | Called outside a collection root |
| 5 | Input file does not exist |
| 6 | Environment does not exist |
| 7 | `--env-var` value is not a string or object |
| 8 | `--env-var` value is malformed (cannot be parsed) |
| 9 | Invalid output format specified |
| 255 | Other error |
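In a CI wrapper it can help to translate these codes into messages. A minimal sketch (the helper name is hypothetical; the mapping mirrors the table above):

```javascript
// Hypothetical helper: map a bru exit code to its documented meaning.
const BRU_EXIT_MEANINGS = {
  0: "Success",
  1: "A request/test/assertion failed",
  2: "Output dir does not exist",
  3: "Request chain looped endlessly",
  4: "Called outside a collection root",
  5: "Input file does not exist",
  6: "Environment does not exist",
  7: "--env-var value is not a string or object",
  8: "--env-var value is malformed (cannot be parsed)",
  9: "Invalid output format specified",
};

function describeBruExit(code) {
  // Any code not in the table (e.g. 255) falls through to "Other error"
  return BRU_EXIT_MEANINGS[code] ?? "Other error";
}

console.log(describeBruExit(3)); // Request chain looped endlessly
console.log(describeBruExit(255)); // Other error
```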
## OpenAPI Import

```shell
bru import openapi --source <spec.yaml|URL> \
  --output <dir> \
  --collection-name "My Collection"
```

## Reading and Interpreting Test Results
After running a test suite (see running-api-tests.md for the full workflow), the Bruno CLI writes JSON output to `api-test/out/`. Use the results parser to produce a structured pass/fail summary.
### Output File Naming

| Make Target | JSON Output | HTML Report |
|---|---|---|
| `test-operations-<env>` | `out/operations-<env>.json` | `out/operations-index-<env>.html` |
| `test-<env>` | `out/allresults-<env>.json` | `out/index-<env>.html` |
| `test/<path>-<env>` | `out/<path>-results.json` | `out/<path>-<env>-index.html` |

Common environments: `local`, `dev`, `ci`.
### Running the Parser

```shell
# In the api-test/ directory
npx tsx /workspace/instructions/claude/scripts/check-api-results.ts \
  out/<results-file>.json \
  [--ignore <pattern>] [--ignore <pattern>]
```

The script exits with code 0 if all non-ignored tests passed, or 1 if there are real failures. It produces a structured summary with:

- Overall request, test, and assertion counts.
- A detailed failure report: request path, HTTP response status, and individual test/assertion errors.
- A list of ignored failures matched by `--ignore` patterns.
- A final `PASS` or `FAIL` verdict.
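To illustrate the kind of tally behind that verdict, here is a standalone sketch (the record shape below is a hypothetical placeholder, not the actual reporter schema):

```javascript
// Hypothetical per-request records for illustration only;
// the real Bruno reporter JSON is richer and shaped differently.
const report = [
  { path: "collections/users/workflows/Create user", status: "pass" },
  { path: "collections/files/workflows/Upload image", status: "fail" },
  { path: "collections/orders/workflows/Create order", status: "pass" },
];

// Tally failures and derive the overall verdict
const failed = report.filter((r) => r.status === "fail");
console.log(`${report.length - failed.length} passed, ${failed.length} failed`);
console.log(failed.length === 0 ? "PASS" : "FAIL");
```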
### Ignore Patterns

The `--ignore` flag takes a case-insensitive substring matched against the full request path. Pass multiple `--ignore` flags to exclude more than one path:

```shell
npx tsx /workspace/instructions/claude/scripts/check-api-results.ts \
  out/operations-local.json \
  --ignore Upload \
  --ignore Drafts
```

### Standard Ignores
Section titled “Standard Ignores”| Pattern | Reason |
|---|---|
| `Upload` | Requires S3/LocalStack infrastructure not always present in local environments |
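The matching rule described above (case-insensitive substring against the full request path) can be sketched as follows; this is a reimplementation for illustration, not the parser's actual code:

```javascript
// Returns true when any pattern appears (case-insensitively)
// anywhere in the request path.
const isIgnored = (requestPath, patterns) =>
  patterns.some((p) => requestPath.toLowerCase().includes(p.toLowerCase()));

console.log(isIgnored("collections/files/workflows/Upload image", ["upload"])); // true
console.log(isIgnored("collections/orders/workflows/Create order", ["Upload"])); // false
```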
### Responding to Failures

When non-ignored failures appear in the output:

- **Pre-existing, unrelated failure:** Note it and add an `--ignore` pattern for the affected path. Document why it is being skipped.
- **Regression from your current change:** Investigate and fix it before marking the task done.
- **Flaky failure** (e.g., inconsistent HTTP 503, pod startup timing): Wait five seconds and re-run the full suite.
Copyright: © Arda Systems 2025-2026, All rights reserved