
Frontend and Backend Testing Patterns

Reference patterns for writing reliable tests across the stack: MockK and Kotest for Kotlin backend code, and Jest with React Testing Library for TypeScript frontend code.

Never pass mockk() inline in constructor calls. If the mock later needs stubbing, you must first refactor it out into a variable. Assign every mock to a named variable, even when no stubs are needed yet.

// Avoid
val service = ItemService.Impl(mockk(), mockk())
// Prefer
val supplierService = mockk<SupplierService>()
val itemDb = mockk<Database>()
val service = ItemService.Impl(supplierService, itemDb)

A relaxed = true mock cannot generate valid default return values for deeply nested generic types such as Result<EntityRecord<X, Y>?>. It falls back to Result.success(Object()), which causes a ClassCastException at runtime.

Always provide explicit coEvery stubs for complex generic returns.

Simplify Matchers to Avoid Reflection Errors


Mocking the Universe interface’s invoke pattern (universe.create(...)()) can cause KotlinReflectionInternalError. Use any<Type>() matchers with returnsMany instead of complex match<> lambdas.

When mocking methods returning DBIO (a suspend lambda), use the lambda form:

// Correct
coEvery { universe.create(any(), any(), any(), any()) } returns { Result.success(record) }
// Incorrect — will not work for DBIO
coEvery { universe.create(any(), any(), any(), any()) } returns Result.success(record)

Verify Test Coverage Before Writing New Tests


Before creating new tests, verify existing test coverage at different layers to avoid duplicating what is already there.

Kotest's shouldBeSuccess() and shouldBeFailure() matchers interpret string arguments as expected values, not assertion messages. When you need additional context in a failure message, assert with result.isSuccess shouldBe true instead.

Use clearMocks() or clearAllMocks() in beforeEach to avoid state leakage between tests.

For LocalStack S3 tests, add retry and backoff logic around bucket creation during startup.
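Retry with exponential backoff is language-agnostic; below is a minimal TypeScript sketch of the idea. The helper name withRetry and its defaults are illustrative, not an existing API.

```typescript
// Sketch: retry a flaky startup step (e.g. LocalStack bucket creation)
// with exponential backoff. `fn` is the operation to retry.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 5,
  baseDelayMs = 100,
): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Backoff doubles each attempt: 100ms, 200ms, 400ms, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** i));
    }
  }
  throw lastErr;
}
```

Wrap the bucket-creation call in withRetry inside the suite's startup hook so transient "connection refused" errors do not fail the whole run.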

Use start() and stop() in beforeSpec/afterSpec (not beforeEach/afterEach) for reliable database integration tests.

For full ContainerizedPostgres patterns — factory methods, init SQL scripts, harness wiring, and lifecycle rules — see Backend Testing.


Frontend Testing (Jest / React Testing Library)

| Purpose | Library |
| --- | --- |
| Test framework | Jest 29 with ts-jest |
| Component rendering | @testing-library/react 16.x |
| User interaction simulation | @testing-library/user-event 14.x |
| Mocking | jest.fn() / jest.mock() (no additional library needed) |
| Test environment | jest-environment-jsdom |
  • Use meaningful test names that describe the scenario and expected behavior, not the method name.
  • Group related scenarios under describe blocks named after the condition (e.g., describe('when API returns empty results', ...)).
  • Use beforeEach with jest.clearAllMocks() to reset state between tests.
  • Prefer screen.getByRole, screen.getByText, screen.getByLabelText over getByTestId.
  • Use await waitFor(() => ...) for assertions on async state changes.
  • Use userEvent.setup() (not fireEvent) for user interactions.

Mocks serve two purposes: isolation AND branch coverage. Every mock should be configured to force the component through a specific code path — not just to prevent side effects.

Define default happy-path mocks at file scope, then override per describe block to target specific branches:

// File-level mock setup
jest.mock('@/lib/ardaClient', () => ({
  queryItems: jest.fn(),
  getItemById: jest.fn(),
}));

import { queryItems, getItemById } from '@/lib/ardaClient';

describe('ItemsPage', () => {
  beforeEach(() => {
    jest.clearAllMocks();
    // Default: happy path with data
    (queryItems as jest.Mock).mockResolvedValue({
      items: [{ id: '1', name: 'Test Item', sku: 'SKU-1' }],
      totalCount: 1,
    });
  });

  describe('when API returns empty results', () => {
    beforeEach(() => {
      (queryItems as jest.Mock).mockResolvedValue({ items: [], totalCount: 0 });
    });

    it('shows empty state message', async () => { /* ... */ });
    it('hides pagination controls', async () => { /* ... */ });
  });

  describe('when API fails with network error', () => {
    beforeEach(() => {
      (queryItems as jest.Mock).mockRejectedValue(new Error('Network error'));
    });

    it('shows error toast', async () => { /* ... */ });
    it('does not crash', async () => { /* ... */ });
  });

  describe('when API returns 401', () => {
    beforeEach(() => {
      (queryItems as jest.Mock).mockRejectedValue({ response: { status: 401 } });
    });

    it('calls handleAuthError', async () => { /* ... */ });
  });
});

Create a mutable mock function for context hooks so each describe can inject different state:

const mockUseAuth = jest.fn().mockReturnValue({
  user: { name: 'Test User', tenantId: 'T1' },
  loading: false,
});

jest.mock('@/contexts/AuthContext', () => ({ useAuth: () => mockUseAuth() }));

describe('when user is loading', () => {
  beforeEach(() => {
    mockUseAuth.mockReturnValue({ user: null, loading: true });
  });

  it('shows loading spinner', () => { /* ... */ });
});

describe('when user is unauthenticated', () => {
  beforeEach(() => {
    mockUseAuth.mockReturnValue({ user: null, loading: false });
  });

  it('redirects to signin', () => { /* ... */ });
});

For components that call APIs multiple times or implement retry logic:

(queryItems as jest.Mock)
  .mockRejectedValueOnce(new Error('fail'))  // First call fails
  .mockResolvedValueOnce({ items: [...] });  // Retry succeeds

For every component under test, systematically create scenarios covering these categories:

  1. Loading states — mock slow or pending responses (e.g., new Promise(() => {}) that never resolves)
  2. Empty states — mock empty arrays, null objects, missing fields
  3. Error states — mock rejected promises, HTTP error codes (401, 403, 500)
  4. Auth states — unauthenticated, expired token, loading auth
  5. Data shape variants — missing optional fields, null nested objects, empty strings
  6. User action branches — every button click, form submission, modal dismiss
  7. Conditional rendering — feature flags, user roles, tenant-specific behavior
  8. Edge cases — very long strings, special characters, zero counts, boundary values
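The loading-state trick in item 1 can be captured in a tiny helper: a promise that never settles keeps the component in its loading branch for the entire test. The helper name and the mocked call in the comment are illustrative.

```typescript
// A promise that never settles; a component awaiting it stays in its
// loading state for the duration of the test.
const neverSettles = <T>(): Promise<T> => new Promise<T>(() => {});

// In a test, hand it to the mocked API call (hypothetical mock):
// (queryItems as jest.Mock).mockReturnValue(neverSettles());
```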

After writing initial tests, use coverage reports to find remaining gaps:

  1. Run tests with --coverage and inspect uncovered line numbers.
  2. Read the source file at those lines to identify the condition guarding that branch.
  3. Create a new describe block with mocks configured to satisfy that condition.
  4. Re-run coverage to confirm the lines are now covered.
  5. Repeat until all reachable branches are exercised.
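Step 1 can be partially automated. The sketch below assumes the per-file shape of Istanbul's coverage-final.json (statementMap plus the s hit counters), which is what Jest emits under --coverage --coverageReporters=json; the function name is ours.

```typescript
// Per-file slice of Istanbul's coverage JSON (only the fields used here).
interface FileCoverage {
  statementMap: Record<string, { start: { line: number } }>;
  s: Record<string, number>; // statement id -> hit count
}

// Return the sorted source lines whose statements were never executed.
function uncoveredLines(cov: FileCoverage): number[] {
  const lines = new Set<number>();
  for (const [id, hits] of Object.entries(cov.s)) {
    if (hits === 0) lines.add(cov.statementMap[id].start.line);
  }
  return [...lines].sort((a, b) => a - b);
}
```

Feed each returned line number back into step 2: read the source at that line and work out which condition guards the branch.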

Before assigning files to agents or investing time in coverage work, run a dead code pre-scan to avoid chasing structurally unreachable branches:

  1. Collect coverage JSON: Run the test suite with --coverage --coverageReporters=json and parse the results.
  2. Identify zero-hit branches: Find functions, branches, or statements that have 0 hits across all existing tests — not merely low coverage, but completely unexercised.
  3. Cross-reference call sites: Search the codebase for actual call sites of the zero-hit functions. If no call site exists in application code (only in the definition itself), the code is structurally unreachable.
  4. Flag in uncovered-lines report: Mark dead-code lines so they are skipped rather than targeted with tests. This saves 15-30 minutes per engineer on files containing unused exports or vestigial branches left behind by removed feature flags.
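Step 2 of the pre-scan can be sketched against the same Istanbul JSON shape (fnMap plus the f counters); cross-referencing call sites (step 3) remains a grep over application code. Types and the function name are ours.

```typescript
// Per-file slice of Istanbul's coverage JSON (function coverage only).
interface FnCoverage {
  fnMap: Record<string, { name: string; decl: { start: { line: number } } }>;
  f: Record<string, number>; // function id -> hit count
}

// Functions with zero hits across all tests: dead-code candidates
// to cross-reference against actual call sites.
function zeroHitFunctions(cov: FnCoverage): { name: string; line: number }[] {
  return Object.entries(cov.f)
    .filter(([, hits]) => hits === 0)
    .map(([id]) => ({
      name: cov.fnMap[id].name,
      line: cov.fnMap[id].decl.start.line,
    }));
}
```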

Tests where the only assertion is confirming the component rendered without throwing are prohibited:

// BAD — "no crash" test with no behavioral assertion
it('renders without crashing', () => {
  render(<MyComponent />);
  expect(screen.getByText('Some Title')).toBeInTheDocument();
});

Every test must verify at least one of:

  • (a) A specific behavior change — clicking a button changes displayed text, triggers navigation, or toggles state.
  • (b) A mock function call with specific arguments — e.g., expect(mockOnSubmit).toHaveBeenCalledWith({ name: 'test' }).
  • (c) A DOM state change — an element appears or disappears, a class is added or removed, a form field becomes disabled.

A single toBeInTheDocument() call confirming static text is not a behavioral assertion. If a test only proves the component rendered without throwing, delete it or add a real assertion.

Tests that wrap assertions inside conditional checks are prohibited. Conditional guards silently pass when the element is absent, never exercising the code under test:

// BAD — if the element doesn't exist, the test passes without asserting anything
const button = screen.queryByRole('button', { name: 'Submit' });
if (button) {
  fireEvent.click(button);
  expect(mockOnSubmit).toHaveBeenCalled();
}

Replace with hard assertions:

// GOOD — fails immediately if the element is missing
const user = userEvent.setup();
const button = screen.getByRole('button', { name: 'Submit' });
await user.click(button);
expect(mockOnSubmit).toHaveBeenCalled();

Rules:

  • Use getBy* (which throws on missing elements) instead of queryBy* + conditional.
  • If the element legitimately might not exist in a scenario, restructure the test setup to ensure it does, or remove the test entirely.
  • Never use if (element), element &&, or element?. to guard fireEvent, userEvent, or expect calls inside a test body.

When planning coverage work across multiple files or engineers, balance assignments by total source lines rather than file count.

Target: 1,500–3,000 source lines per engineer per phase.

| Scenario | Assignment guidance |
| --- | --- |
| A single 2,700-line file | Sole assignment for one engineer; do not pair with other work |
| Two 500-line components | Full-phase job for one engineer |
| Seven 200-line identical components (e.g., a family of form components) | Full-phase job for one engineer |
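One way to implement line-based balancing is a greedy pass: sort files largest first and drop each into the currently lightest bucket. The types and function name below are hypothetical, not from any library.

```typescript
interface FileInfo { path: string; lines: number }

// Greedily distribute files across `engineers` buckets, balancing by
// total source lines rather than file count.
function balanceAssignments(files: FileInfo[], engineers: number): FileInfo[][] {
  const buckets = Array.from({ length: engineers }, () => ({
    total: 0,
    files: [] as FileInfo[],
  }));
  // Largest files first, each into the lightest bucket so far.
  for (const f of [...files].sort((a, b) => b.lines - a.lines)) {
    const lightest = buckets.reduce((min, b) => (b.total < min.total ? b : min));
    lightest.files.push(f);
    lightest.total += f.lines;
  }
  return buckets.map((b) => b.files);
}
```

With the table's examples, a 2,700-line file lands alone in one bucket while the two 500-line components share another, matching the guidance above.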

Estimating required tests:

estimated_tests = (target_coverage - current_coverage) × total_stmts / avg_stmts_per_test

Where avg_stmts_per_test is typically:

  • 3–5 for large, complex files (many branches per test)
  • 8–15 for small, straightforward files (each test covers more statements)

Template-replicable work: Discount identical component families when sizing. An engineer writing one test suite and replicating it with minor modifications finishes much faster than an engineer working on the same line count spread across unique files. Assign 2–3x the normal line count for template-replicable batches.


  • Backend Testing — Kotest, MockK, ContainerizedPostgres, and LocalStack patterns in depth

Copyright: (c) Arda Systems 2025-2026, All rights reserved