
Back End Implementation Patterns

Implementation patterns used in the back-end development of Arda Systems, organized by component layer plus cross-cutting/general concerns. References are intentionally mixed: “Docs” entries point to the technical documentation, while “Code” entries point to the common-module/operations implementation or to supporting code for the patterns.

General Patterns

Modeling

  • Values vs Entities — Model identity-bearing, mutable state as Entities and pure immutable data as Values; use this split to drive module boundaries, persistence decisions, and API shapes. Docs: Entities, Value types.
  • Journalled entities — Treat changes as an immutable record lineage to support auditability and “as-of” queries. Docs: Journalled Entities Specification.
  • Inter-entity relationship patterns — Choose explicit relationship styles (extension, aggregation, composition/parent-child, association entity) and implement them via references + universes/queries. Docs: Entities.
  • Entity references (pinned vs floating) + ADN — Standardize cross-module references as (a) structured reference URIs and/or (b) ADN strings; use pinned references for immutable version addressing and floating references for “latest-as-of context” semantics. Docs: Entity References, Arda Domain Name (ADN).
  • Reference data lifecycle + Edit–Draft–Publish — Separate “draft editing” from bitemporal history; publishing creates new history records. Docs: Reference Data Management, Edit-Draft-Publish of Reference Data.
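The Values-vs-Entities split and pinned/floating references above could be sketched in plain Kotlin roughly as follows. All type and field names here are illustrative assumptions, not the actual Arda types:

```kotlin
// Values are immutable data compared by content; Entities carry a stable
// identity and mutable state, and are compared by that identity.
data class Money(val amount: Long, val currency: String)           // Value

data class EntityId(val value: String)

class Item(val id: EntityId, var name: String, var price: Money) { // Entity
    override fun equals(other: Any?) = other is Item && other.id == id
    override fun hashCode() = id.hashCode()
}

// Pinned references address one immutable version of an entity; floating
// references resolve to "latest as of some context".
sealed interface EntityRef {
    data class Pinned(val id: EntityId, val version: Int) : EntityRef
    data class Floating(val id: EntityId) : EntityRef
}

fun main() {
    val a = Item(EntityId("item-1"), "Bolt", Money(100, "USD"))
    val b = Item(EntityId("item-1"), "Bolt (renamed)", Money(120, "USD"))
    println(a == b) // true: same identity, despite different state
}
```

The identity-based `equals` on the entity is what makes the split operational: persistence and API layers can treat `Item` rows as "the same thing over time" while `Money` stays a freely copyable value.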

Runtime Component: Wiring, Configuration

  • Runtime containment model — Structure runtime as Infrastructure → Partition → Component; use this to reason about isolation and naming.
  • Composite config loading — Load and layer configuration from multiple files/directories via -Darda.config.location=... over Typesafe defaults.
  • Typed configuration accessors — Centralize reading/validation of component/module configuration (including auth and datasource defaults).
  • Endpoint locator values — Carry endpoint identity (baseUrl, version, name, resource) as data and validate compatibility with strictValidate vs floatingValidate.
  • Ktor module composition — Compose multiple endpoints under a module root, consistently applying secure vs unsecure routes and installing context plugins for secure routes.
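The endpoint-locator idea (endpoint identity carried as data, with strict vs floating compatibility) might look like the following sketch. The field names and the exact semantics of `strictValidate`/`floatingValidate` are assumptions for illustration:

```kotlin
// Endpoint identity as a plain value, so it can be loaded from config,
// compared, and logged like any other data.
data class EndpointLocator(
    val baseUrl: String,
    val version: Int,
    val name: String,
    val resource: String,
)

// strictValidate: every field must match exactly.
fun strictValidate(expected: EndpointLocator, actual: EndpointLocator): Boolean =
    expected == actual

// floatingValidate (assumed semantics): same endpoint, any version at or
// above the expected one is accepted.
fun floatingValidate(expected: EndpointLocator, actual: EndpointLocator): Boolean =
    expected.name == actual.name &&
        expected.resource == actual.resource &&
        actual.version >= expected.version

fun main() {
    val want = EndpointLocator("http://items", 2, "items", "/items")
    val have = want.copy(version = 3)
    println(strictValidate(want, have))   // false
    println(floatingValidate(want, have)) // true
}
```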

Cross-Cutting Aspects: Auth, Logging, …

  • Authentication abstraction — Unify bearer key and JWT auth under one Authentication interface (with composite auth for “try multiple schemes”).
  • Request → ApplicationContext injection — Build an ApplicationContext from request headers + principal + call id and inject it into coroutine context for downstream code.
  • Standard error responses — Normalize thrown Throwable to structured HTTP errors via StatusPages and ErrorResponse.
  • Structured logging + trace correlation — Use call ids + MDC propagation to correlate logs across request handling.
  • Performance logging — Emit per-request timing as JSON CallPerfData for lightweight latency visibility.
  • Standard JSON configuration — Centralize JSON behaviors (ignore unknown keys, contextual serializers, no discriminator) for consistent wire/persistence encoding.
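The authentication abstraction with composite "try multiple schemes" could be sketched as below. The `Authentication` shape and scheme names are assumptions; a JWT scheme would be another implementation of the same interface:

```kotlin
// One interface for all auth schemes; each scheme either yields a principal
// or declines by returning null.
data class Principal(val subject: String)

fun interface Authentication {
    fun authenticate(headers: Map<String, String>): Principal?
}

// Bearer-key scheme: look the presented token up in a known-keys table.
class BearerKeyAuth(private val keys: Map<String, String>) : Authentication {
    override fun authenticate(headers: Map<String, String>): Principal? {
        val header = headers["Authorization"] ?: return null
        if (!header.startsWith("Bearer ")) return null
        return keys[header.removePrefix("Bearer ")]?.let { Principal(it) }
    }
}

// Composite: try each scheme in order, accepting the first that succeeds.
class CompositeAuth(private val schemes: List<Authentication>) : Authentication {
    override fun authenticate(headers: Map<String, String>): Principal? =
        schemes.firstNotNullOfOrNull { it.authenticate(headers) }
}

fun main() {
    val auth = CompositeAuth(listOf(
        BearerKeyAuth(mapOf("secret-1" to "service-a")),
        // A JWT scheme would slot in here as another Authentication.
    ))
    println(auth.authenticate(mapOf("Authorization" to "Bearer secret-1")))
}
```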

Building and DevOps

Persistence

  • Bitemporal persistence — Store effective + recorded time for each record and query “as-of” any point in either timeline.
  • DBIO effect type — Represent DB work as composable DBIO/TDBIO values to separate describing vs executing and enable transactional composition.
  • Universe repository pattern — Implement persistence access as a Universe<Payload, Metadata> with persistence mapping + validator + universal condition(s).
  • Validation layering — Keep invariants close to data (EntityPayload.validate) and collection/scoping rules in the universe validator/orchestrator.
  • Scoping (tenant/parent) as a universal condition — Enforce scoping by metadata + universal filter + validator checks.
  • Parent–child persistence (ordered vs unordered) — Implement child collections as universes scoped to their parent; use ranked ordering only when users need explicit ordering semantics.
  • Exposed mapping patterns — Define persistence with table configuration + record mapping + universe + migrations.
  • Draft persistence (out-of-line drafts table) — Store drafts separately with (entity_id, tenant_id) uniqueness and JSON payload/metadata to support Edit–Draft–Publish.
  • Transaction bubble helpers — Use inTransaction to consistently create/reuse Exposed transactions and propagate ApplicationContext across coroutines.
  • DB bootstrap + migrations — Standardize DB creation via Hikari/Exposed (DataSource.newDb) and migrations via Flyway (DbMigration).
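The core of bitemporal persistence, stripped of the Exposed/universe machinery, is two time intervals per row and an "as-of" query that fixes a point on each timeline. A minimal in-memory sketch, with assumed column names:

```kotlin
import java.time.Instant

// Each row carries an effective-time interval (business validity) and a
// recorded-time interval (when the system believed it).
data class BitemporalRow<T>(
    val payload: T,
    val effectiveFrom: Instant, val effectiveTo: Instant,
    val recordedFrom: Instant, val recordedTo: Instant,
)

// "As-of" query: pick the row valid at the given effective time, as known
// at the given recorded time. Intervals are half-open [from, to).
fun <T> List<BitemporalRow<T>>.asOf(effective: Instant, recorded: Instant): T? =
    firstOrNull {
        effective >= it.effectiveFrom && effective < it.effectiveTo &&
            recorded >= it.recordedFrom && recorded < it.recordedTo
    }?.payload

fun main() {
    val far = Instant.parse("9999-01-01T00:00:00Z")
    val jan = Instant.parse("2024-01-01T00:00:00Z")
    val mar = Instant.parse("2024-03-01T00:00:00Z")
    val rows = listOf(
        // Price recorded in January, superseded by a March correction.
        BitemporalRow("price=100", jan, far, jan, mar),
        BitemporalRow("price=110", jan, far, mar, far),
    )
    val feb = Instant.parse("2024-02-01T00:00:00Z")
    println(rows.asOf(effective = feb, recorded = feb)) // price=100
    println(rows.asOf(effective = feb, recorded = far)) // price=110
}
```

The correction never overwrites history: what the system believed in February remains queryable by pinning the recorded-time axis.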

Business

  • Entity local behavior — Implement validations and small invariants on the payload/metadata types themselves (no external calls).
  • Domain lifecycle modeling — Model lifecycles explicitly (states, events, transitions) for entities that have more than CRUD behavior.
  • Lifecycle mixins in services (example) — Define lifecycle interfaces (e.g., KanbanCardLifecycle, KanbanCardPrintLifecycle) and provide shared implementations that translate events into bitemporal updates + derived “details” payloads.
  • Proto-to-domain mapping layer (CSV ingestion) — Convert validated protobuf messages to domain objects via explicit mappers, isolating wire/schema evolution from domain evolution.
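"Entity local behavior" means invariants live on the payload type itself and never make external calls; callers collect the violations. A minimal sketch, with an assumed payload type and error style:

```kotlin
// Invariants stay next to the data they constrain; the method is pure and
// returns all violations rather than throwing on the first one.
data class KanbanCardPayload(val title: String, val quantity: Int) {
    fun validate(): List<String> = buildList {
        if (title.isBlank()) add("title must not be blank")
        if (quantity <= 0) add("quantity must be positive")
    }
}

fun main() {
    println(KanbanCardPayload("", 0).validate())
    println(KanbanCardPayload("Bolt bin", 25).validate()) // []
}
```

Collection-level and scoping rules (uniqueness within a tenant, parent existence, and the like) belong in the universe validator, per the persistence section above, not on the payload.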

Service

  • Layered module design (protocol adaptor/service/persistence/proxy) — Use services as transaction boundaries and orchestration points over one or more universes.
  • Data authority service pattern (CRUDQ over a universe) — Provide a consistent service façade that wraps universe operations in transactions and returns stable API DTOs.
  • Editable data authority service (draft + publish semantics) — Require drafts for updates and prevent delete with active draft; close draft on publish.
  • Service notifications and observers — Emit DataAuthorityNotification events on successful mutations and allow observer registration; use as a lightweight pub/sub for change propagation.
  • Action abstraction for workflows — Model non-trivial operations as Action/TargetedAction/EntityAction with explicit validation, author/effective time, and optional context wrappers (e.g. “in transaction”).
  • Reusable state engine — Implement state machines declaratively (states, transitions, guards, entry/exit actions) and generate executors.
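A declarative state engine of the kind described (states, events, transitions, guards, executed generically) could be sketched as follows; the state and event names are illustrative, not the actual Arda lifecycle:

```kotlin
// Transitions declared as data: from-state, triggering event, guard, to-state.
data class Transition<S, E>(
    val from: S,
    val on: E,
    val to: S,
    val guard: () -> Boolean = { true },
)

// Generic executor: fires an event if a matching, guarded transition exists.
class StateEngine<S, E>(initial: S, private val transitions: List<Transition<S, E>>) {
    var state: S = initial
        private set

    fun fire(event: E): Boolean {
        val t = transitions.firstOrNull { it.from == state && it.on == event && it.guard() }
            ?: return false
        state = t.to
        return true
    }
}

enum class CardState { CREATED, PRINTED, IN_USE, RETIRED }
enum class CardEvent { PRINT, ACTIVATE, RETIRE }

fun main() {
    val engine = StateEngine(CardState.CREATED, listOf(
        Transition(CardState.CREATED, CardEvent.PRINT, CardState.PRINTED),
        Transition(CardState.PRINTED, CardEvent.ACTIVATE, CardState.IN_USE),
        Transition(CardState.IN_USE, CardEvent.RETIRE, CardState.RETIRED),
    ))
    println(engine.fire(CardEvent.PRINT)) // true
    println(engine.state)                 // PRINTED
}
```

Entry/exit actions would hang off the same declaration, which is what lets the lifecycle mixins translate fired events into bitemporal updates without each service re-implementing transition logic.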

APIs (including Rest and CSV upload)

REST APIs

  • Standard REST endpoint routing + docs — Mount endpoints under a module root path and expose OpenAPI JSON + Swagger UI + Redoc routes consistently.
  • Endpoint configurator interface — Express endpoints as “configure secure/unsecure routes” units for composition and testability.
  • Data authority REST endpoint template — Use reify(...) factories to capture serializers and build consistent CRUDQ routes + OpenAPI specs with minimal boilerplate.
  • Filtering/query endpoint pattern — Provide /query routes using Query DSL with page cursors for stable pagination.
  • Declarative route/spec co-definition (ServiceEndpointDsl) — Define route trees with typed request/response messages and auto-derived OpenAPI operation ids.
  • Standard request parsing (RequestUtils) — Use requireHeader/requireBody/requirePathParam etc. returning Result to keep route handlers small and error handling consistent.
  • OpenAPI spec helpers + standard headers — Use EndpointSpec.withErrorResponsesAnd and StandardHeaders to keep OpenAPI specs consistent across endpoints.
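The `RequestUtils` style of standard request parsing (extractors that return `Result` so handlers stay small) might look like this sketch. The `Request` type and helper signatures are assumptions standing in for the Ktor call:

```kotlin
// Uniform failure type, mapped to an HTTP 400 by the error-handling layer.
class BadRequest(message: String) : Exception(message)

data class Request(val headers: Map<String, String>, val pathParams: Map<String, String>)

// Extractors return Result instead of throwing, so route handlers can chain
// them and surface all failures through one error path.
fun Request.requireHeader(name: String): Result<String> =
    headers[name]?.let { Result.success(it) }
        ?: Result.failure(BadRequest("missing required header: $name"))

fun Request.requirePathParam(name: String): Result<String> =
    pathParams[name]?.let { Result.success(it) }
        ?: Result.failure(BadRequest("missing required path parameter: $name"))

fun main() {
    val req = Request(headers = mapOf("X-Tenant-Id" to "t-1"), pathParams = emptyMap())
    println(req.requireHeader("X-Tenant-Id").getOrNull()) // t-1
    println(req.requirePathParam("id").isFailure)         // true
}
```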

CSV upload APIs (direct-to-S3 + ingestion)

  • Direct-to-S3 upload with presigned PUT URLs — Return a presigned PUT URL and require S3 object metadata headers (tenant/author) so uploads don’t transit the API server.
  • Streaming ingestion from S3 — Read CSV from S3 into a Flow, batch rows, and process in chunks while accumulating row-level outcomes.
  • Protobuf CSV contract + validation — Define CSV row structures as protobuf messages (ItemRow), attach validation rules with buf.validate (field constraints + CEL), validate with protovalidate, then map to domain objects.
  • Header aliasing to field paths (MappingConfig) — Translate multiple CSV header spellings into canonical protobuf field paths via JSON config.
  • Ingestion job tracking API — Expose endpoints to get upload URL, trigger ingestion, and poll job status; represent ingestion as a durable Job entity with event history.
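Header aliasing reduces to a normalization step plus a lookup table from header spellings to canonical field paths. A minimal sketch, where the aliases and field paths are made up for illustration (the real mapping comes from the JSON `MappingConfig`):

```kotlin
// Translates the header spellings seen in uploaded CSVs into canonical
// protobuf field paths, tolerating case, spacing, and separator variations.
class HeaderMapper(aliases: Map<String, String>) {
    private val normalized = aliases.mapKeys { (k, _) -> normalize(k) }

    private fun normalize(header: String) =
        header.trim().lowercase().replace(Regex("[\\s_-]+"), " ")

    fun fieldPathFor(header: String): String? = normalized[normalize(header)]
}

fun main() {
    val mapper = HeaderMapper(mapOf(
        "Item Number" to "item.item_number",
        "item_no" to "item.item_number",
        "Description" to "item.description",
    ))
    println(mapper.fieldPathFor("ITEM  NUMBER")) // item.item_number
    println(mapper.fieldPathFor("unknown"))      // null
}
```

Unmapped headers returning `null` gives the ingestion pipeline a clean hook for per-row or per-file outcome reporting rather than a hard failure.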
