# Runtime Environments Structure

Runtime Environments are environments intended to run Arda's Products, whether for development, testing, staging, or production.
They are organized in a structure that allows for the separation of resources and processes while still sharing common infrastructure and services. This document describes the structure of Runtime Environments, including the concepts of Infrastructure, Partition, and Component.
## Runtime Platform Model

The structure of the Runtime Platform can be described in terms of a class diagram whose elements are defined in the Arda Platform Structure.
## Current Structure as of August-September 2025

> **Note**
> This is the intended configuration, which will map to the future dev, stage, and prod purposes. It is not yet fully implemented (08/12/25).
> Sandbox0001 will be decommissioned in favor of Alpha001-prod once Coda Packs and customers are migrated to it.
### Current Runtime Environments

- Sandbox0001: Currently serving as the Production Runtime to support Arda's Coda product through Pack integrations. It is built as a single `Environment` that includes all the resources needed to run components. It is built using CloudFormation templates and Helm Charts choreographed through a shell script and deploys as a monolith. Sandbox0001 is based on a branch in the `infrastructure` repository.
- Sandbox0003: A bare-bones `Environment` used exclusively for UI development with AWS `Amplify` operations.
- Alpha001: Built with an `Infrastructure` layer and a single `Partition` layer, using a combination of CDK scripts and Helm charts with minimal coordination through a shell script. The shell script deploys the complete Infrastructure and Partition as a single operation for convenience, although the internal structure is already layered. It is expected to be decommissioned once `Alpha002` completes integration testing with the UI.
  - dev: A `Partition` for the development of deployment mechanisms and component pipelines against a pre-existing Environment. (Milestone Al1a)
- Alpha002: A layered `Infrastructure` with one current `Partition`. It introduces Cognito authentication for its APIs and other modularity improvements over `Alpha001`. A second `Partition` to support the upcoming Demo202509 is expected to be added to this infrastructure.
  - dev: The development environment for integration of frontend and backend components, mediated by an OAuth2 authentication mechanism at the API Gateway level. After this integration, it will serve as the standard development environment for Product Development.
## Environment Configurations as of 2025-07

### Alpha002 Configuration

The Alpha002 Infrastructure and Alpha002.dev are the current standard for environments. They will evolve over time, and this documentation should reflect that.
#### Infrastructure Resources

- Networking Resources:
  - VPC
  - Subnets
  - Route Tables
  - Internet Gateway
  - NAT Gateway
  - Security Groups
  - VPC Endpoint for Secrets Manager access
  - Optionally, an NLB (deprecated)
- Computing Resources:
  - GitHub OIDC Provider (should move to OAM resources)
  - EKS Cluster
  - Cluster Role
  - General Pod Role
  - Secrets Access Role
  - Access Entries for GitHub OIDC and admin users
  - Logs and Log Access Policy
  - Security Group inbound rules
  - CoreDNS
  - Fargate Profiles for `addOns`
  - Load Balancer Controller
  - External Secrets Controller
  - Cluster Certificate Authority Data in SSM
- Ingress Resources:
  - Route53 Hosted Zones (see URL Structure)
  - Route53 Records in the `Root` account for routing the `app`, `io`, `api`, and `auth` domains to this Infrastructure
  - TLS certificates for each domain
#### Partition Resources

- Storage Resources:
  - Aurora Postgres Database Cluster
    - Writer and Reader Instances
    - Logs
    - Monitoring Dashboard
    - Master user and password in Secrets Manager
- OAM
  - Cognito Authentication Service (see Cognito Structure)
    - User Pool
    - Post-Signup Lambda Trigger to auto-confirm users
    - Password Reset Lambda Trigger to send password recovery to users
    - Resource Server
    - M2M Client Application
    - Web Client Application
    - User Pool Client Application
    - User Custom Attributes
    - Custom Scopes
- Ingress
  - NLB with Listeners and Target Groups. This is optional for now and should be coordinated with the NLB in the Infrastructure to ensure that at least one of them is available.
  - API Gateway with:
    - Authorizer linked to the Cognito Service
    - VPC Link and Integration to send traffic to the NLB
    - Default Stage
    - Custom Domain for the API: `<partition>.<infrastructure>.api.arda.cards` or `<partition>.<infrastructure>.io.arda.cards`
    - Logging
    - Monitoring Dashboard for the API
  - CloudFront Distribution for the API (support for HTTP-to-HTTPS redirection, caching, etc.)
  - EKS Ingress Controller
  - EKS ALB binding to NLB Target Groups
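As an illustration of the custom-domain pattern above, the sketch below builds the API domain from partition and infrastructure names. This is an assumption-laden example: the real domains are created by the CDK code, and the lowercase names used here are hypothetical.

```kotlin
// Sketch: builds the API custom domain following the documented pattern
// <partition>.<infrastructure>.api.arda.cards (or the io variant).
fun apiDomain(partition: String, infrastructure: String, zone: String = "api"): String {
    require(zone == "api" || zone == "io") { "zone must be 'api' or 'io'" }
    return "$partition.$infrastructure.$zone.arda.cards"
}

fun main() {
    // Hypothetical partition/infrastructure names, for illustration only.
    println(apiDomain("dev", "alpha002"))       // dev.alpha002.api.arda.cards
    println(apiDomain("dev", "alpha002", "io")) // dev.alpha002.io.arda.cards
}
```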
### Alpha001 Configuration

This configuration will be eliminated as soon as Alpha002 is tested and integrated.
It is similar to Alpha002, with the following differences:

- The NLB is created by the Infrastructure layer and not by the Partition layer. Note that this limits the ability to route to multiple partitions.
- The Cognito Authentication Service is not implemented, and the API Gateway does not have an Authorizer. Components deployed to this infrastructure need to provide their own authentication mechanism (e.g. a Bearer Token).
### Sandbox0003 Configuration

This is a minimal environment for Amplify development. It is not yet standardized.
### Sandbox0001 Configuration

This is the current configuration that supports the Coda product.
It will be deprecated once the new Al1b configuration is fully operational and Coda customers can be migrated to new packs that connect with an environment based on the new Al1b configuration.
It does not have a layered structure and is built in a monolithic way. It contains the same elements as Al1a.
## Accessing a Runtime Environment

Runtime Environments are implemented using CDK and CloudFormation templates. They use CloudFormation Outputs to provide the information needed to access their resources and to coordinate with other deployment scripts and tools.
The export names of the CloudFormation Outputs follow these naming conventions.

For values that can be used outside the defining repository:

| identifier | regular expression |
| --- | --- |
| Logical ID | `/.*_?API$/` |
| Export Name | `/.*-API-.*/` |

For values that are used internally in the defining repository but outside the CDK or CloudFormation code:

| identifier | regular expression |
| --- | --- |
| Logical ID | `/.*_?I$/` |
| Export Name | `/.*-I-.*/` |

These same rules apply to infrastructure outputs that are created by individual components.
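As a sketch, the export-name conventions can be checked mechanically. The export names used below are hypothetical examples, not real outputs of any stack:

```kotlin
// Regular expressions from the export-name conventions above.
val apiExport = Regex(".*-API-.*")    // usable outside the defining repository
val internalExport = Regex(".*-I-.*") // internal to the defining repository

fun main() {
    // Hypothetical export names, for illustration only.
    println(apiExport.matches("Alpha002-dev-API-DatabaseEndpoint")) // true
    println(internalExport.matches("Alpha002-dev-I-ClusterName"))   // true
    println(apiExport.matches("Alpha002-dev-I-ClusterName"))        // false
}
```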
### Accessing Outputs in Application Code

To access the value of a specific output in the application code (Kotlin), a module relies on the `application.conf` mechanism. It should ONLY access `*-API-*` outputs from other repositories, while it can access both `*-API-*` and `*-I-*` outputs from its own repository.

The steps to wire a new output into a component are:
1. Make the value available to Helm at deployment time: in `${PROJECT_ROOT}/src/helm/read-cloudformation-values.cmd` add a line in which `<path>.<to>.<key>` is the path in the `application.conf` file where the key was defined. Consult helm-deploy-pipeline-action for the syntax and capabilities of `read-cloudformation-values.cmd`.
2. Define the configuration key in `application.conf` or the appropriate module configuration file (e.g. `reference/item/application.conf`). If a placeholder is required for unit tests, and it can't be set in the test setup itself, add the placeholder value to an `application-test.conf` file.
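A minimal sketch of such a key definition, assuming HOCON configuration files and using the hypothetical key path `path.to.key.extras.some.myKey` (the environment-variable name is also an assumption, not the component's real value):

```hocon
# Hypothetical module configuration sketch.
path.to.key.extras.some {
  # Overridden at deployment time via the configmap; the substitution
  # variable name MY_KEY is illustrative only.
  myKey = ${?MY_KEY}
}
```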
3. In `${PROJECT_ROOT}/src/helm/templates/configmap.yaml` include the key in the configmap. Note that the `{{` and `}}` may appear escaped in rendered copies of this page to avoid macro processing, but they must not be escaped in the actual file.
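A minimal sketch of such a configmap entry, assuming a standard Helm chart layout; the resource name, key path, and variable name are hypothetical:

```yaml
# Hypothetical excerpt of src/helm/templates/configmap.yaml;
# names are illustrative, not the component's real values.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  # Value supplied through Helm values at deployment time.
  MY_KEY: {{ .Values.path.to.key.extras.some.myKey | quote }}
```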
4. In `build.gradle.kts` ensure that the `lintValues` variable includes the new key:

   ```kotlin
   val lintValues = mapOf(
       // …
       "path.to.key.extras.some.myKey" to "lint-value-placeholder",
       // …
   )
   ```
5. Run `./gradlew clean build` to ensure that it lints correctly.
6. Run `./gradlew clean helmInstallToLocal` to test-deploy the new configuration to the local Kubernetes cluster.