
Design Session 02: Upload Workflow (Write Path)

Deep-dive into DQ-005: Upload Workflow from Design Session 01, which was left as “more analysis required.” This session explores the three workflow options with detailed sequence diagrams showing the interaction between all system layers.

The following decisions from Design Session 01 constrain this analysis:

  • DQ-001 — Key format: ${tenantId}/${owning-module}/${entity-type}/${property-name}/${asset-uuid}.${extension}
  • DQ-002 — Two buckets: http-assets (durable), ephemeral-uploads (short TTL)
  • DQ-003 — Option C: shared library in common-module. The Operations service generates presigned URLs directly via the S3 abstraction, not through a separate Lambda.
  • DQ-005 Discussion — Presigned POST (not PUT) is required for server-side enforcement of Content-Type and Content-Length constraints.
  • DQ-007 — New S3BucketAccess abstraction in common-module handles presigned URL generation and key construction.
  • DQ-008 — Immutable objects (new UUID per upload). Orphan cleanup deferred.
| Option | Description |
| --- | --- |
| A. Decoupled upload + entity update | Presigned POST fields returned independently. Client uploads to S3, then updates the entity with the asset key in a separate call. |
| B. Upload-and-confirm | Server tracks pending uploads. Client uploads to S3, then confirms. Server validates the S3 object and links it to the entity. |
| C. Upload via existing update | Client constructs the key, uploads to a well-known path, then references it in a normal entity create/update payload. |

Existing Work: CSV Upload Workflow

The operations service already implements a file upload workflow for CSV batch imports. This section documents that design as a reference point for the image upload discussion. Implementation details (job tracking, CSV parsing, row validation) are abstracted away to focus on the upload-relevant interactions.

Key differences from the image upload use case:

  • Uses presigned PUT (no server-side Content-Type/Content-Length enforcement at S3 level).
  • The server reads the file back from S3 for processing (Step 3), rather than just storing a reference.
  • Entity updates happen server-side in bulk during processing, not via a separate client call.

PlantUML diagram

Presigned URL Configuration (Current CSV Upload)


The CsvS3BucketDirectAccess implementation enforces the following constraints on the presigned PUT URL it generates:

S3 PutObjectRequest parameters (embedded in the signed URL):

| Parameter | Value | Source |
| --- | --- | --- |
| bucket | Configured bucket name | CsvS3BucketDirectAccess constructor |
| key | ${keyNamespace}/${tenantId}/${uuid}.csv.gz | fileKey() — keyNamespace is the module name (e.g., item), tenantId scopes the path, uuid is random per upload |
| contentType | "text/csv" | Hardcoded in presignedPutUrl() |
| contentEncoding | "gzip" | Derived from compression parameter (default: GZIP). Also supports "bzip2" (BZIP2) or null (NONE) |
| metadata | { "tenant-id": "<tenantId>", "author": "<author>" } | Passed as headerRequirements from CsvUploadService.requiredS3ObjectAttributes() |

Presign request parameters:

| Parameter | Value | Source |
| --- | --- | --- |
| signatureDuration | 15 minutes | Constructor default: Duration.ofMinutes(15) |

S3 client configuration:

| Setting | Value | Rationale |
| --- | --- | --- |
| chunkedEncodingEnabled | false | Browser compatibility — ensures HTTP clients that don’t support chunked Transfer-Encoding can upload |

What the client must include in the PUT request:

The presigned PUT URL embeds the Content-Type, Content-Encoding, and x-amz-meta-* headers in the signature. The client must send matching headers:

  • Content-Type: text/csv
  • Content-Encoding: gzip (if compression is GZIP)
  • x-amz-meta-tenant-id: <tenantId>
  • x-amz-meta-author: <author>

If any of these headers are missing or mismatched, S3 rejects the request with a SignatureDoesNotMatch error.

What the server validates on read-back (getMetadata()):

When processing the uploaded file (Step 3), the server calls HeadObject and validates:

  • All keys in requiredAttributes are present in the S3 object metadata.
  • All entries in matchingAttributes are present and match the expected values (e.g., tenant-id must match the requesting tenant).

Limitations (relevant to DQ-005):

  • No Content-Length enforcement — presigned PUT cannot constrain file size. A client can upload a 10 GB file and it will be accepted.
  • No Content-Type enforcement at S3 level — while contentType is set in the signed request, S3 presigned PUT does not reject mismatched Content-Type from the client; it only sets the stored metadata. The actual file content is not validated against the declared type.
  • These limitations are acceptable for CSV upload because the server reads back and parses the file (bad files fail at parse time). For image upload, where there is no server-side processing, presigned POST is needed to enforce these constraints at the S3 edge.

Relevant observations for DQ-005:

| Aspect | CSV Upload (existing) | Image Upload (DQ-005) |
| --- | --- | --- |
| Presigned method | PUT (no policy enforcement) | POST (Content-Type, Content-Length enforcement) |
| Endpoint style | Entity-specific (/v1/item/upload-job/...) | Entity-specific (Options A, B) |
| Server reads S3 object | Yes (batchFlow for processing) | Option B only (HeadObject for validation) |
| Entity update mechanism | Server-side bulk (inside processing job) | Client-initiated (separate PUT call) or server-side (Option B confirm) |
| S3 abstraction | CsvS3BucketDirectAccess | New S3BucketAccess (DQ-007) |

| Participant | Description |
| --- | --- |
| User | Human interacting with the browser |
| SPA | Single-page application (arda-frontend-app client-side React) |
| BFF | Backend-for-Frontend (Next.js proxy.ts in arda-frontend-app, running on Amplify) |
| Item Module | Entity logic within the Operations microservice on EKS. Handles Item CRUD and entity-specific upload endpoints. |
| S3 Access | S3BucketAccess abstraction from common-module (DQ-007). Generates presigned URLs, constructs keys, verifies objects. |
| S3 Bucket | AWS S3 http-assets bucket (target for durable asset storage) |

Both Item Module and S3 Access run inside the operations microservice (shown as a box in the diagrams), reflecting DQ-003’s decision to use a shared library in common-module rather than a separate Lambda.


Option A: Decoupled Upload + Entity Update


The simplest workflow. Two independent server interactions: one to obtain upload credentials, one to persist the entity reference. The SPA orchestrates both steps.

PlantUML diagram

Characteristics:

  • Three round-trips from SPA perspective (upload-url, S3 POST, entity update).
  • Steps 2 and 3 are independent from the server’s perspective — the entity update does not verify that the S3 object exists.
  • If Step 3 never happens (user closes browser), the uploaded object becomes an orphan. Mitigated by DQ-008 (immutable objects, deferred cleanup).
  • The entity stores objectKey (not a full URL), enabling URL format changes across DQ-006 phases without data migration.

JMP: The operations microservice could verify the existence of the uploaded file before persisting changes, and return an error if it is not found.

CLD-AI: Agreed. This would be a lightweight enhancement to Option A: in Step 3 (entity update), the Item Module calls S3Access.headObject(objectKey) before persisting. If the object does not exist, it returns a 409 Conflict or 422 Unprocessable Entity. This adds orphan-reference protection without the pending-upload state machinery of Option B. The trade-off is one additional S3 API call per entity update that includes an imageKey, but no new endpoints or database state. This makes Option A a closer competitor to Option B — essentially “Option B without the pending record.”


Option B: Upload-and-Confirm

The server tracks the upload lifecycle. A pending upload record is created before the client uploads, and a confirmation step validates and links the object to the entity.

PlantUML diagram

Pending upload state is managed internally by the Item Module via its own database (the same database used for Item entities). This is consistent with the existing CSV upload pattern, where JobService tracks upload job lifecycle within the Item Module. The self-calls in the diagram represent internal persistence operations:

  • createPendingUpload — inserts a PENDING record during initiation.
  • lookupPendingUpload — retrieves and validates the record during confirmation.
  • confirmUpload — updates the Item entity and marks the record CONFIRMED in a single transaction.
  • expirePendingUploads — background cleanup of expired PENDING records.

S3 Access has no knowledge of pending uploads — it remains a stateless abstraction for S3 operations only (presigned URLs, HeadObject, key construction). This keeps the common-module library reusable across modules without entity-specific concerns.
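The pending-upload lifecycle described above can be sketched as a small state machine. This is an illustrative model only — the record fields, state names, and function signatures are assumptions, not taken from the implementation:

```kotlin
import java.time.Instant

// Hypothetical lifecycle states for an Option B pending upload record.
enum class UploadState { PENDING, CONFIRMED, EXPIRED }

data class PendingUpload(
    val objectKey: String,
    val itemId: String,
    val createdAt: Instant,
    val expiresAt: Instant,              // e.g., createdAt + presigned URL TTL
    var state: UploadState = UploadState.PENDING,
)

// confirmUpload: only a PENDING, unexpired record may be linked to the entity.
fun confirm(record: PendingUpload, now: Instant): Boolean =
    if (record.state == UploadState.PENDING && now.isBefore(record.expiresAt)) {
        record.state = UploadState.CONFIRMED
        true
    } else false

// expirePendingUploads: background sweep marks overdue PENDING records EXPIRED.
fun expire(records: List<PendingUpload>, now: Instant) =
    records.filter { it.state == UploadState.PENDING && !now.isBefore(it.expiresAt) }
        .forEach { it.state = UploadState.EXPIRED }
```

In the real design this state lives in the Item Module's database; the sketch only shows the transitions that the confirm endpoint and the background cleanup would enforce.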

Characteristics:

  • Three round-trips from SPA perspective (initiate, S3 POST, confirm). The entity update happens server-side inside the confirm step.
  • Server validates the upload via HeadObject before linking to the entity — no orphaned references.
  • Requires server-side state: a pending_uploads table or equivalent.
  • Two new endpoints (POST .../image, POST .../image/confirm) per entity type that supports uploads.
  • Expired pending uploads are cleaned up by a background job or lifecycle rule.

JMP: The Characteristics section of Option B says Four round-trips but I only count 3 in the diagram. Clarify.

CLD-AI: You are correct — the diagram shows 3 round-trips from the SPA’s perspective: (1) initiate, (2) S3 POST, (3) confirm. The entity update happens server-side inside the confirm step, not as a separate SPA call. The Characteristics text and Comparison table have been corrected to say 3.


Option C: Upload via Existing Update — DISCARDED


Decision: Option C is discarded. The system’s Domain-Driven architecture requires API endpoints tied closely to the domain’s information model. A generic “upload-anything” endpoint (/storage/upload-url) violates this principle. See Discussion & Decisions.

The client constructs the object key following a known convention, uploads directly, and includes the key in a standard entity create/update payload. No dedicated upload endpoints.

PlantUML diagram

Characteristics:

  • Three round-trips, same as Option A. The difference is that the presigned URL endpoint is generic (/storage/upload-url) rather than entity-specific (/items/upload-url).
  • The entity update endpoint has no knowledge of the upload — imageKey is treated as any other field.
  • The SPA must know the key convention parameters (module, entityType, propertyName) to request the presigned URL. This is coupling, but it is explicit and typed (not a string format the SPA constructs).
  • No server-side upload state. Orphan behavior identical to Option A.
  • The generic /storage/upload-url endpoint is reusable across entity types and modules without new endpoints per use case.

| Aspect | Option A | Option B | Option C |
| --- | --- | --- | --- |
| Round-trips (SPA) | 3 | 3 | 3 |
| New endpoints per entity type | 1 (upload-url) | 2 (image, confirm) | 0 (generic + existing PUT) |
| Server-side upload state | None | Pending upload record | None |
| S3 object validation | None | HeadObject on confirm | None |
| Orphan risk | Yes (deferred cleanup) | Minimal (expiry cleanup) | Yes (deferred cleanup) |
| Entity endpoint awareness | Upload-aware | Upload-aware | Upload-unaware |
| Reusability across modules | Per-entity endpoint | Per-entity endpoints | Single generic endpoint |

Discussion & Decisions (To keep at the end of the document)


JMP: Decision 1: Option C is discarded. A key design aspect of the system is to emphasize a Domain Driven architecture that includes having API endpoints tied closely with the information model of the domain. A generic “upload-anything” endpoint violates this principle.

  • Action: Do not delete the option from the document, note the decision and the reasoning.

CLD-AI: Acknowledged. Option C has been marked as DISCARDED in its heading with a blockquote explaining the rationale. The diagram and characteristics are preserved for reference. The remaining discussion focuses on Options A and B.

JMP: Additional Clarification: Inspect closely the Current implementation of the CSV upload feature and create an Existing Work section with a sequence diagram that shows that design as reference for this discussion. Abstract the diagram from implementation details irrelevant to this discussion.

CLD-AI: Done. The Existing Work section has been added before the Detailed Scenarios section. The CSV upload workflow uses a 4-step pattern: (1) request presigned PUT URL via entity-specific endpoint, (2) direct upload to S3, (3) trigger server-side processing, (4) poll job status. Key observations relevant to DQ-005:

  • The upload URL endpoint is entity-specific (/v1/item/upload-job/upload-url) — consistent with the DDD principle that eliminated Option C.
  • It uses presigned PUT (not POST), which means no server-side policy enforcement of Content-Type or Content-Length at the S3 level. The new image upload should use presigned POST per the DQ-005 discussion.
  • The “process” step (Step 3) is where S3Access reads back from S3 and the Item Module updates entities — analogous to the entity-update step in Options A/B.
  • CsvS3BucketDirectAccess in common-module is the precursor to the new S3BucketAccess abstraction (DQ-007).

JMP: Request to clarify: Provide a summary of the differences between using POST and PUT presigned URLs in the context of this design and the tradeoffs involved.

CLD-AI: Both presigned PUT and presigned POST allow a client to upload an object to S3 without having AWS credentials. They differ in what the server can enforce at the S3 level before the object is accepted.

Presigned PUT (what the CSV upload uses today):

  • The server signs a single URL for a specific S3 key.
  • The client sends a plain PUT request with the file as the body.
  • The server can constrain: the exact key, expiration time, and required S3 metadata headers (e.g., x-amz-meta-tenant-id).
  • The server cannot constrain: Content-Type, Content-Length, or file size. Any file of any type and any size will be accepted by S3 as long as the signature is valid.
  • Simpler client-side code (single PUT request, no form encoding).

Presigned POST (what DQ-005 requires for image upload):

  • The server generates a POST policy — a JSON document listing conditions that S3 will enforce when the upload is received.
  • The client sends a multipart/form-data POST with the policy fields + file.
  • The server can constrain:
    • content-length-range: minimum and maximum file size (e.g., [1, 10MB]).
    • Content-Type: exact match or prefix match (e.g., starts-with: "image/").
    • key: exact match on the generated object key.
    • Custom metadata conditions.
  • S3 rejects the upload at the edge if any condition is violated — the object is never written. This is server-side enforcement without the server needing to be in the data path.
  • More complex client-side code (must construct a multipart form with all policy fields before appending the file).

Why this matters for image upload:

| Concern | Presigned PUT | Presigned POST |
| --- | --- | --- |
| File size limit | Not enforceable at S3 | Enforced by content-length-range |
| Content-Type validation | Not enforceable at S3 | Enforced by starts-with or exact match |
| Malicious file upload | Any file accepted, must validate after | Rejected at S3 edge before storage |
| Storage cost of bad uploads | Incurred (must clean up) | Avoided (never written) |
| Client complexity | Simple PUT | Multipart form with policy fields |
| Server complexity | Sign a URL | Generate policy JSON + form fields |

For the CSV upload, presigned PUT was acceptable because the server reads back and validates the file during processing (Step 3) — bad files are caught then. For image uploads, there is no server-side processing step in Option A; the entity update just stores a key reference. Presigned POST closes this gap by having S3 itself enforce content constraints at upload time.

Trade-off summary: Presigned POST adds client-side and server-side complexity (multipart form, policy generation) in exchange for server-side enforcement without a server-side data path — the server never touches the file bytes, yet controls what gets stored. This is the right choice for user-facing asset uploads where Content-Type and size constraints are security and cost concerns.
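To make the "policy document" concrete, the following sketch builds an S3 POST policy JSON with the conditions discussed above and Base64-encodes it for the `policy` form field. The values and function names are illustrative; real code must also sign the encoded policy with SigV4, which is omitted here:

```kotlin
import java.util.Base64

// Sketch of an S3 POST policy document. Conditions mirror the DQ-005
// discussion: exact key, image/* Content-Type, size range, tenant metadata.
fun buildPostPolicy(
    bucket: String, key: String, tenantId: String,
    minBytes: Long, maxBytes: Long, expiration: String,
): String = """
{
  "expiration": "$expiration",
  "conditions": [
    {"bucket": "$bucket"},
    {"key": "$key"},
    ["starts-with", "${'$'}Content-Type", "image/"],
    ["content-length-range", $minBytes, $maxBytes],
    {"x-amz-meta-tenant-id": "$tenantId"}
  ]
}
""".trimIndent()

// S3 expects the policy Base64-encoded in the "policy" form field.
fun encodePolicy(policyJson: String): String =
    Base64.getEncoder().encodeToString(policyJson.toByteArray())
```

S3 evaluates every condition against the submitted form fields before writing the object, which is what gives the server enforcement without being in the data path.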

JMP: Question: For image uploading (first use case), what are the recommended verifications to perform as part of the POST presigned URL generation?

CLD-AI: The presigned POST policy should enforce two categories of conditions: S3-level constraints (enforced at the edge before the object is written) and server-side pre-checks (validated by the Item Module before generating the policy).

S3 POST policy conditions (enforced by S3 on upload):

| Condition | Type | Value | Rationale |
| --- | --- | --- | --- |
| key | eq | Exact generated key (${tenantId}/operations/item/image/${uuid}.${ext}) | Prevents the client from writing to arbitrary keys |
| Content-Type | starts-with | "image/" | Accepts image/png, image/jpeg, image/webp, etc. Rejects non-image MIME types |
| content-length-range | range | [1, 10485760] (1 byte to 10 MB) | Prevents zero-byte uploads and enforces a maximum file size. The 10 MB limit is a reasonable default for product images; should be configurable |
| x-amz-meta-tenant-id | eq | "<tenantId>" | Binds the uploaded object to the requesting tenant for audit and read-back validation |
| x-amz-meta-author | eq | "<author>" | Records who initiated the upload, consistent with CSV upload pattern |
| x-amz-meta-entity-type | eq | "item" | Tags the object with its owning entity type for lifecycle management and orphan cleanup |
| x-amz-meta-property-name | eq | "image" | Tags the object with the property it belongs to, enabling future multi-property upload support |
| bucket | (implicit) | Target bucket from configuration | The POST URL itself targets the bucket; the client cannot redirect to another bucket |

Server-side pre-checks (validated by Item Module before generating the policy):

| Check | Rationale |
| --- | --- |
| tenantId is valid and the caller has write access | Authorization gate — prevents uploading to another tenant’s namespace |
| itemId exists (Option B) or is a valid future reference (Option A) | Option B: the pending upload record needs a valid entity target. Option A: optional, but prevents generating URLs for nonexistent entities |
| Requested Content-Type is in an allow-list (image/png, image/jpeg, image/webp, image/gif) | Defense-in-depth: the starts-with "image/" policy condition catches broad misuse, but the server should also reject exotic image types (e.g., image/svg+xml, which can contain scripts) before generating the policy |
| Requested file size is within the configured limit | Fail fast: reject obviously oversized requests before generating a policy, rather than letting the client attempt a 10 MB upload that will be rejected by S3 |
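The server-side pre-checks reduce to a small validation function. A minimal sketch, assuming the recommended allow-list and size range as defaults (the function name and error strings are illustrative; real code would take these limits from module configuration):

```kotlin
// Hypothetical pre-check mirroring the table above. Defaults use the
// recommended allow-list and size range; in real code they are injected.
val allowedTypes = setOf("image/jpeg", "image/png", "image/webp")
val MIN_BYTES = 50L * 1024            // 50 KB
val MAX_BYTES = 10L * 1024 * 1024     // 10 MB

// Returns null when the request passes; otherwise a descriptive error
// suitable for a 400 response.
fun precheck(contentType: String, size: Long): String? = when {
    contentType !in allowedTypes -> "unsupported Content-Type: $contentType"
    size < MIN_BYTES -> "file too small: $size bytes (min $MIN_BYTES)"
    size > MAX_BYTES -> "file too large: $size bytes (max $MAX_BYTES)"
    else -> null  // passes; server may now generate the POST policy
}
```

Running the pre-check before signing means a misbehaving client gets a fast, descriptive 400 instead of an opaque S3 policy rejection.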

What is NOT enforced (and why):

  • File content magic bytes — S3 POST policies operate on headers, not file content. A client could send a non-image file with Content-Type: image/png and S3 would accept it. If content verification is needed (e.g., for security or display correctness), it must happen server-side after upload — either in the confirm step (Option B) or as an asynchronous validation job. This is a candidate for a follow-up DQ if the team wants content-level validation.
  • Image dimensions — not enforceable at upload time. If dimension constraints are needed (e.g., minimum 200x200 for product images), this requires server-side image processing after upload.

JMP: Question: Image Format and Min/Max File Size Discussion: Recommend what should be the accepted formats and min/max file size for images that are suitable for printing at 300 DPI on a 3.5x5 format. Provide the rationale for it.

CLD-AI:

Target print specification:

  • Print size: 3.5” x 5” at 300 DPI
  • Minimum pixel dimensions: 1050 x 1500 pixels (1.575 megapixels)
  • For 2x oversampling (sharpness margin / retina proofing): 2100 x 3000 pixels (6.3 megapixels)
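The pixel targets follow directly from print size times DPI. A quick arithmetic check (the helper function is illustrative):

```kotlin
// Print-size arithmetic: pixels = inches * DPI.
fun pixelsFor(inches: Double, dpi: Int): Int = (inches * dpi).toInt()

val minWidth = pixelsFor(3.5, 300)    // 1050 px
val minHeight = pixelsFor(5.0, 300)   // 1500 px
val megapixels = minWidth.toLong() * minHeight / 1_000_000.0  // 1.575 MP
// 2x oversampling doubles each dimension: 2100 x 3000 = 6.3 MP
```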

Recommended accepted formats:

| Format | MIME Type | Recommendation | Rationale |
| --- | --- | --- | --- |
| JPEG | image/jpeg | Primary | Industry standard for photographic images. Lossy compression keeps file sizes manageable. All print workflows, browsers, and image processing libraries support it. Best compression-to-quality ratio for photographs |
| PNG | image/png | Accepted | Lossless compression, supports transparency. Useful for product images with sharp edges, text overlays, or transparent backgrounds (e.g., product cutouts). Larger file sizes than JPEG for photographic content |
| WebP | image/webp | Accepted | Modern format with superior compression (25-35% smaller than JPEG at equivalent quality). All modern browsers support it. Caution: some legacy print workflows and desktop tools may not support WebP natively — the system may need a server-side conversion step for print output |
| GIF | image/gif | Rejected | Limited to 256 colors. Completely unsuitable for print-quality photographic images |
| SVG | image/svg+xml | Rejected | Vector format — not applicable to photographic product images. Also a security concern (can embed JavaScript) |
| TIFF | image/tiff | Rejected | Excellent for print workflows but impractical for web upload: file sizes are 10-50 MB uncompressed, browsers cannot display them natively, and the SPA would need special handling. If TIFF support is needed, it should be a separate bulk/professional upload workflow, not the standard product image path |

POST policy Content-Type condition: starts-with "image/" combined with the server-side allow-list: ["image/jpeg", "image/png", "image/webp"].

Recommended file size limits:

| Limit | Value | Rationale |
| --- | --- | --- |
| Minimum | 50 KB | A 1050x1500 JPEG at any reasonable quality level will be at least 100-200 KB. A file under 50 KB at these dimensions would indicate extreme compression artifacts (quality < 10) that would produce visible banding and blocking in print. Setting the floor at 50 KB catches corrupt, placeholder, or thumbnail-sized files while allowing for small but legitimate product graphics |
| Maximum | 10 MB | A 2100x3000 PNG (lossless, photographic content) is typically 8-12 MB. A 10 MB ceiling accommodates high-quality PNG uploads at 2x oversampling. JPEG at this resolution rarely exceeds 3-4 MB even at quality 95. The limit also acts as a cost control — at scale, unbounded uploads create storage and bandwidth costs |

File size expectations by format at target resolution:

| Format | 1050x1500 (1x) | 2100x3000 (2x) | Notes |
| --- | --- | --- | --- |
| JPEG (quality 85) | 200 KB — 800 KB | 600 KB — 2.5 MB | Varies with image complexity (solid backgrounds compress well, detailed textures less so) |
| JPEG (quality 95) | 500 KB — 1.5 MB | 1.5 MB — 4 MB | Near-lossless, suitable for archival |
| PNG | 2 — 5 MB | 6 — 12 MB | Lossless; large for photographic content |
| WebP (quality 85) | 150 KB — 600 KB | 400 KB — 1.8 MB | ~30% smaller than equivalent JPEG |

Recommendation summary:

  • Accept: image/jpeg, image/png, image/webp
  • content-length-range: [51200, 10485760] (50 KB to 10 MB)
  • The system should not enforce minimum pixel dimensions at upload time (S3 policies cannot inspect image headers). Dimension validation, if required, should happen server-side after upload — either in the confirm step (Option B) or as an asynchronous quality check. This is a candidate for a follow-up DQ.

DQ-005: Upload Workflow

  • Option A, with the enhancement of verifying that the target image exists in S3 via a HEAD check before persisting (similar to Option B's validation).
  • Presigned POST for the upload, with the checks described in the discussion above, including the refined Content-Type allow-list and file size limits.

This section consolidates the Discussion & Decisions and Decision above into a single authoritative specification for the image upload workflow up to Design Session 2.

Selected approach: Option A (Decoupled Upload + Entity Update) with a mandatory HEAD check enhancement. The SPA orchestrates three steps: (1) request presigned POST credentials, (2) upload directly to S3, (3) update the entity with the asset key. The entity update endpoint validates that the S3 object exists before persisting.

Upload protocol: Presigned POST, with a policy document enforcing Content-Type, Content-Length, key, and metadata constraints at the S3 edge. The rationale is covered in the PUT vs POST comparison under Discussion & Decisions.

Key prior decisions (from Design Session 01):

  • DQ-001 — Key format: ${tenantId}/${owning-module}/${entity-type}/${property-name}/${asset-uuid}.${extension}
  • DQ-002 — Two buckets: http-assets (durable), ephemeral-uploads (short TTL)
  • DQ-003 — Shared library in common-module (not Lambda)
  • DQ-007 — New S3BucketAccess abstraction in common-module
  • DQ-008 — Immutable objects, orphan cleanup deferred

PlantUML diagram

The S3BucketAccess library (DQ-007) generates presigned POST credentials by building an S3 POST policy document and signing it. The table below defines the complete policy. Key format follows DQ-001.

POST policy conditions:

| # | Condition Field | Operator | Value | Configurable | Notes |
| --- | --- | --- | --- | --- | --- |
| 1 | bucket | eq | Configured bucket name (http-assets) | No (bootstrap) | Implicit in POST URL |
| 2 | key | eq | ${tenantId}/${moduleName}/${entityType}/${propertyName}/${uuid}.${ext} | No (computed) | UUID generated per request; extension derived from contentType |
| 3 | Content-Type | starts-with | "image/" | No | Broad gate; server-side allow-list provides defense-in-depth |
| 4 | content-length-range | range | [51200, 10485760] (50 KB — 10 MB) | Yes | Both min and max should be configurable per module/entity |
| 5 | x-amz-meta-tenant-id | eq | "${tenantId}" | No (from context) | Binds object to tenant |
| 6 | x-amz-meta-author | eq | "${author}" | No (from context) | Audit trail |
| 7 | x-amz-meta-entity-type | eq | "${entityType}" (e.g., "item") | No (from context) | Lifecycle management |
| 8 | x-amz-meta-property-name | eq | "${propertyName}" (e.g., "image") | No (from context) | Multi-property support |

Signature parameters:

| Parameter | Value | Configurable |
| --- | --- | --- |
| signatureDuration | 15 minutes | Yes (constructor parameter) |
| region | Deployment region (e.g., us-east-1) | No (bootstrap) |
| credentials | Default credentials chain | No |

Server-side allow-list (checked before generating the policy):

| Parameter | Value | Configurability |
| --- | --- | --- |
| Content-Type | ["image/jpeg", "image/png", "image/webp"] | Configurable per entity type |
| File size min | 51,200 bytes (50 KB) | Configurable |
| File size max | 10,485,760 bytes (10 MB) | Configurable |

Return value (to the calling endpoint):

| Field | Type | Description |
| --- | --- | --- |
| url | URL | The S3 bucket endpoint URL for the POST |
| formFields | Map<String, String> | All form fields the client must include (policy, signature, key, metadata, etc.) |
| objectKey | String | The full S3 key (needed by the client for the subsequent entity update) |

SPA (arda-frontend-app — Client-Side React)

| # | Requirement | Details |
| --- | --- | --- |
| S-1 | Request upload credentials | When the user selects an image file, POST to /api/items/<itemId>/image/upload-url with the file’s contentType and size (bytes). |
| S-2 | Client-side pre-validation | Before requesting credentials: validate that the file type is in the allow-list (image/jpeg, image/png, image/webp) and that the file size is between 50 KB and 10 MB. Show user-facing error messages for violations. |
| S-3 | Construct multipart form | Build a FormData object from the formFields returned by the server. Append each field in order, then append the file as the last field (S3 requires the file to be the last form field in a POST upload). |
| S-4 | Upload to S3 | POST the FormData to the url returned by the server. Handle: 204 (success), 403 (policy violation — display error), network errors (display retry prompt). |
| S-5 | Update entity | On successful S3 upload, PUT to /api/items/<itemId> with imageKey: objectKey. Handle: 200 (success — update UI with new image), 422 (HEAD check failed — display error indicating upload may not have completed). |
| S-6 | Progress feedback | Show upload progress during Step 2 (S3 POST supports XMLHttpRequest progress events or fetch with ReadableStream). Show a loading indicator during Steps 1 and 3. |
| S-7 | Error recovery | If Step 2 fails, allow the user to retry without re-requesting credentials (the presigned POST is valid for 15 minutes). If Step 3 fails with 422, offer to retry the upload from Step 1. |
BFF (arda-frontend-app — Next.js Proxy)

| # | Requirement | Details |
| --- | --- | --- |
| B-1 | Proxy upload-url request | Route POST /api/items/<itemId>/image/upload-url to operations POST /v1/item/<itemId>/image/upload-url. Forward X-Tenant-ID, X-Author headers from session. Pass contentType and size from the request body. |
| B-2 | Proxy entity update | Route PUT /api/items/<itemId> to operations PUT /v1/item/<itemId>. Forward X-Tenant-ID, X-Author headers. This is the existing entity update route — no new BFF route needed; the imageKey field is part of the standard Item payload. |
| B-3 | No file proxying | The BFF does not proxy the S3 upload (Step 2). The SPA uploads directly to S3 using the presigned POST URL. This avoids file-size-related timeouts and memory pressure on the BFF/Amplify layer. |
| B-4 | Error forwarding | Forward all error responses (400, 404, 409, 422, 500) from operations to the SPA with their original status codes and error bodies. |
Item Module (Operations Microservice)

| # | Endpoint | Method | Path | Request | Response | Errors |
| --- | --- | --- | --- | --- | --- | --- |
| E-1 | Request upload credentials | POST | /v1/item/<itemId>/image/upload-url | Headers: X-Tenant-ID, X-Author. Body: { contentType: string, size: number } | 200 OK { url: string, formFields: Map<string,string>, objectKey: string } | 400 invalid contentType or size; 403 unauthorized; 404 itemId not found |
| E-2 | Update item (existing) | PUT | /v1/item/<itemId> | Headers: X-Tenant-ID, X-Author. Body: Item payload including optional imageKey: string | 200 OK { item } | 422 HEAD check failed (object not found or metadata mismatch); 403 unauthorized; 404 itemId not found |

E-1 is a new endpoint. E-2 is the existing item update endpoint, extended to handle the imageKey field with HEAD validation.

| # | Behavior | Details |
| --- | --- | --- |
| M-1 | Pre-check: authorization | Validate X-Tenant-ID and X-Author against the security context. The caller must have write permission on the target item. |
| M-2 | Pre-check: entity existence | Verify itemId exists in the Item universe for the given tenant. Return 404 if not found. |
| M-3 | Pre-check: Content-Type allow-list | Reject contentType values not in the configurable allow-list. Default: ["image/jpeg", "image/png", "image/webp"]. Return 400 with a descriptive error listing accepted types. |
| M-4 | Pre-check: file size | Reject size values outside the configurable range. Default: 51200—10485760 (50 KB — 10 MB). Return 400 with the allowed range. |
| M-5 | Delegate to S3Access | Call S3Access.generatePresignedPost(...) with the validated parameters. Return the result to the caller. |
| M-6 | HEAD validation on entity update | When an entity update includes an imageKey field, call S3Access.headObject(imageKey) before persisting. Validate: (a) object exists, (b) Content-Type is in the allow-list, (c) x-amz-meta-tenant-id matches the requesting tenant. If any check fails, return 422 Unprocessable Entity with a descriptive error. Do not persist the imageKey. |
| M-7 | Persist imageKey | If HEAD validation passes, persist imageKey on the Item entity. The stored value is the full S3 key (not a URL). Read-time URL resolution is handled by the read path (DQ-006, separate session). |
| M-8 | Configuration | Upload constraints (allow-list, size limits, signature duration) are module-level configuration, injectable via ModuleConfig. They should not be hardcoded. |
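The M-6 HEAD validation reduces to checks over the HeadObject metadata. A minimal sketch using the S3ObjectMetadata shape from the L-7 spec (the validation function name and error strings are illustrative, and the S3 call itself is stubbed out as a nullable parameter):

```kotlin
// Shape from the L-7 spec below.
data class S3ObjectMetadata(
    val contentType: String,
    val contentLength: Long,
    val eTag: String,
    val metadata: Map<String, String>,
)

// M-6: validate an uploaded object before persisting imageKey.
// A null meta stands in for a failed HeadObject (object not found).
fun validateForLink(
    meta: S3ObjectMetadata?,
    requestingTenantId: String,
    allowList: Set<String> = setOf("image/jpeg", "image/png", "image/webp"),
): String? = when {
    meta == null -> "object not found"                       // -> 422
    meta.contentType !in allowList -> "disallowed Content-Type"
    meta.metadata["tenant-id"] != requestingTenantId -> "tenant mismatch"
    else -> null                                             // safe to persist
}
```

A non-null error string maps to the 422 response; null means the imageKey may be persisted (M-7).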

S3 Access Library (common-module — S3BucketAccess)


This is a new abstraction in common-module/lib/src/main/kotlin/cards/arda/common/lib/infra/storage/, parallel to CsvS3BucketDirectAccess (DQ-007).

| # | Requirement | Details |
| --- | --- | --- |
| L-1 | Class: S3BucketAccess | Parametrized at bootstrap with: bucket: String, moduleName: String, region: Region, credentialsProvider: AwsCredentialsProvider, signatureDuration: Duration. Stateless — no entity-specific knowledge. |
| L-2 | Method: generatePresignedPost | Signature: fun generatePresignedPost(tenantId: UUID, entityType: String, propertyName: String, contentType: String, maxSize: Long, minSize: Long, author: String): Result<PresignedPostResult>. Generates UUID, constructs key per DQ-001 format, builds POST policy with all conditions from the specification above, signs with S3Presigner, returns PresignedPostResult(url, formFields, objectKey). |
| L-3 | Method: headObject | Signature: suspend fun headObject(objectKey: String): Result<S3ObjectMetadata>. Calls S3 HeadObject, returns metadata (Content-Type, Content-Length, ETag, user metadata map). Returns Result.failure with AppError.ExternalService if the object does not exist or S3 returns an error. |
| L-4 | Key construction | Private method: fun buildKey(tenantId: UUID, entityType: String, propertyName: String, uuid: UUID, extension: String): String returning "${tenantId}/${moduleName}/${entityType}/${propertyName}/${uuid}.${extension}". The moduleName is fixed at construction (e.g., "operations"). Extension is derived from contentType via a mapping (image/jpeg -> jpg, image/png -> png, image/webp -> webp). |
| L-5 | S3 client configuration | Use S3AsyncClient with chunkedEncodingEnabled(false) for browser compatibility (consistent with CsvS3BucketDirectAccess). Use S3Presigner for POST policy signing. |
| L-6 | Data class: PresignedPostResult | data class PresignedPostResult(val url: URL, val formFields: Map<String, String>, val objectKey: String) |
| L-7 | Data class: S3ObjectMetadata | data class S3ObjectMetadata(val contentType: String, val contentLength: Long, val eTag: String, val metadata: Map<String, String>) |
| L-8 | Common extraction (nice-to-have) | Per DQ-007, extract shared S3 client setup, presigner creation, and headObject logic into a base class or utility shared with CsvS3BucketDirectAccess. This is not blocking for the initial implementation. |
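The L-4 key construction and extension mapping can be sketched as pure functions (shown here standalone; in the real class moduleName is a constructor field and buildKey is private):

```kotlin
import java.util.UUID

// Extension mapping from L-4 (assumed exhaustive for the initial allow-list).
val extensionFor = mapOf(
    "image/jpeg" to "jpg",
    "image/png" to "png",
    "image/webp" to "webp",
)

// DQ-001 key format:
// ${tenantId}/${owning-module}/${entity-type}/${property-name}/${asset-uuid}.${extension}
fun buildKey(
    tenantId: UUID, moduleName: String, entityType: String,
    propertyName: String, uuid: UUID, extension: String,
): String = "$tenantId/$moduleName/$entityType/$propertyName/$uuid.$extension"
```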

S3 Bucket Creation and Configuration (CDK in infrastructure)

| # | Requirement | Details |
| --- | --- | --- |
| C-1 | Bucket: http-assets | Create an S3 bucket named ${environment}-http-assets (per DQ-002). This is the durable bucket for production asset storage. |
| C-2 | CORS configuration | Enable CORS on the bucket to allow POST uploads from the SPA origin(s). Required headers: Content-Type, x-amz-meta-*. Allowed methods: POST, HEAD, GET. Allowed origins: configurable per environment (e.g., https://app.arda.cards for production, http://localhost:3000 for development). |
| C-3 | Lifecycle rules | No automatic deletion (this is the durable bucket). Orphan cleanup is deferred per DQ-008. Consider adding a transition rule to Infrequent Access after 90 days for cost optimization (deferred). |
| C-4 | Encryption | Enable SSE-S3 (AES-256) default encryption. No KMS key required for this use case. |
| C-5 | Public access | Block all public access (BlockPublicAccess.BLOCK_ALL). Objects are accessed via presigned URLs or CloudFront (DQ-006 read path, separate session). |
| C-6 | IAM policy | Grant the operations EKS service account: s3:PutObject, s3:GetObject, s3:HeadObject on arn:aws:s3:::${bucket}/*. Grant s3:GetBucketLocation on the bucket itself (required for presigning). |
| C-7 | Bucket ARN export | Export the bucket ARN as a CDK/CloudFormation output so the operations service can resolve it at startup (consistent with how the CSV upload bucket ARN is passed via bucketArn in the Item Module). |
| C-8 | Bucket: ephemeral-uploads | Per DQ-002, this bucket already exists with a 1-day TTL lifecycle rule. No changes needed for this feature. Confirm it is not used in the image upload workflow (images go directly to http-assets). |

Copyright: (c) Arda Systems 2025-2026, All rights reserved