
Implementation Changes

This document provides step-by-step implementation instructions for the Arda-cards/skills-mcp repository (greenfield). All file paths are relative to the repository root.

Open questions:

1. Exact McpAgent API for accessing auth headers from the initial SSE connection
   Options: (a) Override lifecycle method, (b) Access via this.request, (c) Forward as query param from fetch handler
   Recommendation: Investigate during implementation — the agents npm package API may vary by version
   Decision: Open
2. Durable Objects config in wrangler.toml
   Options: (a) Explicit [durable_objects] bindings, (b) Inferred by McpAgent.mount()
   Recommendation: Check agents package docs — may auto-configure
   Decision: Open
3. Test approach for Cloudflare Worker-specific code
   Options: (a) Vitest with miniflare, (b) Unit test pure logic only, mock Worker APIs
   Recommendation: (b) for v1 — test auth, github client, cache logic, path validation as pure functions; integration test via manual deploy
   Decision: Open

All boilerplate files follow the hypothesis-mcp patterns.

Create package.json:

{
  "name": "@arda-cards/skills-mcp",
  "version": "1.0.0",
  "private": true,
  "description": "MCP server serving skills, agent profiles, and documentation from GitHub repositories",
  "type": "module",
  "scripts": {
    "build": "wrangler deploy --dry-run --outdir=dist",
    "dev": "wrangler dev",
    "deploy": "wrangler deploy",
    "test": "vitest run",
    "test:watch": "vitest",
    "lint": "eslint src tests scripts",
    "format": "prettier --write src tests scripts",
    "typecheck": "tsc --noEmit",
    "generate-indexes": "mkdir -p src/generated && node scripts/build-skills-agents-index.js && node scripts/build-docs-index.js"
  },
  "simple-git-hooks": {
    "pre-commit": "npx lint-staged"
  },
  "lint-staged": {
    "*.ts": [
      "prettier --write",
      "eslint --fix"
    ]
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/Arda-cards/skills-mcp.git"
  },
  "license": "MIT",
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.12.1",
    "agents": "^0.0.74",
    "zod": "^3.23.8"
  },
  "devDependencies": {
    "@cloudflare/workers-types": "^4.20250620.0",
    "@types/node": "^22.10.0",
    "eslint": "^9.27.0",
    "eslint-plugin-unicorn": "^59.0.1",
    "js-yaml": "^4.1.0",
    "@types/js-yaml": "^4.0.9",
    "lint-staged": "^16.0.0",
    "prettier": "^3.5.3",
    "simple-git-hooks": "^2.12.1",
    "typescript": "^5.8.3",
    "typescript-eslint": "^8.32.1",
    "vitest": "^3.1.4",
    "wrangler": "^4.0.0"
  }
}

Note: agents package version should be verified against latest at implementation time. The js-yaml dependency is used only by build scripts but listed in devDependencies.

Create tsconfig.json:

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "ESNext",
    "moduleResolution": "bundler",
    "outDir": "dist",
    "rootDir": "src",
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "exactOptionalPropertyTypes": true,
    "noImplicitOverride": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "lib": ["ES2022"],
    "types": ["@cloudflare/workers-types"],
    "resolveJsonModule": true
  },
  "include": ["src"]
}

Create tsconfig.eslint.json:

{
  "extends": "./tsconfig.json",
  "include": ["src", "tests"]
}

Note: Uses bundler module resolution (Cloudflare Workers use esbuild), not NodeNext.

Task 1.3: Linting, Formatting, Testing Config


Create .prettierrc — identical to hypothesis-mcp:

{
  "singleQuote": true,
  "semi": false,
  "trailingComma": "all"
}

Create eslint.config.js — identical to hypothesis-mcp:

import tseslint from 'typescript-eslint'
import unicorn from 'eslint-plugin-unicorn'

export default tseslint.config(
  ...tseslint.configs.strictTypeChecked,
  {
    languageOptions: {
      parserOptions: {
        project: './tsconfig.eslint.json',
      },
    },
    plugins: {
      unicorn,
    },
    rules: {
      'unicorn/prefer-node-protocol': 'error',
      'unicorn/no-nested-ternary': 'error',
      'unicorn/prefer-module': 'error',
      'unicorn/throw-new-error': 'error',
    },
  },
  {
    ignores: ['dist/', 'eslint.config.js', 'vitest.config.ts', 'scripts/'],
  },
)

Create vitest.config.ts — identical to hypothesis-mcp:

import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    include: ['tests/**/*.test.ts'],
  },
})

Create wrangler.toml — as specified:

name = "claude-context-mcp"
main = "src/index.ts"
compatibility_date = "2025-06-01"
compatibility_flags = ["nodejs_compat"]
[vars]
REPO = "Arda-cards/agentic-workspace"
SKILLS_PATH = "instructions/claude/skills"
AGENTS_PATH = "instructions/claude/agents"
BRANCH = "main"
DOCS_REPO = "Arda-cards/documentation"
DOCS_CONTENT_PATH = "src/content/docs"
DOCS_BRANCH = "main"

Durable Objects config TBD — depends on McpAgent requirements (see Open Question #2).

Replace .gitignore with project-specific entries:

node_modules/
dist/
.wrangler/
src/generated/
*.tsbuildinfo
.env
.env.*
!.env.example
.dev.vars
scratch/

Key addition: src/generated/ is gitignored — populated by build scripts.

Create CHANGELOG.md:

# Changelog
## [Unreleased]
### Added
- Initial MCP server with 7 tools: list_skills, get_skill, get_skill_file, list_agents, get_agent, list_docs, get_doc
- Bundled indexes for list_* tools (zero runtime GitHub API calls)
- URL-safe Base64 token authentication
- 10-minute cache with skipCache bypass on get_* tools
- Path traversal validation
- CI pipeline: build on merge, weekly Saturday schedule, manual dispatch

Create .github/clq/changemap.json — identical to hypothesis-mcp:

[
  { "name": "Added", "increment": "minor", "emoji": "" },
  { "name": "Changed", "increment": "major", "emoji": "💥" },
  { "name": "Deprecated", "increment": "minor", "emoji": "👎" },
  { "name": "Fixed", "increment": "patch", "emoji": "🐛" },
  { "name": "Removed", "increment": "major", "emoji": "🗑️" },
  { "name": "Security", "increment": "patch", "emoji": "🔒" }
]

Create scripts/ensure-generated.js:

A script that creates placeholder index files for local development when src/generated/ doesn’t exist.

import { mkdirSync, existsSync, writeFileSync } from 'node:fs'

const dir = 'src/generated'
if (!existsSync(dir)) mkdirSync(dir, { recursive: true })

const placeholders = {
  'skills-index.json': '[]',
  'agents-index.json': '[]',
  'docs-index.json': '{}',
}

for (const [file, content] of Object.entries(placeholders)) {
  const path = `${dir}/${file}`
  if (!existsSync(path)) {
    writeFileSync(path, content)
    console.log(`Created placeholder: ${path}`)
  }
}

Create src/types.ts:

export interface Env {
  REPO: string
  SKILLS_PATH: string
  AGENTS_PATH: string
  BRANCH: string
  DOCS_REPO: string
  DOCS_CONTENT_PATH: string
  DOCS_BRANCH: string
}

export interface GitHubContentResponse {
  name: string
  path: string
  sha: string
  size: number
  content: string // Base64-encoded
  encoding: string // "base64"
  type: 'file'
}

export interface SkillIndexEntry {
  name: string
  description: string | null
  argumentHint: string | null
  userInvocable: boolean
  files: string[]
}

export interface AgentIndexEntry {
  name: string
  description: string | null
  model: string | null
  allowedTools: string[] | null
}

export interface McpErrorDetails {
  repo?: string
  path?: string
  branch?: string
  githubStatus?: number
}

export type ErrorCode =
  | 'UNAUTHORIZED'
  | 'FORBIDDEN'
  | 'NOT_FOUND'
  | 'UPSTREAM_ERROR'
  | 'INVALID_PATH'

Create src/auth.ts:

Responsibility: extract and decode Bearer token from Authorization header.

Functions:

  • decodeToken(authHeader: string | null): string — extracts Bearer value, decodes URL-safe Base64, returns GitHub PAT. Throws on missing/invalid.

URL-safe Base64 decoding: replace - with + and _ with /, then atob().
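A minimal sketch of decodeToken under those rules. It throws plain Errors here; the real implementation may throw a structured MCP error with code UNAUTHORIZED instead.

```typescript
// Sketch of src/auth.ts: extract and decode a URL-safe Base64 Bearer token.
export function decodeToken(authHeader: string | null): string {
  if (authHeader === null || !authHeader.startsWith('Bearer ')) {
    throw new Error('UNAUTHORIZED: missing or malformed Authorization header')
  }
  const encoded = authHeader.slice('Bearer '.length).trim()
  // URL-safe Base64 -> standard Base64, then restore '=' padding.
  let b64 = encoded.replace(/-/g, '+').replace(/_/g, '/')
  while (b64.length % 4 !== 0) b64 += '='
  try {
    return atob(b64)
  } catch {
    throw new Error('UNAUTHORIZED: token is not valid Base64')
  }
}
```

Restoring the `=` padding explicitly keeps the decoder independent of whether the runtime's atob tolerates unpadded input.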

Create src/validation.ts:

Responsibility: validate path segments against traversal attacks.

Functions:

  • validatePathSegment(segment: string): void — rejects .., leading /, and \.
  • validatePathSegments(segments: string[]): void — validates each segment in an array.
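A sketch of both validators, assuming plain Errors stand in for the structured INVALID_PATH MCP error:

```typescript
// Sketch of src/validation.ts: reject path-traversal attempts.
export function validatePathSegment(segment: string): void {
  if (
    segment.includes('..') ||
    segment.startsWith('/') ||
    segment.includes('\\')
  ) {
    throw new Error(`INVALID_PATH: illegal path segment: ${segment}`)
  }
}

export function validatePathSegments(segments: string[]): void {
  for (const segment of segments) validatePathSegment(segment)
}
```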

Create src/github.ts:

Responsibility: fetch files from GitHub Contents API.

Functions:

  • fetchGitHubContent(token: string, repo: string, path: string, branch: string): Promise<GitHubContentResponse> — makes authenticated GET request, handles errors, returns parsed JSON.
  • fetchGitHubContentOrNull(...) — same but returns null on 404 (for get_doc fallback).
  • decodeGitHubContent(response: GitHubContentResponse): string — decodes Base64 content to UTF-8 string.
  • mapGitHubError(status: number, repo: string, path: string, branch: string): McpError — maps GitHub HTTP status to MCP error with details.
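A sketch of the GitHub client. The buildContentsUrl helper is illustrative (not named in the spec), the interface mirrors src/types.ts, and errors are plain Errors here rather than structured MCP errors; the request headers follow GitHub REST API conventions.

```typescript
// Sketch of src/github.ts: fetch and decode files via the Contents API.
export interface GitHubContentResponse {
  name: string
  path: string
  sha: string
  size: number
  content: string // Base64-encoded, wrapped with newlines by GitHub
  encoding: string // "base64"
  type: 'file'
}

// Illustrative helper: builds the Contents API URL for a file at a branch.
export function buildContentsUrl(repo: string, path: string, branch: string): string {
  return `https://api.github.com/repos/${repo}/contents/${path}?ref=${branch}`
}

export async function fetchGitHubContent(
  token: string,
  repo: string,
  path: string,
  branch: string,
): Promise<GitHubContentResponse> {
  const response = await fetch(buildContentsUrl(repo, path, branch), {
    headers: {
      Authorization: `Bearer ${token}`,
      Accept: 'application/vnd.github+json',
      'User-Agent': 'skills-mcp',
    },
  })
  if (!response.ok) {
    throw new Error(
      `UPSTREAM_ERROR: GitHub returned ${response.status} for ${repo}/${path}@${branch}`,
    )
  }
  return (await response.json()) as GitHubContentResponse
}

export function decodeGitHubContent(response: GitHubContentResponse): string {
  // GitHub wraps Base64 payloads with newlines; strip them, then decode
  // the resulting byte string as UTF-8.
  const binary = atob(response.content.replace(/\n/g, ''))
  return new TextDecoder().decode(Uint8Array.from(binary, (c) => c.charCodeAt(0)))
}
```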

Create src/cache.ts:

Responsibility: wrap Cloudflare Cache API for tool responses.

Functions:

  • getCached(cacheKey: string): Promise<Response | null> — checks cache, returns hit or null.
  • setCached(cacheKey: string, response: Response, ttlSeconds: number): Promise<void> — stores in cache.
  • buildCacheKey(tool: string, repo: string, branch: string, path: string): string — constructs cache key URL.

The cache key is a synthetic URL (e.g., https://cache.internal/{tool}/{repo}/{branch}/{path}) since Cloudflare Cache API requires URL keys.

TTL: 10 minutes (600 seconds).
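A sketch of the cache wrapper under those constraints. The caches.default binding only exists in the Workers runtime, so it is declared here to keep the sketch self-contained; the Cache-Control handling is one plausible way to enforce the TTL at the edge.

```typescript
// Sketch of src/cache.ts over the Cloudflare Cache API.
// `caches.default` is provided by the Workers runtime; declared here
// only so this sketch compiles standalone.
declare const caches: {
  default: {
    match(request: Request): Promise<Response | undefined>
    put(request: Request, response: Response): Promise<void>
  }
}

export const CACHE_TTL_SECONDS = 600 // 10 minutes

export function buildCacheKey(
  tool: string,
  repo: string,
  branch: string,
  path: string,
): string {
  // The Cache API requires URL keys, hence the synthetic internal origin.
  return `https://cache.internal/${tool}/${repo}/${branch}/${path}`
}

export async function getCached(cacheKey: string): Promise<Response | null> {
  const hit = await caches.default.match(new Request(cacheKey))
  return hit ?? null
}

export async function setCached(
  cacheKey: string,
  response: Response,
  ttlSeconds: number = CACHE_TTL_SECONDS,
): Promise<void> {
  // Attach an explicit Cache-Control header so the edge honors the TTL.
  const cacheable = new Response(response.body, response)
  cacheable.headers.set('Cache-Control', `s-maxage=${ttlSeconds}`)
  await caches.default.put(new Request(cacheKey), cacheable)
}
```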

Create src/index.ts:

This is the main file. It:

  1. Imports bundled indexes from ./generated/
  2. Defines the SkillsMcpAgent class extending McpAgent
  3. Registers all 7 tools in init()
  4. Exports the default fetch handler

Tool registrations:

Tool registrations:

• list_skills: no parameters. Handler: return { skills: skillsIndex }.
• get_skill: { name: z.string(), skipCache: z.boolean().optional() }. Handler: validate name, fetch from GitHub, cache.
• get_skill_file: { skill: z.string(), file: z.string(), skipCache: z.boolean().optional() }. Handler: validate skill+file, fetch, return Base64.
• list_agents: no parameters. Handler: return { agents: agentsIndex }.
• get_agent: { name: z.string(), skipCache: z.boolean().optional() }. Handler: validate name, fetch from GitHub, cache.
• list_docs: no parameters. Handler: return { index: docsIndex }.
• get_doc: { path: z.array(z.string()), skipCache: z.boolean().optional() }. Handler: validate segments, try index.md then *.md, cache.

Each get_* handler follows this pattern:

  1. Extract auth token from connection context
  2. Validate path parameters
  3. Check cache (unless skipCache)
  4. Fetch from GitHub
  5. Cache the response
  6. Return MCP tool result

Create scripts/build-skills-agents-index.js:

Full implementation provided in specification.md — Build Process. Key points:

  • Uses js-yaml load() for frontmatter parsing
  • Extracts name, description, argument-hint, user-invocable for skills
  • Builds files manifest via recursive directory scan (excluding SKILL.md)
  • Extracts name, description, model, allowedTools for agents
  • Graceful fallback: missing frontmatter → { name: filename, description: null }
  • Writes to src/generated/skills-index.json and src/generated/agents-index.json
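A simplified sketch of the frontmatter extraction and fallback behavior. The real script uses js-yaml's load(); this stand-in only handles flat `key: value` pairs, just enough to show the graceful-fallback rule.

```typescript
// Simplified frontmatter extraction (real script uses js-yaml load()).
export function extractFrontmatter(markdown: string): Record<string, string> {
  const match = /^---\n([\s\S]*?)\n---/.exec(markdown)
  if (!match) return {}
  const fields: Record<string, string> = {}
  for (const line of (match[1] ?? '').split('\n')) {
    const idx = line.indexOf(':')
    if (idx === -1) continue
    fields[line.slice(0, idx).trim()] = line.slice(idx + 1).trim()
  }
  return fields
}

export function toSkillEntry(
  fileName: string,
  markdown: string,
): { name: string; description: string | null } {
  const fm = extractFrontmatter(markdown)
  // Graceful fallback: missing frontmatter -> { name: filename, description: null }
  return {
    name: fm['name'] ?? fileName,
    description: fm['description'] ?? null,
  }
}
```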

Create scripts/build-docs-index.js:

Full implementation provided in specification.md — Build Process. Key points:

  • Uses js-yaml load() for frontmatter parsing
  • Processes both .md and .mdx files
  • Builds hierarchical JSON tree
  • Writes to src/generated/docs-index.json

Create tests/auth.test.ts:

Tests for decodeToken():

  • AUTH-T01: Decodes valid URL-safe Base64 token
  • AUTH-T02: Handles -_ characters correctly
  • AUTH-T03: Throws on missing Authorization header
  • AUTH-T04: Throws on non-Bearer prefix
  • AUTH-T05: Throws on invalid Base64

Create tests/validation.test.ts:

Tests for validatePathSegment():

  • SEC-T01 through SEC-T05: .., absolute paths, backslashes, doc path segments

Create tests/github.test.ts:

Mock fetch globally. Tests for:

  • SKILL-T03/T04: Correct URL construction, content decoding
  • AUTH-T06/T07/T08: Error propagation for 401/403/404
  • ERR-T01/T02: Structured error responses with details
  • DOC-T03/T04/T05: index.md fallback logic

Create tests/tools.test.ts:

Tests for the tool handler logic (extracted as pure functions for testability):

  • SKILL-T01/T02: list_skills returns bundled index, no fetch
  • AGENT-T01/T02: list_agents returns bundled index, no fetch
  • DOC-T01/T02: list_docs returns bundled index, no fetch
  • SKILL-T07/T08: get_skill_file returns Base64 with encoding and size

Create tests/cache.test.ts:

Mock Cache API. Tests for:

  • SKILL-T05/T06: Cache hit, cache miss, skipCache bypass
  • Cache key construction

Create .github/workflows/publish.yml:

Combines patterns from hypothesis-mcp (changelog validation, build-and-test) with the index generation and Cloudflare deploy steps from specification.md — CI Pipeline.

Three triggers: push to main, Saturday cron, workflow_dispatch.

Jobs:

  1. validate-changelog — clq validation (same as hypothesis-mcp)
  2. build-and-test — typecheck, lint, test (same as hypothesis-mcp pattern)
  3. generate-and-deploy — clone source repos, generate indexes, wrangler deploy (only on main push, schedule, or dispatch)

Create .github/workflows/pull_request_upkeep.yml — identical to hypothesis-mcp:

---
name: "Pull Request Upkeep"
on:
  pull_request:
    types: [opened]
permissions: {}
jobs:
  upkeep:
    runs-on: ubuntu-latest
    steps:
      - uses: Arda-cards/pull-request-setup-action@v1
        with:
          iteration_field_name: "Iteration"
          project_title: "${{ vars.ARDA_CLOUD_PROJECT_NAME }}"
          pull_request_number: "${{ github.event.pull_request.number }}"
          token: "${{ secrets.ARDA_GH_ACTION_PROJECT_WRITER }}"

Task 6.1: README.md (Installation and Usage Guide)


Create README.md:

The repository README serves as the installation and usage guide for the MCP server. It must cover:

Sections:

  1. Overview — one-paragraph description of what the server does and who it is for.

  2. Prerequisites — what the reader needs before starting:

    • A Cloudflare account (free tier is sufficient)
    • A GitHub Fine-grained PAT with contents: read on Arda-cards/agentic-workspace and Arda-cards/documentation
    • Node.js 22+
    • wrangler CLI (installed as devDependency)
  3. Setup — step-by-step instructions:

    • Clone the repository
    • npm ci
    • Create .dev.vars with a test PAT for local development
    • Run npm run generate-indexes to create placeholder indexes for local dev (or run the full generation scripts against local clones of source repos)
    • npm run dev to start the local dev server
  4. Configuration — table of wrangler.toml environment variables with descriptions (reference the specification).

  5. Authentication — full walkthrough matching the specification’s Usage section:

    • Step 1: Create a GitHub PAT — link to fine-grained token page, exact permissions (Contents: Read-only on Arda-cards/agentic-workspace and Arda-cards/documentation), rotation reminder
    • Step 2: Encode as URL-safe Base64 — macOS/Linux command (echo -n "$PAT" | base64 | tr '+/' '-_' | tr -d '='), verification round-trip command
    • Step 3: Configure MCP clients — both claude.ai and Claude Code setup
  6. Available Tools — brief table of all 7 tools with parameters and descriptions. Link to specification for full details.

  7. Connecting to claude.ai — step-by-step:

    • Add MCP connector with the Worker URL and encoded token
    • Add the recommended Project system prompt (from specification)
    • Add the memory nudge (from specification)
  8. Connecting to Claude Code — the mcp.json configuration block.

  9. Development — commands reference:

    • npm run dev — local development server
    • npm run test — run unit tests
    • npm run lint — lint check
    • npm run typecheck — TypeScript check
    • npm run generate-indexes — regenerate bundled indexes locally
    • npm run deploy — deploy to Cloudflare (requires CLOUDFLARE_API_TOKEN)
  10. CI/CD — brief explanation of the three deployment triggers and what happens on each.

  11. License — MIT
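The encode-and-verify round-trip from Step 2 of the Authentication walkthrough might look like this in a shell (the token value is a placeholder; base64 -d is the GNU coreutils decode flag, spelled -D on older macOS):

```shell
# Encode a PAT as URL-safe Base64 (placeholder token value).
PAT="ghp_exampletoken"
ENCODED=$(printf '%s' "$PAT" | base64 | tr '+/' '-_' | tr -d '=')
echo "$ENCODED"

# Verification round-trip: undo the character swap, restore '=' padding, decode.
STANDARD=$(printf '%s' "$ENCODED" | tr '-_' '+/')
case $(( ${#STANDARD} % 4 )) in
  2) STANDARD="${STANDARD}==" ;;
  3) STANDARD="${STANDARD}=" ;;
esac
DECODED=$(printf '%s' "$STANDARD" | base64 -d)
echo "$DECODED"
```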

After the initial deployment reaches main, configure branch protection matching hypothesis-mcp:

  • Require PR reviews before merging
  • Require status checks to pass (validate-changelog, build-and-test)
  • No direct pushes to main

Manual verification checklist:

  • Connect to the deployed Worker URL with an MCP client
  • Call list_skills — verify bundled index returns
  • Call get_skill("kotlin-coding") — verify live GitHub fetch
  • Call get_skill_file("agentation-feedback", "references/schema-v2.md") — verify Base64 return
  • Call list_agents — verify bundled index with model/allowedTools
  • Call get_agent("backend-engineer") — verify live fetch
  • Call list_docs — verify hierarchical index
  • Call get_doc(["architecture"]) — verify index.md fallback
  • Test path traversal rejection: get_skill_file("..", "../settings.json")
  • Test missing token: call without Authorization header
  • Test skipCache: call get_skill twice, second with skipCache: true

Single directory — no worktrees needed. This is a greenfield project in its own repository with a single implementing agent.


File map (phase and task for each file):

• package.json (Phase 1, Task 1.1)
• tsconfig.json (Phase 1, Task 1.2)
• tsconfig.eslint.json (Phase 1, Task 1.2)
• .prettierrc (Phase 1, Task 1.3)
• eslint.config.js (Phase 1, Task 1.3)
• vitest.config.ts (Phase 1, Task 1.3)
• wrangler.toml (Phase 1, Task 1.4)
• .gitignore (Phase 1, Task 1.5)
• CHANGELOG.md (Phase 1, Task 1.6)
• .github/clq/changemap.json (Phase 1, Task 1.6)
• scripts/ensure-generated.js (Phase 1, Task 1.7)
• src/types.ts (Phase 2, Task 2.1)
• src/auth.ts (Phase 2, Task 2.2)
• src/validation.ts (Phase 2, Task 2.3)
• src/github.ts (Phase 2, Task 2.4)
• src/cache.ts (Phase 2, Task 2.5)
• src/index.ts (Phase 2, Task 2.6)
• scripts/build-skills-agents-index.js (Phase 3, Task 3.1)
• scripts/build-docs-index.js (Phase 3, Task 3.2)
• tests/auth.test.ts (Phase 4, Task 4.1)
• tests/validation.test.ts (Phase 4, Task 4.2)
• tests/github.test.ts (Phase 4, Task 4.3)
• tests/tools.test.ts (Phase 4, Task 4.4)
• tests/cache.test.ts (Phase 4, Task 4.5)
• .github/workflows/publish.yml (Phase 5, Task 5.1)
• .github/workflows/pull_request_upkeep.yml (Phase 5, Task 5.2)
• README.md (Phase 6, Task 6.1)