The Copass SDK is an open-source family of packages — one for each major agent framework — plus a standalone MCP server, a create-copass-agent scaffold, and a shared core client. Every package gives the LLM window-aware retrieval over your knowledge graph with a couple lines of wiring.

copass-harness on GitHub

Source, READMEs, and release notes for every package below.

Pick your path

Every integration is thin glue over the same HTTP API. Pick based on the stack you already use.
Want a hosted agent with OAuth’d integrations? Use the Agent Router — one import, one call to router.integrations.connect('github', …), one call to router.run({ provider, model, … }). Swap between Anthropic and Google on a per-call flag.
Surface      Package
TypeScript   @copass/agent-router
Python       copass-agent-router
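To make those two calls concrete, here is a hedged sketch of the hosted-agent flow. The `AgentRouter` class name, constructor options, and the arguments beyond `provider` and `model` are illustrative assumptions; the @copass/agent-router README is authoritative.

```typescript
import { AgentRouter } from '@copass/agent-router';

// Hypothetical constructor shape; check the package README for real options.
const router = new AgentRouter({
  apiKey: process.env.COPASS_API_KEY!,
});

// One call to link a hosted OAuth integration...
await router.integrations.connect('github', {
  // ...connect options are illustrative placeholders
});

// ...and one call to run. Swap providers with a per-call flag.
const result = await router.run({
  provider: 'anthropic', // or 'google'
  model: 'claude-opus-4-7',
  prompt: 'summarize open PRs in my repo',
});
```

The same two-call shape applies in the Python mirror, copass-agent-router.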
Building with an LLM framework and running the loop yourself? Use an adapter — the model picks between discover / interpret / search on each turn, and retrieval is window-aware automatically.
Framework               Package
Vercel AI SDK           @copass/ai-sdk
LangChain / LangGraph   @copass/langchain
Mastra                  @copass/mastra
Pydantic AI (Python)    copass-pydantic-ai
On the Anthropic-managed stack? Use MCP — zero code, one config line.
Client                                                     Package
Claude Code · Claude Desktop · Cursor · Claude Agent SDK   @copass/mcp
Starting from zero? Scaffold a ready-to-deploy Hono server + Claude agent with an embedded chat UI:
npx create-copass-agent my-app
See create-copass-agent. Going lower level? Talk to the API directly:
Use case                                                          Package
Retrieval, ingestion, Context Window, sandbox/source management   @copass/core
Filesystem → knowledge graph watcher driver                       @copass/datasource-fs

Fastest start — two minutes to a running chat UI

1

Bootstrap Copass (once)

npm install -g @copass/cli
copass login                             # email OTP
copass setup                             # creates a sandbox, writes .olane/refs.json
copass apikey create --name my-app       # prints an olk_... key — shown once, save it
copass ingest README.md                  # something real to retrieve
2

Scaffold the agent

npx create-copass-agent my-app
cd my-app
# .env has COPASS_SANDBOX_ID + COPASS_PROJECT_ID auto-filled from
# ../.olane/refs.json. Paste the two secrets by hand:
echo "COPASS_API_KEY=olk_your_key_from_step_1" >> .env
echo "ANTHROPIC_API_KEY=sk-ant-your-key" >> .env
pnpm install
pnpm dev
3

Chat

Open http://localhost:3000. You get an embedded chat UI wired to a Hono server running the Claude Agent SDK with @copass/mcp exposing the full Copass retrieval surface as tools. Every turn is window-aware.
The scaffold is ~150 lines across four files. Everything is editable — swap the model, tweak the system prompt, wire a different UI (Assistant UI is a drop-in upgrade). See the package README for the production-limitations checklist.

Direct SDK — your own framework

The Vercel AI SDK pattern below lives inside a normal generateText call. Two commented blocks show exactly which four lines are Copass-specific — the rest is vanilla Vercel AI SDK that any LLM-app developer already writes.
import { CopassClient } from '@copass/core';
import { copassTools } from '@copass/ai-sdk';
import { generateText } from 'ai';
import { anthropic } from '@ai-sdk/anthropic';

// ── Copass (these four lines are the entire integration) ──
const copass = new CopassClient({
  auth: { type: 'api-key', key: process.env.COPASS_API_KEY! },
});
const window = await copass.contextWindow.create({
  sandbox_id: process.env.COPASS_SANDBOX_ID!,
});

// ── Standard Vercel AI SDK call. Only `tools: copassTools(...)` is new ──
const { text } = await generateText({
  model: anthropic('claude-opus-4-7'),
  tools: copassTools({ client: copass, sandbox_id: window.sandboxId, window }),
  maxSteps: 5,
  prompt: 'why is checkout flaky?',
});

console.log(text);
What Copass is actually doing here:
  • new CopassClient({ auth }) — authenticates against the Copass REST API.
  • contextWindow.create(...) — opens an ephemeral data source for this conversation. Retrieval will know about turns inside this window.
  • copassTools({ ... }) — returns three tools (discover, interpret, search) that Claude can decide when to call on its own. Each one hits Copass’s knowledge graph with the window attached.
Everything else — generateText, model:, maxSteps:, prompt: — is vanilla Vercel AI SDK. If you already build with the AI SDK, you’re adding four Copass lines and one tools entry.
For multi-turn chat where retrieval should know what prior turns surfaced, wrap with createWindowTracker — it auto-mirrors the conversation into the window so turn 2 excludes items turn 1 already delivered. See the @copass/ai-sdk README for the ~3-line upgrade.
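A minimal self-contained sketch of the behavior the tracker provides: a record of what earlier turns surfaced, used to exclude already-delivered items from later results. This is a conceptual illustration only, not the `createWindowTracker` implementation, and the `Item` shape is assumed.

```typescript
// Conceptual sketch of window-aware dedupe across turns.
type Item = { id: string; text: string };

class WindowMirror {
  private delivered = new Set<string>();

  // Record what a turn surfaced, so later turns can exclude it.
  record(items: Item[]): void {
    for (const item of items) this.delivered.add(item.id);
  }

  // Filter a fresh result set down to items not yet delivered.
  exclude(items: Item[]): Item[] {
    return items.filter((item) => !this.delivered.has(item.id));
  }
}

const mirror = new WindowMirror();
mirror.record([{ id: 'a', text: 'checkout retries on timeout' }]);

// Turn 2: the same hit 'a' comes back, but only 'b' is new.
const turn2 = mirror.exclude([
  { id: 'a', text: 'checkout retries on timeout' },
  { id: 'b', text: 'payment gateway timeout config' },
]);
console.log(turn2.map((i) => i.id)); // only the not-yet-delivered item
```

In the real adapter, mirroring into the Context Window happens server-side; the tracker just wires your conversation into it.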
Equivalent snippets for LangChain, Mastra, and Pydantic AI live in each package’s README.

MCP path — zero code

Drop this into your MCP client’s config (Claude Code .mcp.json, Claude Desktop’s claude_desktop_config.json, Cursor’s ~/.cursor/mcp.json, or your own Claude Agent SDK app):
{
  "mcpServers": {
    "copass": {
      "command": "npx",
      "args": ["-y", "@copass/mcp"],
      "env": {
        "COPASS_API_KEY": "olk_... (from `copass apikey create`)",
        "COPASS_SANDBOX_ID": "sb_... (from .olane/refs.json)"
      }
    }
  }
}
Restart the client. Claude now has discover, interpret, search, context_window_*, and ingest as first-class tools — automatically window-aware when a Context Window is active. Detailed setup for each client: @copass/mcp README.

Core primitives

Every integration surfaces the same three:
  • Sandbox — your tenancy boundary. See Secure Storage.
  • Context Window — an agent conversation wrapped as an ephemeral data source. Retrieval is automatically window-aware across turns. See Window-Aware Retrieval.
  • Retrieval gradient — discover (ranked menu) → interpret (brief) → search (synthesized answer). See Progressive Disclosure.
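Read as code, the gradient is three escalating calls. The sketch below uses an assumed `Retrieval` interface to show the shape of the escalation; the real tool signatures live in each adapter's README.

```typescript
// Assumed method shapes for illustration only.
interface Retrieval {
  discover(q: string): Promise<{ id: string; title: string }[]>; // ranked menu
  interpret(id: string): Promise<string>;                        // brief on one item
  search(q: string): Promise<string>;                            // synthesized answer
}

// Escalate only as far as needed: cheap menu first, brief next,
// full synthesis as the fallback.
async function progressiveLookup(r: Retrieval, q: string): Promise<string> {
  const menu = await r.discover(q);
  if (menu.length === 0) return r.search(q);
  return r.interpret(menu[0].id);
}
```

In the framework adapters the model itself decides which rung to call on each turn; this sketch just makes the ordering explicit.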

All packages

Package                   Purpose
@copass/agent-router      Hosted agent runtime — one-call OAuth integrations + provider-neutral event stream
copass-agent-router       Python mirror of @copass/agent-router
copass-core-agents        Provider-neutral agent ABCs (BaseAgent, AgentBackend, events) — zero vendor deps
copass-anthropic-agents   Claude Managed Agents backend — in-process or routed
copass-google-agents      Vertex AI Agent Engine backend — in-process or routed
@copass/core              Client SDK — auth, retrieval, Context Window, sandboxes, sources, ingest
@copass/config            Shared tool descriptions + system prompts (consumed by adapters)
@copass/ai-sdk            Vercel AI SDK tool adapter
@copass/langchain         LangChain / LangGraph tool adapter
@copass/mastra            Mastra tool adapter
@copass/mcp               Standalone MCP server
create-copass-agent       npx scaffold
@copass/datasource-fs     Filesystem → knowledge graph watcher
copass-pydantic-ai        Pydantic AI tool adapter + httpx client (Python)

Explore further

Each package ships its own README with install, quickstart, full tool surface, and a production-limitations section. The fastest way in is the repo's root README, which doubles as a directory: every typescript/packages/*/ and python/*/ subdir has its own deep-dive.
  • Agent Router — hosted agent runtime with one-call OAuth integrations
  • Portable Context — why the runtime is swappable in the first place
  • CLI — the copass command (published as @copass/cli)
  • Claude Code — install the MCP server in Claude Code specifically
  • MCP Server — MCP setup patterns
  • Secure Storage — the containment model under every SDK
  • API Reference — raw HTTP endpoints underneath every SDK