
Clasper Docs

Production Agent Runtime with Governance & Observability

Clasper is an API-first, stateless agent execution platform designed for production SaaS integration. It enables workspace-driven agents with traceable behavior, governance controls, role-based access, cost/risk guards, and operational visibility — suitable for multi-tenant backends and audit-intensive environments.

“AI agents are not demos. They are production systems.”

| Pillar | Description |
| --- | --- |
| Agent Observability | Full execution traces with replay, diff, annotations & retention policies |
| Skill Runtime | Versioned YAML manifests with lifecycle states and testing |
| Governance & Safety | Tenant isolation, permissions, audit logs, redaction, risk scoring |
| Provider Abstraction | Normalized interface across LLM providers |
| Operational Tooling | Budget controls, cost forecasting, workspace pinning, environments, impact analysis |
Clasper is:

  • Stateless HTTP runtime for agent executions
  • Workspace-driven prompt config (SOUL.md, AGENTS.md, HEARTBEAT.md, skills)
  • Multi-tenant context isolation with per-user scoping
  • Skill registry with versioning, lifecycle states, and testing
  • Observable and explainable traces with diff, replay, and annotations
  • Operational guardrails (RBAC, budgets, risk scoring, audit logs)
  • Control Plane Contract for portable backend integration
  • Smart context selection (optional relevance-based skills + memory)
Clasper is not:

  • A daemon for OS/browser automation (no shell access, no file system)
  • A personal agent chatbot (designed for backend integration, not direct chat)
  • A general automation framework (stateless, no persistent sessions)
  • A replacement for your backend (your system remains the source of truth)

Clasper has a bidirectional relationship with your SaaS backend:

┌──────────────┐                              ┌──────────────┐
│     Your     │ ───── (1) send message ────▶ │   Clasper    │ ──▶ LLM
│   Backend    │                              │   Runtime    │
│              │ ◀─── (2) agent calls APIs ── │              │
└──────────────┘                              └──────────────┘
       │                                             │
       │ Source of truth:                            │ Stateless:
       │ • Users, auth                               │ • Loads workspace config
       │ • Tasks, messages                           │ • Builds prompts
       │ • Conversations                             │ • Routes LLM calls
       │ • Documents                                 │ • Mints agent JWTs
       └─────────────────────────────────────────────┘
  1. Your backend sends messages to Clasper (POST /api/agents/send)
  2. Clasper calls an LLM with workspace-configured prompts (personas, rules, skills)
  3. The agent may call back into your APIs to create tasks, post messages, etc.
  4. Your backend remains the source of truth — Clasper is stateless

This means you can run multiple Clasper instances behind a load balancer with no sticky sessions.
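The integration loop above translates into a single HTTP call from your backend. The TypeScript sketch below shows step (1); only the POST /api/agents/send path comes from this page, while the field names (workspaceId, userId, message) and the bearer-token auth scheme are illustrative assumptions, not the documented schema.

```ts
// Minimal sketch of step (1): backend -> Clasper synchronous execution.
// The /api/agents/send path is documented; the request/response field
// names and the bearer-token auth below are assumptions, not the real schema.
interface AgentSendRequest {
  workspaceId: string; // hypothetical: which workspace config (SOUL.md, skills, ...) to load
  userId: string;      // hypothetical: per-user scoping for tenant isolation
  message: string;     // the message your backend wants the agent to handle
}

async function sendToClasper(
  baseUrl: string,
  apiKey: string,
  req: AgentSendRequest,
): Promise<unknown> {
  const res = await fetch(`${baseUrl}/api/agents/send`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`, // auth scheme is an assumption
    },
    body: JSON.stringify(req),
  });
  if (!res.ok) {
    throw new Error(`Clasper request failed: ${res.status} ${await res.text()}`);
  }
  return res.json(); // agent response; exact shape depends on your workspace config
}
```

Because the runtime is stateless, this call can hit any instance behind your load balancer.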

Clasper exposes several execution modes:

| Mode | Endpoint | Description |
| --- | --- | --- |
| Request/Response | POST /api/agents/send | Synchronous agent execution |
| Streaming (SSE) | POST /api/agents/stream | Real-time streaming responses |
| LLM Task | POST /llm-task | Structured JSON-only output |
| Trace Replay | GET /traces/:id/replay | Reproduce past executions |
| Ops Console | /ops/* | OIDC-protected operational UI |
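For the streaming mode, here is a hedged sketch that consumes Server-Sent Events from POST /api/agents/stream. The endpoint appears in the table above; the request body shape and the contents of each data chunk are assumptions for illustration (the "data:" prefix and blank-line delimiter are standard SSE framing).

```ts
// Sketch: consume Server-Sent Events from the streaming endpoint.
// The /api/agents/stream path is documented above; the request body shape
// and the content of each "data:" chunk are assumptions for illustration.
async function streamFromClasper(baseUrl: string, apiKey: string, body: unknown): Promise<void> {
  const res = await fetch(`${baseUrl}/api/agents/stream`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Accept: "text/event-stream",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(body),
  });
  if (!res.ok || !res.body) throw new Error(`Clasper stream failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    // Per the SSE spec, events are separated by a blank line and payload lines start with "data:".
    const events = buffer.split("\n\n");
    buffer = events.pop() ?? "";
    for (const event of events) {
      for (const line of event.split("\n")) {
        if (line.startsWith("data:")) {
          console.log("chunk:", line.slice("data:".length).trimStart());
        }
      }
    }
  }
}
```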

Clasper supports multiple LLM providers out of the box:

| Provider | Models |
| --- | --- |
| OpenAI | GPT-4o, GPT-4.1, etc. |
| Anthropic | Claude 4, Claude 3.5, etc. |
| Google | Gemini 2.5, Gemini 2.0, etc. |
| xAI | Grok 2, Grok 2 Mini |
| Groq | Llama 3.3, Mixtral |
| Mistral | Mistral Large, Codestral |
| OpenRouter | Multiple providers |
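As an illustration of the normalized provider interface combined with the LLM Task mode, the sketch below posts to /llm-task and selects a provider and model by name. The endpoint and provider names come from this page; the request fields (provider, model, prompt, schema) are hypothetical, not the documented contract.

```ts
// Sketch: structured JSON-only output via the LLM Task mode.
// The /llm-task path and the provider names come from the docs; the field
// names (provider, model, prompt, schema) are hypothetical illustrations.
async function runLlmTask(baseUrl: string, apiKey: string): Promise<unknown> {
  const res = await fetch(`${baseUrl}/llm-task`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      provider: "anthropic",      // hypothetical: any provider from the table above
      model: "claude-3-5-sonnet", // hypothetical model identifier
      prompt: "Classify this support ticket by urgency: 'Checkout is down for all users.'",
      schema: {                   // hypothetical: desired shape of the JSON output
        type: "object",
        properties: { urgency: { type: "string" } },
        required: ["urgency"],
      },
    }),
  });
  if (!res.ok) throw new Error(`LLM task failed: ${res.status}`);
  return res.json(); // expected: JSON conforming to the requested schema
}
```

Because the interface is normalized, swapping the provider or model in a request like this should not require changing the rest of your integration.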