Platform features
Everything here ships in the platform and supports real, long-running agent workflows.
Managed workspaces
Launch and run isolated workspaces without handling infra. Provisioning, restart, replacement, and cleanup are built in. Self-healing checks monitor Chrome and disk health, and in-place binary updates keep agents current without rebuilding containers.
Browser + shell execution
Agents use a real Chromium browser and a full Linux shell in the same session. Automate websites, run commands, install packages, and edit files end-to-end.
Live VNC viewer
Watch the agent work in real time. The browser is streamed to your dashboard via VNC so you see every page load, click, and form fill as it happens.
Streaming chat
Responses stream token-by-token over WebSocket. Open the same session in multiple tabs and they stay in sync automatically.
Workspace file explorer
Browse, upload, download, and delete files in the agent workspace directly from the dashboard. No shell commands needed for basic file management.
Messaging channels
Connect agents to WhatsApp, Telegram, and Discord. Bind channels per agent with routing rules so inbound messages reach the right agent and replies go back through the same channel.
Persistent memory
Store and recall structured memory across sessions. Recall runs on semantic search over pgvector, so context carries forward without mixing unrelated threads.
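Semantic recall over pgvector is, at its core, a nearest-neighbor query scoped to one agent. The sketch below builds that query; the table and column names are hypothetical, while `<=>` is pgvector's real cosine-distance operator:

```python
def recall_query(k: int = 5) -> str:
    """Build a pgvector similarity query scoped to a single agent.

    Table and column names are illustrative. `<=>` is pgvector's
    cosine-distance operator, so ascending order returns the closest
    memories first. Values are left as placeholders for a driver
    such as psycopg to bind safely.
    """
    return (
        "SELECT content "
        "FROM memories "
        "WHERE agent_id = %(agent_id)s "
        "ORDER BY embedding <=> %(query_embedding)s "
        f"LIMIT {int(k)}"
    )

print(recall_query(3))
```

Scoping the `WHERE` clause per agent (and, in practice, per thread) is what keeps unrelated contexts from bleeding into each other.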
Multi-session conversations
Run multiple independent conversation threads per agent. Each session has its own history, so parallel workstreams stay separate.
Scheduling and automation
Heartbeat checks, cron jobs, and event-driven tasks run continuously without manual prompting. Agents work while you sleep.
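A heartbeat check is, in the simplest reading, an interval test the scheduler runs on every tick. This is a minimal sketch with assumed names; a real scheduler would also evaluate cron expressions and event triggers:

```python
from datetime import datetime, timedelta

def is_due(last_run, interval: timedelta, now: datetime) -> bool:
    """Decide whether a recurring task should fire on this tick.

    last_run is None for a task that has never run; otherwise the
    task fires once the interval has fully elapsed.
    """
    return last_run is None or now - last_run >= interval

now = datetime(2024, 1, 1, 12, 0)
print(is_due(None, timedelta(minutes=5), now))                        # -> True (first run)
print(is_due(now - timedelta(minutes=3), timedelta(minutes=5), now))  # -> False (too soon)
```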
Multi-agent and sub-agents
Run multiple agents per workspace with a shared filesystem. Agents can spawn background sub-agents to parallelize research, file processing, or any independent task.
Persona and identity
Give each agent a personality, name, and working style. Identity files shape how agents communicate, make decisions, and remember preferences.
Any model, any provider
Bring your own API key for Anthropic, OpenAI, Google Gemini, OpenRouter, or ChatGPT/Codex via OAuth. Point any agent at a custom OpenAI-compatible endpoint with Bring Your Own LLM. Switch models per agent without changing anything else.
Observability built in
Track model calls, tool executions, errors, and runtime status in the dashboard.
Secure by default
Isolated containers with dropped capabilities, scoped credentials, signed workspace tokens, and forbidden-path enforcement. The container is the security boundary.
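To make "dropped capabilities" concrete, here is a sketch of a hardened `docker run` invocation. The flags are standard Docker hardening options; the image name, token handling, and mount layout are illustrative, not the platform's actual launch command:

```python
def workspace_run_args(image: str, workspace_token: str) -> list:
    """Sketch the argv for a locked-down workspace container.

    Flags shown are real Docker options; everything else (image,
    env var name, tmpfs layout) is a hypothetical example.
    """
    return [
        "docker", "run", "--rm",
        "--cap-drop=ALL",                       # drop all Linux capabilities
        "--security-opt", "no-new-privileges",  # block privilege escalation
        "--read-only",                          # immutable root filesystem
        "--tmpfs", "/tmp",                      # writable scratch space only
        "-e", f"WORKSPACE_TOKEN={workspace_token}",  # scoped, signed credential
        image,
    ]

print(" ".join(workspace_run_args("agent-workspace:latest", "signed-token")))
```

The design point stands regardless of the exact flags: everything the agent can touch lives inside the container, so the container boundary is the security boundary.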