GnamiAI runs your AI agent in a hardened sandbox, pauses on every risky move, and ships with signed skills, zero-config memory, and multi-agent handoffs out of the box.
```
you   › summarize this week's PRs and send a recap to #eng
agent › drafted recap · 6 PRs · 412 words
        ⚠ approval requested · DESTRUCTIVE · integration.slack.post
          summary: post recap to #eng (42 members)
          preview: "Shipped this week: auth rewrite, ..."
          [a] approve with passkey   [r] reject
you   › a
agent › ✓ posted · message id: C084…Z9 · 0.12s
```
Every feature serves one of six foundational guarantees. None of them is optional.
Role-based capabilities. Your agent never touches a byte or dollar it wasn't explicitly granted.
Every destructive action pauses for a cryptographically signed approval from your device. No exceptions.
A real UI for approvals, memory, budgets, and integrations. No brittle group-chat hacks.
Every skill is statically analyzed, manually reviewed, and Ed25519-signed before it can install.
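The sign-then-verify flow can be sketched with Node's built-in Ed25519 support. This is a minimal illustration of the pattern, not GnamiAI's actual registry pipeline; the key names and skill contents are assumptions.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Illustrative registry keypair and reviewed skill file (contents assumed).
const { publicKey, privateKey } = generateKeyPairSync("ed25519");
const skill = Buffer.from("# SKILL.md\nDraft conventional commit messages.");

// The registry signs the skill after review; Ed25519 takes no digest argument.
const signature = sign(null, skill, privateKey);

// The client verifies against the registry's public key before installing.
const ok = verify(null, skill, publicKey, signature);

// Any byte changed after review invalidates the signature.
const tampered = Buffer.from(skill);
tampered[0] ^= 0xff;
const okTampered = verify(null, tampered, publicKey, signature);
```

The point of the signature is the tamper check: a skill edited anywhere between review and install fails verification and never runs.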
RAG out of the box. Your conversations summarize and index themselves — no JSON surgery required.
Specialized agents hand off tasks to each other with capability-scoped context slices. Pinned trust, no federation soup.
Other agent products ask you to install a process that owns your terminal. Hosted GnamiAI runs entirely in the browser — shell execution isn't toggled off, it's not registered. The capability literally does not exist in the hosted build.
Create your workspace

GnamiAI is a security-first AI agent runtime that runs in your browser. It gives your agent granular permissions, pauses destructive actions for human approval, and lets you compose specialized subagents, skills, long-term memory, and scheduled runs — without installing anything locally.
The hosted app is free to use. You bring your own provider API key (OpenAI, Anthropic, OpenRouter, or a local Ollama), which is billed directly by that provider at their rates.
OpenAI, Anthropic, OpenRouter (access to most open and closed models), and Ollama (self-hosted). You can switch providers per turn and select models from each provider's catalog.
Yes, encrypted at rest with AES-256-GCM using a server-held key. A database dump alone does not leak your credentials. You can disconnect any provider from Settings and the row is deleted immediately.
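The encrypt-at-rest scheme can be sketched with Node's built-in AES-256-GCM. The function names and record shape here are assumptions for illustration; the relevant property is that GCM's auth tag makes a tampered database row fail decryption loudly.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Hypothetical record shape for a stored provider key (names assumed).
interface EncryptedKey { iv: Buffer; ciphertext: Buffer; tag: Buffer }

function encryptKey(plaintext: string, serverKey: Buffer): EncryptedKey {
  const iv = randomBytes(12); // 96-bit nonce, unique per record
  const cipher = createCipheriv("aes-256-gcm", serverKey, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() }; // tag authenticates the row
}

function decryptKey(rec: EncryptedKey, serverKey: Buffer): string {
  const decipher = createDecipheriv("aes-256-gcm", serverKey, rec.iv);
  decipher.setAuthTag(rec.tag); // a modified row throws here instead of decrypting
  return Buffer.concat([decipher.update(rec.ciphertext), decipher.final()]).toString("utf8");
}
```

Because the 256-bit key lives on the server and never in the database, a dump of the table yields only ciphertext, nonces, and tags.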
No. Chat transcripts live in your browser's localStorage. The server forwards each turn to your chosen provider and does not retain prompt or response bodies. See the Privacy page for the full data map.
Yes — Ollama is a first-class provider and your models stay on your machine. Because the hosted GnamiAI server runs on Vercel, it needs to reach your Ollama instance over the network; the simplest way is a tunnel (e.g. Cloudflare Tunnel, ngrok) that exposes your local ollama serve at a public URL. Inference still happens locally on your hardware — the tunnel only carries the request. SSRF guards block private-network and loopback targets, which is why a raw http://localhost:11434 won't work on the hosted build.
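As a sketch, either tunnel below gives your local `ollama serve` a public URL you can paste into GnamiAI's provider settings. Check each tool's docs for current flags; the URLs printed are random per session.

```shell
# Ollama listens on localhost:11434 by default
ollama serve

# Option A: Cloudflare quick tunnel (prints a trycloudflare.com URL)
cloudflared tunnel --url http://localhost:11434

# Option B: ngrok (prints a forwarding URL for port 11434)
ngrok http 11434
```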
A skill is a plain SKILL.md file that teaches the agent how to do something specific (write changelogs, draft release notes, generate conventional commits). A subagent is a named specialization with its own system prompt and model preference, pinned into chat with /agent <name>.
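For flavor, a minimal SKILL.md might look like the following. The exact structure GnamiAI expects isn't specified here, so treat this layout as an illustrative assumption:

```markdown
# changelog-writer

Turn this week's merged PR titles into a grouped changelog.

## Instructions
- Group entries under Added / Changed / Fixed.
- Link each entry to its PR.
- Keep each entry to one line, imperative mood.
```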
It can create subagents, install skills, and remember facts when you ask it to, via structured gnami-action JSON blocks. Destructive operations pause for an approval in the UI before running.
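A gnami-action block is shaped roughly like this; the field names below are illustrative assumptions, not the documented schema:

```json
{
  "gnami-action": "remember",
  "args": {
    "fact": "Weekly recap goes to #eng every Friday",
    "scope": "workspace"
  }
}
```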