# nanobot

Ultra-lightweight personal AI assistant. TypeScript port of nanobot, inspired by Openclaw.

Runs as a chat-controlled bot (gateway) with pluggable "channels" for different services, or as a local interactive CLI (agent). Uses the Vercel AI SDK for LLM abstraction, so it works with Anthropic, OpenAI, Google, OpenRouter, and Ollama out of the box.
## Requirements

- Bun ≥ 1.0
## Install

```sh
git clone <this repo>
cd nanobot-ts
bun install   # or use `mise install`
```
## Quick start

1. Create a config file:

   ```sh
   mkdir -p ~/.nanobot
   ```

   `~/.nanobot/config.json`:

   ```json
   {
     "providers": {
       "openrouter": {
         "apiKey": "sk-or-v1-..."
       }
     },
     "agent": {
       "model": "openrouter/anthropic/claude-sonnet-4-5"
     }
   }
   ```

2. Chat:

   ```sh
   bun run start agent
   ```

That's it.
## Commands

### agent — local CLI

Chat with the agent from your terminal. Does not require a running gateway.

```sh
bun run start agent [options]
```

| Option | Description |
|---|---|
| `-c, --config <path>` | Path to config.json (default: `~/.nanobot/config.json`) |
| `-m, --message <text>` | Send a single message and exit (non-interactive) |
| `-w, --workspace <path>` | Override the workspace directory |
| `-M, --model <model>` | Override the model for this session |
Interactive mode (default when no `-m` is given):

```sh
bun run start agent
bun run start agent -c ~/.nanobot-work/config.json
bun run start agent -w /tmp/scratch
```

Press Ctrl+C to exit.

Single-shot mode:

```sh
bun run start agent -m "What time is it in Tokyo?"
bun run start agent -m "Summarize the file ./notes.md"
```
### gateway — Mattermost bot

Runs the full stack: Mattermost WebSocket channel, agent loop, cron scheduler, and heartbeat.

```sh
bun run start gateway [options]
```

| Option | Description |
|---|---|
| `-c, --config <path>` | Path to config.json (default: `~/.nanobot/config.json`) |

```sh
bun run start gateway
bun run start gateway -c ~/.nanobot-work/config.json
```

Handles SIGINT / SIGTERM for graceful shutdown.
## Configuration

Config file: `~/.nanobot/config.json` (or pass `-c <path>` to any command).

Environment variable overrides:

| Variable | Config equivalent |
|---|---|
| `NANOBOT_CONFIG` | Path to config file |
| `NANOBOT_MODEL` | `agent.model` |
| `NANOBOT_WORKSPACE` | `agent.workspacePath` |
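These variables make it easy to change the model or workspace for a single invocation without editing the config file. A sketch of a one-off run (the model name and workspace path are illustrative, not defaults):

```shell
# Point this run at a local Ollama model and a scratch workspace;
# the variables apply only to this invocation.
NANOBOT_MODEL=ollama/llama3.2 \
NANOBOT_WORKSPACE=/tmp/nanobot-scratch \
bun run start agent -m "hello"
```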
### Full config reference

```json
{
  "agent": {
    "model": "openrouter/anthropic/claude-sonnet-4-5",
    "workspacePath": "~/.nanobot",
    "maxTokens": 4096,
    "contextWindowTokens": 65536,
    "temperature": 0.7,
    "maxToolIterations": 40
  },
  "providers": {
    "anthropic": { "apiKey": "..." },
    "openai": { "apiKey": "..." },
    "google": { "apiKey": "..." },
    "openrouter": { "apiKey": "..." },
    "ollama": { "apiBase": "http://localhost:11434/api" }
  },
  "channels": {
    "sendProgress": true,
    "sendToolHints": true,
    "mattermost": {
      "serverUrl": "mattermost.example.com",
      "token": "YOUR_BOT_TOKEN",
      "scheme": "https",
      "port": 443,
      "allowFrom": ["your-user-id"],
      "groupPolicy": "mention",
      "groupAllowFrom": [],
      "dm": { "enabled": true, "allowFrom": [] },
      "replyInThread": true
    }
  },
  "tools": {
    "restrictToWorkspace": false,
    "exec": {
      "timeout": 120,
      "denyPatterns": [],
      "restrictToWorkspace": false,
      "pathAppend": ""
    },
    "web": {
      "braveApiKey": "BSA...",
      "proxy": "http://proxy:8080"
    }
  },
  "heartbeat": {
    "enabled": false,
    "intervalMinutes": 30
  }
}
```
## Providers

Model names use a provider/model prefix scheme:

| Prefix | Provider | Example |
|---|---|---|
| `anthropic/` | Anthropic direct | `anthropic/claude-opus-4-5` |
| `openai/` | OpenAI direct | `openai/gpt-4o` |
| `google/` | Google direct | `google/gemini-2.5-pro` |
| `openrouter/` | OpenRouter (any model) | `openrouter/anthropic/claude-sonnet-4-5` |
| `ollama/` | Local Ollama | `ollama/llama3.2` |

For Ollama, set `providers.ollama.apiBase` (default: `http://localhost:11434/api`).
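For example, a minimal config for running fully locally against Ollama might look like this (the model name is illustrative; use any model you have pulled):

```json
{
  "providers": {
    "ollama": { "apiBase": "http://localhost:11434/api" }
  },
  "agent": {
    "model": "ollama/llama3.2"
  }
}
```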
## Mattermost setup

1. In Mattermost: Main Menu → Integrations → Bot Accounts → Add Bot Account
2. Enable the `post:all` permission and copy the access token
3. Add to config:

   ```json
   {
     "channels": {
       "mattermost": {
         "serverUrl": "mattermost.example.com",
         "token": "YOUR_BOT_TOKEN",
         "allowFrom": ["your-mattermost-user-id"]
       }
     }
   }
   ```

4. Run:

   ```sh
   bun run start gateway
   ```

`allowFrom` controls which users the bot responds to. Use `["*"]` to allow all users.
`groupPolicy` controls group channel behaviour:

- `"mention"` (default) — only respond when @mentioned
- `"open"` — respond to all messages
- `"allowlist"` — restrict to users in `groupAllowFrom`
## Security

| Option | Default | Description |
|---|---|---|
| `tools.restrictToWorkspace` | `false` | Restrict file and shell tools to the workspace directory |
| `tools.exec.denyPatterns` | `[]` | Additional shell command patterns to block |
| `tools.exec.pathAppend` | `""` | Extra directories appended to PATH for shell commands |
| `channels.mattermost.allowFrom` | `[]` | User ID allowlist. Empty denies everyone; `["*"]` allows all |
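A locked-down configuration sketch, assuming you want file and shell tools confined to the workspace and a couple of obviously destructive commands blocked (the deny patterns below are examples, not an exhaustive blocklist):

```json
{
  "tools": {
    "restrictToWorkspace": true,
    "exec": {
      "restrictToWorkspace": true,
      "denyPatterns": ["rm -rf", "sudo "]
    }
  }
}
```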
## Heartbeat

When `heartbeat.enabled` is true, the gateway wakes up every `intervalMinutes` and checks `HEARTBEAT.md` in the workspace. If the file lists tasks, the agent runs them and can deliver results via the message tool.

```markdown
## Periodic Tasks

- [ ] Check the weather and send a summary
- [ ] Review open GitHub issues
```
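Enabling the heartbeat is a small config change, using the same keys shown in the full config reference (30 minutes is the documented default interval):

```json
{
  "heartbeat": {
    "enabled": true,
    "intervalMinutes": 30
  }
}
```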
## Multiple instances

Run separate instances with different configs — useful for isolated workspaces, different models, or separate teams.

```sh
# Instance A
bun run start gateway -c ~/.nanobot-a/config.json

# Instance B
bun run start gateway -c ~/.nanobot-b/config.json
```

Each instance needs its own config file. Set a different `agent.workspacePath` per instance to keep memory, sessions, and cron jobs isolated:

```json
{
  "agent": {
    "workspacePath": "~/.nanobot-a"
  }
}
```

To run a local CLI session against a specific instance:

```sh
bun run start agent -c ~/.nanobot-a/config.json -m "Hello"

# Temporarily override the workspace for a one-off run
bun run start agent -c ~/.nanobot-a/config.json -w /tmp/scratch
```
## Linux service (systemd)

```ini
# ~/.config/systemd/user/nanobot-gateway.service
[Unit]
Description=Nanobot Gateway
After=network.target

[Service]
Type=simple
ExecStart=/path/to/bun run --cwd /path/to/nanobot-ts start gateway
Restart=always
RestartSec=10

[Install]
WantedBy=default.target
```

```sh
systemctl --user daemon-reload
systemctl --user enable --now nanobot-gateway
journalctl --user -u nanobot-gateway -f
```

To keep the service running after logout: `loginctl enable-linger $USER`
## Project structure

```
nanobot-ts/
├── index.ts                  # Entry point
├── src/
│   ├── agent/
│   │   ├── loop.ts           # Agent loop (LLM ↔ tool execution)
│   │   ├── context.ts        # System prompt builder
│   │   ├── memory.ts         # Long-term memory + consolidation
│   │   ├── skills.ts         # Skills loader
│   │   ├── subagent.ts       # Background task execution
│   │   └── tools/
│   │       ├── base.ts       # Tool interface + ToolRegistry
│   │       ├── filesystem.ts # read_file, write_file, edit_file, list_dir
│   │       ├── shell.ts      # exec
│   │       ├── web.ts        # web_search (Brave), web_fetch
│   │       ├── message.ts    # message (send to chat)
│   │       ├── spawn.ts      # spawn (background subagent)
│   │       └── cron.ts       # cron (schedule management)
│   ├── channels/
│   │   ├── base.ts           # BaseChannel interface
│   │   ├── mattermost.ts     # Mattermost (raw WebSocket + fetch)
│   │   └── manager.ts        # Channel lifecycle + outbound routing
│   ├── bus/
│   │   ├── types.ts          # InboundMessage, OutboundMessage schemas
│   │   └── queue.ts          # AsyncQueue, MessageBus
│   ├── provider/
│   │   ├── types.ts          # LLMResponse, ToolCall, ChatOptions
│   │   └── index.ts          # LLMProvider (Vercel AI SDK wrapper)
│   ├── session/
│   │   ├── types.ts          # SessionMessage, SessionMeta schemas
│   │   └── manager.ts        # Session persistence (JSONL)
│   ├── cron/
│   │   ├── types.ts          # CronJob, CronSchedule schemas
│   │   └── service.ts        # CronService (at / every / cron expr)
│   ├── heartbeat/
│   │   └── service.ts        # HeartbeatService
│   ├── config/
│   │   ├── types.ts          # Zod config schemas
│   │   └── loader.ts         # loadConfig, env overrides
│   └── cli/
│       └── commands.ts       # gateway + agent commands
├── templates/                # Default workspace files (SOUL.md, etc.)
└── skills/                   # Bundled skills (weather, github, etc.)
```