
# nanobot

Ultra-lightweight personal AI assistant. TypeScript port of nanobot, inspired by Openclaw.

Runs as a chat-controlled bot (gateway) with pluggable "channels" for different services, or as a local interactive CLI (agent). Uses the Vercel AI SDK for LLM abstraction, so it works with Anthropic, OpenAI, Google, OpenRouter, and Ollama out of the box.

## Requirements

- [Bun](https://bun.sh) (or [mise](https://mise.jdx.dev), which can install it for you)

## Install

```sh
git clone <this repo>
cd nanobot-ts
bun install   # or use `mise install`
```

## Quick start

### 1. Initialize workspace

```sh
bun run nanobot onboard
```

This creates `~/.config/nanobot/` with a config file and templates.

### 2. Edit config

Add your API key and set the provider and model:

```json
{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-..."
    }
  },
  "agent": {
    "provider": "openrouter",
    "model": "anthropic/claude-sonnet-4-5"
  }
}
```

### 3. Chat

```sh
bun run nanobot agent
```

That's it.

## Commands

### `agent` — local CLI

Chat with the agent from your terminal. Does not require a running gateway.

```sh
bun run nanobot agent [options]
```

| Option | Description |
| --- | --- |
| `-c, --config <path>` | Path to config.json (default: `~/.config/nanobot/config.json`) |
| `-m, --message <text>` | Send a single message and exit (non-interactive) |
| `-M, --model <model>` | Override the model for this session |
| `-w <path>` | Override the workspace directory for this run |

Interactive mode (the default when no `-m` is given):

```sh
bun run nanobot agent
bun run nanobot agent -c ~/.config/nanobot-work/config.json
bun run nanobot agent -w /tmp/scratch
```

Press Ctrl+C to exit.

Single-shot mode:

```sh
bun run nanobot agent -m "What time is it in Tokyo?"
bun run nanobot agent -m "Summarize the file ./notes.md"
```

### `gateway` — Mattermost bot

Runs the full stack: Mattermost WebSocket channel, agent loop, cron scheduler, and heartbeat.

```sh
bun run nanobot gateway [options]
```

| Option | Description |
| --- | --- |
| `-c, --config <path>` | Path to config.json (default: `~/.config/nanobot/config.json`) |

```sh
bun run nanobot gateway
bun run nanobot gateway -c ~/.config/nanobot-work/config.json
```

Handles SIGINT / SIGTERM for graceful shutdown.

## Configuration

Config file: `~/.config/nanobot/config.json` (or pass `-c <path>` to any command).

Environment variable overrides:

| Variable | Config equivalent |
| --- | --- |
| `NANOBOT_CONFIG` | Path to the config file |
| `NANOBOT_MODEL` | `agent.model` |
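These overrides are handy for one-off runs without editing config.json. A sketch (the model ID below is only an example; use any ID valid for your configured provider):

```shell
# One-off model override for a single run
NANOBOT_MODEL="anthropic/claude-opus-4-5" bun run nanobot agent -m "hello"

# Point a run at an alternate config file
NANOBOT_CONFIG=/tmp/test-config.json bun run nanobot gateway
```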

### Full config reference

```json
{
  "agent": {
    "provider": "openrouter",
    "model": "anthropic/claude-sonnet-4-5",
    "workspacePath": "~/.config/nanobot",
    "maxTokens": 4096,
    "contextWindowTokens": 65536,
    "temperature": 0.7,
    "maxToolIterations": 40
  },
  "providers": {
    "anthropic":  { "apiKey": "..." },
    "openai":     { "apiKey": "..." },
    "google":     { "apiKey": "..." },
    "openrouter": { "apiKey": "..." },
    "ollama":     { "apiBase": "http://localhost:11434" }
  },
  "channels": {
    "sendProgress": true,
    "sendToolHints": true,
    "mattermost": {
      "serverUrl": "mattermost.example.com",
      "token": "YOUR_BOT_TOKEN",
      "scheme": "https",
      "port": 443,
      "allowFrom": ["your-user-id"],
      "groupPolicy": "mention",
      "groupAllowFrom": [],
      "dm": { "enabled": true, "allowFrom": [] },
      "replyInThread": true
    }
  },
  "tools": {
    "restrictToWorkspace": false,
    "exec": {
      "timeout": 120,
      "denyPatterns": [],
      "restrictToWorkspace": false,
      "pathAppend": ""
    },
    "web": {
      "braveApiKey": "BSA...",
      "proxy": "http://proxy:8080"
    }
  },
  "heartbeat": {
    "enabled": false,
    "intervalMinutes": 30
  }
}
```

### Provider

The `agent.provider` field is required and must be one of:

| Provider | Description |
| --- | --- |
| `anthropic` | Anthropic direct (Claude models) |
| `openai` | OpenAI direct (GPT models) |
| `google` | Google direct (Gemini models) |
| `openrouter` | OpenRouter (access to many models) |
| `ollama` | Local Ollama instance |

The `agent.model` field is also required and should be the model ID without any provider prefix:

| Provider | Example models |
| --- | --- |
| `anthropic` | `claude-sonnet-4-5`, `claude-opus-4-5` |
| `openai` | `gpt-4o`, `gpt-4o-mini` |
| `google` | `gemini-2.5-pro`, `gemini-2.0-flash` |
| `openrouter` | `anthropic/claude-sonnet-4-5` (OpenRouter uses its own model IDs) |
| `ollama` | `llama3.2`, `qwen2.5` |

For Ollama, set `providers.ollama.apiBase` (default: `http://localhost:11434`).
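Putting the pieces together, a minimal all-local config (the model ID is an example; use whatever model you have pulled into Ollama):

```json
{
  "agent": {
    "provider": "ollama",
    "model": "llama3.2"
  },
  "providers": {
    "ollama": { "apiBase": "http://localhost:11434" }
  }
}
```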

## Mattermost setup

1. In Mattermost: **Main Menu → Integrations → Bot Accounts → Add Bot Account**
2. Enable the `post:all` permission and copy the access token
3. Add to config:

   ```json
   {
     "channels": {
       "mattermost": {
         "serverUrl": "mattermost.example.com",
         "token": "YOUR_BOT_TOKEN",
         "allowFrom": ["your-mattermost-user-id"]
       }
     }
   }
   ```

4. Run `bun run nanobot gateway`

`allowFrom` controls which users the bot responds to. Use `["*"]` to allow all users.

`groupPolicy` controls group channel behaviour:

- `"mention"` (default) — only respond when @mentioned
- `"open"` — respond to all messages
- `"allowlist"` — restrict to users in `groupAllowFrom`
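For example, to limit group-channel replies to specific teammates (the user IDs below are placeholders):

```json
{
  "channels": {
    "mattermost": {
      "groupPolicy": "allowlist",
      "groupAllowFrom": ["teammate-user-id-1", "teammate-user-id-2"]
    }
  }
}
```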

## Security

| Option | Default | Description |
| --- | --- | --- |
| `tools.restrictToWorkspace` | `false` | Restrict file and shell tools to the workspace directory |
| `tools.exec.denyPatterns` | `[]` | Additional shell command patterns to block |
| `tools.exec.pathAppend` | `""` | Extra directories appended to PATH for shell commands |
| `channels.mattermost.allowFrom` | `[]` | User ID allowlist. Empty denies everyone; `["*"]` allows all |
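A locked-down sketch combining these options (the deny patterns and user ID are illustrative; tune them to your environment):

```json
{
  "tools": {
    "restrictToWorkspace": true,
    "exec": {
      "denyPatterns": ["rm -rf", "sudo "]
    }
  },
  "channels": {
    "mattermost": {
      "allowFrom": ["your-user-id"]
    }
  }
}
```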

## Heartbeat

When `heartbeat.enabled` is true, the gateway wakes up every `intervalMinutes` and checks `HEARTBEAT.md` in the workspace. If the file lists tasks, the agent runs them and can deliver results via the `message` tool. Example `HEARTBEAT.md`:

```markdown
## Periodic Tasks

- [ ] Check the weather and send a summary
- [ ] Review open GitHub issues
```
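Turning it on is a small config change (the interval shown is the default from the full config reference):

```json
{
  "heartbeat": {
    "enabled": true,
    "intervalMinutes": 30
  }
}
```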

## Multiple instances

Run separate instances with different configs — useful for isolated workspaces, different models, or separate teams.

```sh
# Instance A
bun run nanobot gateway -c ~/.config/nanobot-a/config.json

# Instance B
bun run nanobot gateway -c ~/.config/nanobot-b/config.json
```

Each instance needs its own config file. Set a different `agent.workspacePath` per instance to keep memory, sessions, and cron jobs isolated:

```json
{
  "agent": {
    "workspacePath": "~/.config/nanobot-a"
  }
}
```

To run a local CLI session against a specific instance:

```sh
bun run nanobot agent -c ~/.config/nanobot-a/config.json -m "Hello"

# Temporarily override the workspace for a one-off run
bun run nanobot agent -c ~/.config/nanobot-a/config.json -w /tmp/scratch
```

## Linux service (systemd)

```ini
# ~/.config/systemd/user/nanobot-gateway.service
[Unit]
Description=Nanobot Gateway
After=network.target

[Service]
Type=simple
ExecStart=/path/to/bun run --cwd /path/to/nanobot-ts start
Restart=always
RestartSec=10

[Install]
WantedBy=default.target
```

```sh
systemctl --user daemon-reload
systemctl --user enable --now nanobot-gateway
journalctl --user -u nanobot-gateway -f
```

To keep the service running after logout: `loginctl enable-linger $USER`
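To run one unit per instance (see "Multiple instances" above), give each unit its own config via `-c`. A sketch; the unit name and config path are placeholders, and `%h` is systemd's specifier for the user's home directory:

```ini
# ~/.config/systemd/user/nanobot-a.service (Unit/Install sections as above)
[Service]
Type=simple
ExecStart=/path/to/bun run --cwd /path/to/nanobot-ts nanobot gateway -c %h/.config/nanobot-a/config.json
Restart=always
RestartSec=10
```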

## Project structure

```
nanobot-ts/
├── index.ts                  # Entry point
├── src/
│   ├── agent/
│   │   ├── loop.ts           # Agent loop (LLM ↔ tool execution)
│   │   ├── context.ts        # System prompt builder
│   │   ├── memory.ts         # Long-term memory + consolidation
│   │   ├── skills.ts         # Skills loader
│   │   ├── subagent.ts       # Background task execution
│   │   └── tools/
│   │       ├── base.ts       # Tool interface + ToolRegistry
│   │       ├── filesystem.ts # read_file, write_file, edit_file, list_dir
│   │       ├── shell.ts      # exec
│   │       ├── web.ts        # web_search (Brave), web_fetch
│   │       ├── message.ts    # message (send to chat)
│   │       ├── spawn.ts      # spawn (background subagent)
│   │       └── cron.ts       # cron (schedule management)
│   ├── channels/
│   │   ├── base.ts           # BaseChannel interface
│   │   ├── mattermost.ts     # Mattermost (raw WebSocket + fetch)
│   │   └── manager.ts        # Channel lifecycle + outbound routing
│   ├── bus/
│   │   ├── types.ts          # InboundMessage, OutboundMessage schemas
│   │   └── queue.ts          # AsyncQueue, MessageBus
│   ├── provider/
│   │   ├── types.ts          # LLMResponse, ToolCall, ChatOptions
│   │   └── index.ts          # LLMProvider (Vercel AI SDK wrapper)
│   ├── session/
│   │   ├── types.ts          # SessionMessage, SessionMeta schemas
│   │   └── manager.ts        # Session persistence (JSONL)
│   ├── cron/
│   │   ├── types.ts          # CronJob, CronSchedule schemas
│   │   └── service.ts        # CronService (at / every / cron expr)
│   ├── heartbeat/
│   │   └── service.ts        # HeartbeatService
│   ├── config/
│   │   ├── types.ts          # Zod config schemas
│   │   └── loader.ts         # loadConfig, env overrides
│   └── cli/
│       └── commands.ts       # gateway + agent commands
├── templates/                # Default workspace files (SOUL.md, etc.)
└── skills/                   # Bundled skills (weather, github, etc.)
```