Compare commits: f0252e663e...main (15 commits)

Commits: c65a7160ba, 74a5e70322, b6df31fcbf, 14aa1c1e7f, 47c4db53af, 1dd953d17a, e915dd2922, 9ac92ed536, 398b98393a, 2d99d17d60, 3893d88365, 4f54c9837f, 7e28a09345, 345cfef425, a857bf95cd

.gitignore (vendored, 4 lines changed)
@@ -1,6 +1,10 @@
# dependencies (bun install)
node_modules

# editors
.vscode
.openvscode-server

# output
out
dist

@@ -1,7 +1,5 @@
{
  "$schema": "./node_modules/oxfmt/configuration_schema.json",
  "ignorePatterns": ["*.md"],
  "options": {
    "singleQuote": true
  }
}

@@ -5,7 +5,8 @@
    "correctness": "warn"
  },
  "rules": {
    "eslint/no-unused-vars": "error"
    "eslint/no-unused-vars": "error",
    "unicorn/no-nested-ternary": "error"
  },
  "options": {
    "typeAware": true,

@@ -54,7 +54,6 @@ I maintain two distinct layers of documentation. I must keep them in sync but ne
- **PRD.md**: Core features, target audience, and business logic.
- **Architecture.md**: Tech stack, folder structure, and data flow diagrams.
- **API.md**: Endpoint definitions, request/response schemas.
- **Schema.md**: Database tables, relationships, and types.
- **Discoveries.md**: Things learned empirically that future sessions should know.

### Layer 2: The Memory Bank, or mb (/memory-bank)

README.md (343 lines changed)
@@ -1,15 +1,346 @@
# nanobot-ts
# nanobot

To install dependencies:
Ultra-lightweight personal AI assistant. TypeScript port of [nanobot](https://github.com/HKUDS/nanobot), inspired by [Openclaw](https://github.com/openclaw/openclaw).

Runs as a chat-controlled bot (`gateway`) with pluggable "channels" for different services, or as a local interactive CLI (`agent`). Uses the [Vercel AI SDK](https://sdk.vercel.ai) for LLM abstraction, so it works with Anthropic, OpenAI, Google, OpenRouter, and Ollama out of the box.

## Requirements

- [Bun](https://bun.sh) ≥ 1.0

## Install

```bash
bun install
git clone <this repo>
cd nanobot-ts
bun install # or use `mise install`
```

To run:
## Quick start

**1. Initialize workspace**

```bash
bun run index.ts
bun run nanobot onboard
```

This project was created using `bun init` in bun v1.3.10. [Bun](https://bun.com) is a fast all-in-one JavaScript runtime.
This creates `~/.config/nanobot/` with a config file and templates.

**2. Edit config**

Add your API key and set provider/model:

```json
{
  "providers": {
    "openrouter": {
      "apiKey": "sk-or-v1-..."
    }
  },
  "agent": {
    "provider": "openrouter",
    "model": "anthropic/claude-sonnet-4-5"
  }
}
```

**3. Chat**

```bash
bun run nanobot agent
```

That's it.

## Commands

### `agent` — local CLI

Chat with the agent from your terminal. Does not require a running gateway.

```
bun run nanobot agent [options]
```

| Option | Description |
|--------|-------------|
| `-c, --config <path>` | Path to `config.json` (default: `~/.config/nanobot/config.json`) |
| `-m, --message <text>` | Send a single message and exit (non-interactive) |
| `-M, --model <model>` | Override the model for this session |
| `-w <path>` | Temporarily override the workspace for this run |

**Interactive mode** (default when no `-m` is given):

```bash
bun run nanobot agent
bun run nanobot agent -c ~/.config/nanobot-work/config.json
bun run nanobot agent -w /tmp/scratch
```

Press `Ctrl+C` to exit.

**Single-shot mode:**

```bash
bun run nanobot agent -m "What time is it in Tokyo?"
bun run nanobot agent -m "Summarize the file ./notes.md"
```

### `gateway` — Mattermost bot

Runs the full stack: Mattermost WebSocket channel, agent loop, cron scheduler, and heartbeat.

```
bun run nanobot gateway [options]
```

| Option | Description |
|--------|-------------|
| `-c, --config <path>` | Path to `config.json` (default: `~/.config/nanobot/config.json`) |

```bash
bun run nanobot gateway
bun run nanobot gateway -c ~/.config/nanobot-work/config.json
```

Handles `SIGINT` / `SIGTERM` for graceful shutdown.

## Configuration

Config file: `~/.config/nanobot/config.json` (or pass `-c <path>` to any command).

Environment variable overrides:

| Variable | Config equivalent |
|----------|-------------------|
| `NANOBOT_CONFIG` | path to config file |
| `NANOBOT_MODEL` | `agent.model` |
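Overrides of this kind win over the file value for the duration of the process. A minimal sketch of that precedence (the `resolveModel` helper is illustrative, not the real `src/config/loader.ts` code):

```typescript
// Illustrative sketch of env-var precedence over config.json values.
// The real loader lives in src/config/loader.ts and may differ.
type AgentConfig = { provider: string; model: string };

function resolveModel(fileConfig: AgentConfig, env: Record<string, string | undefined>): string {
  // NANOBOT_MODEL, when set, wins over agent.model from the config file
  return env["NANOBOT_MODEL"] ?? fileConfig.model;
}

const agent: AgentConfig = { provider: "openrouter", model: "anthropic/claude-sonnet-4-5" };
console.log(resolveModel(agent, {}));                          // falls back to config value
console.log(resolveModel(agent, { NANOBOT_MODEL: "gpt-4o" })); // env var wins
```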
### Full config reference

```json
{
  "agent": {
    "provider": "openrouter",
    "model": "anthropic/claude-sonnet-4-5",
    "workspacePath": "~/.config/nanobot",
    "maxTokens": 4096,
    "contextWindowTokens": 65536,
    "temperature": 0.7,
    "maxToolIterations": 40
  },
  "providers": {
    "anthropic": { "apiKey": "..." },
    "openai": { "apiKey": "..." },
    "google": { "apiKey": "..." },
    "openrouter": { "apiKey": "..." },
    "ollama": { "apiBase": "http://localhost:11434" }
  },
  "channels": {
    "sendProgress": true,
    "sendToolHints": true,
    "mattermost": {
      "serverUrl": "mattermost.example.com",
      "token": "YOUR_BOT_TOKEN",
      "scheme": "https",
      "port": 443,
      "allowFrom": ["your-user-id"],
      "groupPolicy": "mention",
      "groupAllowFrom": [],
      "dm": { "enabled": true, "allowFrom": [] },
      "replyInThread": true
    }
  },
  "tools": {
    "restrictToWorkspace": false,
    "exec": {
      "timeout": 120,
      "denyPatterns": [],
      "restrictToWorkspace": false,
      "pathAppend": ""
    },
    "web": {
      "braveApiKey": "BSA...",
      "proxy": "http://proxy:8080"
    }
  },
  "heartbeat": {
    "enabled": false,
    "intervalMinutes": 30
  }
}
```
### Provider

The `agent.provider` field is **required** and must be one of:

| Provider | Description |
|----------|-------------|
| `anthropic` | Anthropic direct (Claude models) |
| `openai` | OpenAI direct (GPT models) |
| `google` | Google direct (Gemini models) |
| `openrouter` | OpenRouter (access to many models) |
| `ollama` | Local Ollama instance |

The `agent.model` field is also **required** and should be the model ID without any provider prefix:

| Provider | Example Model |
|----------|---------------|
| `anthropic` | `claude-sonnet-4-5`, `claude-opus-4-5` |
| `openai` | `gpt-4o`, `gpt-4o-mini` |
| `google` | `gemini-2.5-pro`, `gemini-2.0-flash` |
| `openrouter` | `anthropic/claude-sonnet-4-5` (OpenRouter uses its own model IDs) |
| `ollama` | `llama3.2`, `qwen2.5` |

For Ollama, set `providers.ollama.apiBase` (default: `http://localhost:11434`).
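Putting the two required fields together, a fully local setup might look like this (model name taken from the table above; adjust to whatever you have pulled in Ollama):

```json
{
  "agent": { "provider": "ollama", "model": "llama3.2" },
  "providers": { "ollama": { "apiBase": "http://localhost:11434" } }
}
```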
### Mattermost setup

1. In Mattermost: **Main Menu → Integrations → Bot Accounts → Add Bot Account**
2. Enable the `post:all` permission and copy the access token
3. Add to config:

   ```json
   {
     "channels": {
       "mattermost": {
         "serverUrl": "mattermost.example.com",
         "token": "YOUR_BOT_TOKEN",
         "allowFrom": ["your-mattermost-user-id"]
       }
     }
   }
   ```

4. Run `bun run nanobot gateway`

`allowFrom` controls which users the bot responds to. Use `["*"]` to allow all users.

`groupPolicy` controls group channel behaviour:
- `"mention"` (default) — only respond when @mentioned
- `"open"` — respond to all messages
- `"allowlist"` — restrict to users in `groupAllowFrom`
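The three policies boil down to a simple predicate; a sketch of that gating logic (illustrative only — the real check lives in the Mattermost channel code and may differ):

```typescript
// Hypothetical predicate for the three group policies described above.
type GroupPolicy = "mention" | "open" | "allowlist";

function shouldRespond(
  policy: GroupPolicy,
  mentioned: boolean,
  senderId: string,
  groupAllowFrom: string[],
): boolean {
  switch (policy) {
    case "open":
      return true;                            // respond to every group message
    case "allowlist":
      return groupAllowFrom.includes(senderId); // only listed users
    case "mention":
      return mentioned;                       // default: only when @mentioned
  }
}

console.log(shouldRespond("mention", true, "u1", [])); // true
```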
### Security

| Option | Default | Description |
|--------|---------|-------------|
| `tools.restrictToWorkspace` | `false` | Restrict file and shell tools to the workspace directory |
| `tools.exec.denyPatterns` | `[]` | Additional shell command patterns to block |
| `tools.exec.pathAppend` | `""` | Extra directories appended to `PATH` for shell commands |
| `channels.mattermost.allowFrom` | `[]` | User ID allowlist. Empty denies everyone; `["*"]` allows all |

### Heartbeat

When `heartbeat.enabled` is `true`, the gateway wakes up every `intervalMinutes` and checks `HEARTBEAT.md` in the workspace. If the file lists tasks, the agent runs them and can deliver results via the `message` tool.

```markdown
## Periodic Tasks

- [ ] Check the weather and send a summary
- [ ] Review open GitHub issues
```
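Conceptually, each wake-up only has to find the unchecked checkboxes. A sketch of that extraction step (illustrative; the real `HeartbeatService` in `src/heartbeat/service.ts` may parse the file differently):

```typescript
// Pull the pending ("- [ ]") tasks out of a HEARTBEAT.md-style document.
function pendingTasks(markdown: string): string[] {
  return markdown
    .split("\n")
    .filter((line) => line.trimStart().startsWith("- [ ]"))
    .map((line) => line.trimStart().slice("- [ ]".length).trim());
}

const md = "## Periodic Tasks\n\n- [ ] Check the weather and send a summary\n- [x] Done already\n";
console.log(pendingTasks(md)); // → ["Check the weather and send a summary"]
```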
## Multiple instances

Run separate instances with different configs — useful for isolated workspaces, different models, or separate teams.

```bash
# Instance A
bun run nanobot gateway -c ~/.config/nanobot-a/config.json

# Instance B
bun run nanobot gateway -c ~/.config/nanobot-b/config.json
```

Each instance needs its own config file. Set a different `agent.workspacePath` per instance to keep memory, sessions, and cron jobs isolated:

```json
{
  "agent": {
    "workspacePath": "~/.config/nanobot-a"
  }
}
```

To run a local CLI session against a specific instance:

```bash
bun run nanobot agent -c ~/.config/nanobot-a/config.json -m "Hello"

# Temporarily override the workspace for a one-off run
bun run nanobot agent -c ~/.config/nanobot-a/config.json -w /tmp/scratch
```
## Linux service (systemd)

```ini
# ~/.config/systemd/user/nanobot-gateway.service
[Unit]
Description=Nanobot Gateway
After=network.target

[Service]
Type=simple
ExecStart=/path/to/bun run --cwd /path/to/nanobot-ts start
Restart=always
RestartSec=10

[Install]
WantedBy=default.target
```

```bash
systemctl --user daemon-reload
systemctl --user enable --now nanobot-gateway
journalctl --user -u nanobot-gateway -f
```

To keep the service running after logout: `loginctl enable-linger $USER`
## Project structure

```
nanobot-ts/
├── index.ts                  # Entry point
├── src/
│   ├── agent/
│   │   ├── loop.ts           # Agent loop (LLM ↔ tool execution)
│   │   ├── context.ts        # System prompt builder
│   │   ├── memory.ts         # Long-term memory + consolidation
│   │   ├── skills.ts         # Skills loader
│   │   ├── subagent.ts       # Background task execution
│   │   └── tools/
│   │       ├── base.ts       # Tool interface + ToolRegistry
│   │       ├── filesystem.ts # read_file, write_file, edit_file, list_dir
│   │       ├── shell.ts      # exec
│   │       ├── web.ts        # web_search (Brave), web_fetch
│   │       ├── message.ts    # message (send to chat)
│   │       ├── spawn.ts      # spawn (background subagent)
│   │       └── cron.ts       # cron (schedule management)
│   ├── channels/
│   │   ├── base.ts           # BaseChannel interface
│   │   ├── mattermost.ts     # Mattermost (raw WebSocket + fetch)
│   │   └── manager.ts        # Channel lifecycle + outbound routing
│   ├── bus/
│   │   ├── types.ts          # InboundMessage, OutboundMessage schemas
│   │   └── queue.ts          # AsyncQueue, MessageBus
│   ├── provider/
│   │   ├── types.ts          # LLMResponse, ToolCall, ChatOptions
│   │   └── index.ts          # LLMProvider (Vercel AI SDK wrapper)
│   ├── session/
│   │   ├── types.ts          # SessionMessage, SessionMeta schemas
│   │   └── manager.ts        # Session persistence (JSONL)
│   ├── cron/
│   │   ├── types.ts          # CronJob, CronSchedule schemas
│   │   └── service.ts        # CronService (at / every / cron expr)
│   ├── heartbeat/
│   │   └── service.ts        # HeartbeatService
│   ├── config/
│   │   ├── types.ts          # Zod config schemas
│   │   └── loader.ts         # loadConfig, env overrides
│   └── cli/
│       └── commands.ts       # gateway + agent commands
├── templates/                # Default workspace files (SOUL.md, etc.)
└── skills/                   # Bundled skills (weather, github, etc.)
```
bun.lock (91 lines changed)
@@ -4,8 +4,25 @@
  "workspaces": {
    "": {
      "name": "nanobot-ts",
      "dependencies": {
        "@ai-sdk/anthropic": "^3.0.58",
        "@ai-sdk/google": "^3.0.43",
        "@ai-sdk/openai": "^3.0.41",
        "@mozilla/readability": "^0.6.0",
        "@openrouter/ai-sdk-provider": "^2.3.0",
        "ai": "^6.0.116",
        "ai-sdk-ollama": "^3.8.0",
        "commander": "^14.0.3",
        "cron-parser": "^5.5.0",
        "js-tiktoken": "^1.0.21",
        "jsonrepair": "^3.13.3",
        "node-html-parser": "^7.1.0",
        "picocolors": "^1.1.1",
        "zod": "^4.3.6",
      },
      "devDependencies": {
        "@types/bun": "latest",
        "@types/mozilla__readability": "^0.4.2",
        "oxfmt": "^0.40.0",
        "oxlint": "^1.55.0",
        "oxlint-tsgolint": "^0.16.0",
@@ -16,6 +33,24 @@
    },
  },
  "packages": {
    "@ai-sdk/anthropic": ["@ai-sdk/anthropic@3.0.58", "", { "dependencies": { "@ai-sdk/provider": "3.0.8", "@ai-sdk/provider-utils": "4.0.19" }, "peerDependencies": { "zod": "^3.25.76 || ^4.1.8" } }, "sha512-/53SACgmVukO4bkms4dpxpRlYhW8Ct6QZRe6sj1Pi5H00hYhxIrqfiLbZBGxkdRvjsBQeP/4TVGsXgH5rQeb8Q=="],

    "@ai-sdk/gateway": ["@ai-sdk/gateway@3.0.66", "", { "dependencies": { "@ai-sdk/provider": "3.0.8", "@ai-sdk/provider-utils": "4.0.19", "@vercel/oidc": "3.1.0" }, "peerDependencies": { "zod": "^3.25.76 || ^4.1.8" } }, "sha512-SIQ0YY0iMuv+07HLsZ+bB990zUJ6S4ujORAh+Jv1V2KGNn73qQKnGO0JBk+w+Res8YqOFSycwDoWcFlQrVxS4A=="],

    "@ai-sdk/google": ["@ai-sdk/google@3.0.43", "", { "dependencies": { "@ai-sdk/provider": "3.0.8", "@ai-sdk/provider-utils": "4.0.19" }, "peerDependencies": { "zod": "^3.25.76 || ^4.1.8" } }, "sha512-NGCgP5g8HBxrNdxvF8Dhww+UKfqAkZAmyYBvbu9YLoBkzAmGKDBGhVptN/oXPB5Vm0jggMdoLycZ8JReQM8Zqg=="],

    "@ai-sdk/openai": ["@ai-sdk/openai@3.0.41", "", { "dependencies": { "@ai-sdk/provider": "3.0.8", "@ai-sdk/provider-utils": "4.0.19" }, "peerDependencies": { "zod": "^3.25.76 || ^4.1.8" } }, "sha512-IZ42A+FO+vuEQCVNqlnAPYQnnUpUfdJIwn1BEDOBywiEHa23fw7PahxVtlX9zm3/zMvTW4JKPzWyvAgDu+SQ2A=="],

    "@ai-sdk/provider": ["@ai-sdk/provider@3.0.8", "", { "dependencies": { "json-schema": "^0.4.0" } }, "sha512-oGMAgGoQdBXbZqNG0Ze56CHjDZ1IDYOwGYxYjO5KLSlz5HiNQ9udIXsPZ61VWaHGZ5XW/jyjmr6t2xz2jGVwbQ=="],

    "@ai-sdk/provider-utils": ["@ai-sdk/provider-utils@4.0.19", "", { "dependencies": { "@ai-sdk/provider": "3.0.8", "@standard-schema/spec": "^1.1.0", "eventsource-parser": "^3.0.6" }, "peerDependencies": { "zod": "^3.25.76 || ^4.1.8" } }, "sha512-3eG55CrSWCu2SXlqq2QCsFjo3+E7+Gmg7i/oRVoSZzIodTuDSfLb3MRje67xE9RFea73Zao7Lm4mADIfUETKGg=="],

    "@mozilla/readability": ["@mozilla/readability@0.6.0", "", {}, "sha512-juG5VWh4qAivzTAeMzvY9xs9HY5rAcr2E4I7tiSSCokRFi7XIZCAu92ZkSTsIj1OPceCifL3cpfteP3pDT9/QQ=="],

    "@openrouter/ai-sdk-provider": ["@openrouter/ai-sdk-provider@2.3.0", "", { "peerDependencies": { "ai": "^6.0.0", "zod": "^3.25.0 || ^4.0.0" } }, "sha512-LXbXDDxCmEnXqvLH+37ZgoQzlmkmAPCbrUaTYhsuquTNVAu8BiwNHW7sEHcA3NOE49k50TRtLHce9JKcgKeoGA=="],

    "@opentelemetry/api": ["@opentelemetry/api@1.9.0", "", {}, "sha512-3giAOQvZiH5F9bMlMiv8+GSPMeqg0dbaeo58/0SlA9sxSqZhnUtxzX9/2FzyhS9sWQf5S0GJE0AKBrFqjpeYcg=="],

    "@oxfmt/binding-android-arm-eabi": ["@oxfmt/binding-android-arm-eabi@0.40.0", "", { "os": "android", "cpu": "arm" }, "sha512-S6zd5r1w/HmqR8t0CTnGjFTBLDq2QKORPwriCHxo4xFNuhmOTABGjPaNvCJJVnrKBLsohOeiDX3YqQfJPF+FXw=="],

    "@oxfmt/binding-android-arm64": ["@oxfmt/binding-android-arm64@0.40.0", "", { "os": "android", "cpu": "arm64" }, "sha512-/mbS9UUP/5Vbl2D6osIdcYiP0oie63LKMoTyGj5hyMCK/SFkl3EhtyRAfdjPvuvHC0SXdW6ePaTKkBSq1SNcIw=="],

@@ -104,22 +139,78 @@

    "@oxlint/binding-win32-x64-msvc": ["@oxlint/binding-win32-x64-msvc@1.55.0", "", { "os": "win32", "cpu": "x64" }, "sha512-ZFALNow2/og75gvYzNP7qe+rREQ5xunktwA+lgykoozHZ6hw9bqg4fn5j2UvG4gIn1FXqrZHkOAXuPf5+GOYTQ=="],

    "@standard-schema/spec": ["@standard-schema/spec@1.1.0", "", {}, "sha512-l2aFy5jALhniG5HgqrD6jXLi/rUWrKvqN/qJx6yoJsgKhblVd+iqqU4RCXavm/jPityDo5TCvKMnpjKnOriy0w=="],

    "@types/bun": ["@types/bun@1.3.10", "", { "dependencies": { "bun-types": "1.3.10" } }, "sha512-0+rlrUrOrTSskibryHbvQkDOWRJwJZqZlxrUs1u4oOoTln8+WIXBPmAuCF35SWB2z4Zl3E84Nl/D0P7803nigQ=="],

    "@types/mozilla__readability": ["@types/mozilla__readability@0.4.2", "", {}, "sha512-mXHoZZ/Knps7GABAG9D512yDLCMQNOvQBqzIZpFJnRJveHc528rltnXh6EZTi1RiB1I2Z8yV35Dzs6DhE2WaYw=="],

    "@types/node": ["@types/node@25.5.0", "", { "dependencies": { "undici-types": "~7.18.0" } }, "sha512-jp2P3tQMSxWugkCUKLRPVUpGaL5MVFwF8RDuSRztfwgN1wmqJeMSbKlnEtQqU8UrhTmzEmZdu2I6v2dpp7XIxw=="],

    "@vercel/oidc": ["@vercel/oidc@3.1.0", "", {}, "sha512-Fw28YZpRnA3cAHHDlkt7xQHiJ0fcL+NRcIqsocZQUSmbzeIKRpwttJjik5ZGanXP+vlA4SbTg+AbA3bP363l+w=="],

    "ai": ["ai@6.0.116", "", { "dependencies": { "@ai-sdk/gateway": "3.0.66", "@ai-sdk/provider": "3.0.8", "@ai-sdk/provider-utils": "4.0.19", "@opentelemetry/api": "1.9.0" }, "peerDependencies": { "zod": "^3.25.76 || ^4.1.8" } }, "sha512-7yM+cTmyRLeNIXwt4Vj+mrrJgVQ9RMIW5WO0ydoLoYkewIvsMcvUmqS4j2RJTUXaF1HphwmSKUMQ/HypNRGOmA=="],

    "ai-sdk-ollama": ["ai-sdk-ollama@3.8.0", "", { "dependencies": { "@ai-sdk/provider": "^3.0.8", "@ai-sdk/provider-utils": "^4.0.15", "jsonrepair": "^3.13.2", "ollama": "^0.6.3" }, "peerDependencies": { "ai": "^6.0.89" } }, "sha512-Nlla8FpK8QFMNh9m8sPCZoNqnr+n+Ud0QTqpXNds4j/b/lbVZGaji13ZcRuuFvBwPwd4xnFkNrijJzi70Ih1Tg=="],

    "base64-js": ["base64-js@1.5.1", "", {}, "sha512-AKpaYlHn8t4SVbOHCy+b5+KKgvR4vrsD8vbvrbiQJps7fKDTkjkDry6ji0rUJjC0kzbNePLwzxq8iypo41qeWA=="],

    "boolbase": ["boolbase@1.0.0", "", {}, "sha512-JZOSA7Mo9sNGB8+UjSgzdLtokWAky1zbztM3WRLCbZ70/3cTANmQmOdR7y2g+J0e2WXywy1yS468tY+IruqEww=="],

    "bun-types": ["bun-types@1.3.10", "", { "dependencies": { "@types/node": "*" } }, "sha512-tcpfCCl6XWo6nCVnpcVrxQ+9AYN1iqMIzgrSKYMB/fjLtV2eyAVEg7AxQJuCq/26R6HpKWykQXuSOq/21RYcbg=="],

    "commander": ["commander@14.0.3", "", {}, "sha512-H+y0Jo/T1RZ9qPP4Eh1pkcQcLRglraJaSLoyOtHxu6AapkjWVCy2Sit1QQ4x3Dng8qDlSsZEet7g5Pq06MvTgw=="],

    "cron-parser": ["cron-parser@5.5.0", "", { "dependencies": { "luxon": "^3.7.1" } }, "sha512-oML4lKUXxizYswqmxuOCpgFS8BNUJpIu6k/2HVHyaL8Ynnf3wdf9tkns0yRdJLSIjkJ+b0DXHMZEHGpMwjnPww=="],

    "css-select": ["css-select@5.2.2", "", { "dependencies": { "boolbase": "^1.0.0", "css-what": "^6.1.0", "domhandler": "^5.0.2", "domutils": "^3.0.1", "nth-check": "^2.0.1" } }, "sha512-TizTzUddG/xYLA3NXodFM0fSbNizXjOKhqiQQwvhlspadZokn1KDy0NZFS0wuEubIYAV5/c1/lAr0TaaFXEXzw=="],

    "css-what": ["css-what@6.2.2", "", {}, "sha512-u/O3vwbptzhMs3L1fQE82ZSLHQQfto5gyZzwteVIEyeaY5Fc7R4dapF/BvRoSYFeqfBk4m0V1Vafq5Pjv25wvA=="],

    "dom-serializer": ["dom-serializer@2.0.0", "", { "dependencies": { "domelementtype": "^2.3.0", "domhandler": "^5.0.2", "entities": "^4.2.0" } }, "sha512-wIkAryiqt/nV5EQKqQpo3SToSOV9J0DnbJqwK7Wv/Trc92zIAYZ4FlMu+JPFW1DfGFt81ZTCGgDEabffXeLyJg=="],

    "domelementtype": ["domelementtype@2.3.0", "", {}, "sha512-OLETBj6w0OsagBwdXnPdN0cnMfF9opN69co+7ZrbfPGrdpPVNBUj02spi6B1N7wChLQiPn4CSH/zJvXw56gmHw=="],

    "domhandler": ["domhandler@5.0.3", "", { "dependencies": { "domelementtype": "^2.3.0" } }, "sha512-cgwlv/1iFQiFnU96XXgROh8xTeetsnJiDsTc7TYCLFd9+/WNkIqPTxiM/8pSd8VIrhXGTf1Ny1q1hquVqDJB5w=="],

    "domutils": ["domutils@3.2.2", "", { "dependencies": { "dom-serializer": "^2.0.0", "domelementtype": "^2.3.0", "domhandler": "^5.0.3" } }, "sha512-6kZKyUajlDuqlHKVX1w7gyslj9MPIXzIFiz/rGu35uC1wMi+kMhQwGhl4lt9unC9Vb9INnY9Z3/ZA3+FhASLaw=="],

    "entities": ["entities@4.5.0", "", {}, "sha512-V0hjH4dGPh9Ao5p0MoRY6BVqtwCjhz6vI5LT8AJ55H+4g9/4vbHx1I54fS0XuclLhDHArPQCiMjDxjaL8fPxhw=="],

    "eventsource-parser": ["eventsource-parser@3.0.6", "", {}, "sha512-Vo1ab+QXPzZ4tCa8SwIHJFaSzy4R6SHf7BY79rFBDf0idraZWAkYrDjDj8uWaSm3S2TK+hJ7/t1CEmZ7jXw+pg=="],

    "he": ["he@1.2.0", "", { "bin": { "he": "bin/he" } }, "sha512-F/1DnUGPopORZi0ni+CvrCgHQ5FyEAHRLSApuYWMmrbSwoN2Mn/7k+Gl38gJnR7yyDZk6WLXwiGod1JOWNDKGw=="],

    "js-tiktoken": ["js-tiktoken@1.0.21", "", { "dependencies": { "base64-js": "^1.5.1" } }, "sha512-biOj/6M5qdgx5TKjDnFT1ymSpM5tbd3ylwDtrQvFQSu0Z7bBYko2dF+W/aUkXUPuk6IVpRxk/3Q2sHOzGlS36g=="],

    "json-schema": ["json-schema@0.4.0", "", {}, "sha512-es94M3nTIfsEPisRafak+HDLfHXnKBhV3vU5eqPcS3flIWqcxJWgXHXiey3YrpaNsanY5ei1VoYEbOzijuq9BA=="],

    "jsonrepair": ["jsonrepair@3.13.3", "", { "bin": { "jsonrepair": "bin/cli.js" } }, "sha512-BTznj0owIt2CBAH/LTo7+1I5pMvl1e1033LRl/HUowlZmJOIhzC0zbX5bxMngLkfT4WnzPP26QnW5wMr2g9tsQ=="],

    "luxon": ["luxon@3.7.2", "", {}, "sha512-vtEhXh/gNjI9Yg1u4jX/0YVPMvxzHuGgCm6tC5kZyb08yjGWGnqAjGJvcXbqQR2P3MyMEFnRbpcdFS6PBcLqew=="],

    "node-html-parser": ["node-html-parser@7.1.0", "", { "dependencies": { "css-select": "^5.1.0", "he": "1.2.0" } }, "sha512-iJo8b2uYGT40Y8BTyy5ufL6IVbN8rbm/1QK2xffXU/1a/v3AAa0d1YAoqBNYqaS4R/HajkWIpIfdE6KcyFh1AQ=="],

    "nth-check": ["nth-check@2.1.1", "", { "dependencies": { "boolbase": "^1.0.0" } }, "sha512-lqjrjmaOoAnWfMmBPL+XNnynZh2+swxiX3WUE0s4yEHI6m+AwrK2UZOimIRl3X/4QctVqS8AiZjFqyOGrMXb/w=="],

    "ollama": ["ollama@0.6.3", "", { "dependencies": { "whatwg-fetch": "^3.6.20" } }, "sha512-KEWEhIqE5wtfzEIZbDCLH51VFZ6Z3ZSa6sIOg/E/tBV8S51flyqBOXi+bRxlOYKDf8i327zG9eSTb8IJxvm3Zg=="],

    "oxfmt": ["oxfmt@0.40.0", "", { "dependencies": { "tinypool": "2.1.0" }, "optionalDependencies": { "@oxfmt/binding-android-arm-eabi": "0.40.0", "@oxfmt/binding-android-arm64": "0.40.0", "@oxfmt/binding-darwin-arm64": "0.40.0", "@oxfmt/binding-darwin-x64": "0.40.0", "@oxfmt/binding-freebsd-x64": "0.40.0", "@oxfmt/binding-linux-arm-gnueabihf": "0.40.0", "@oxfmt/binding-linux-arm-musleabihf": "0.40.0", "@oxfmt/binding-linux-arm64-gnu": "0.40.0", "@oxfmt/binding-linux-arm64-musl": "0.40.0", "@oxfmt/binding-linux-ppc64-gnu": "0.40.0", "@oxfmt/binding-linux-riscv64-gnu": "0.40.0", "@oxfmt/binding-linux-riscv64-musl": "0.40.0", "@oxfmt/binding-linux-s390x-gnu": "0.40.0", "@oxfmt/binding-linux-x64-gnu": "0.40.0", "@oxfmt/binding-linux-x64-musl": "0.40.0", "@oxfmt/binding-openharmony-arm64": "0.40.0", "@oxfmt/binding-win32-arm64-msvc": "0.40.0", "@oxfmt/binding-win32-ia32-msvc": "0.40.0", "@oxfmt/binding-win32-x64-msvc": "0.40.0" }, "bin": { "oxfmt": "bin/oxfmt" } }, "sha512-g0C3I7xUj4b4DcagevM9kgH6+pUHytikxUcn3/VUkvzTNaaXBeyZqb7IBsHwojeXm4mTBEC/aBjBTMVUkZwWUQ=="],

    "oxlint": ["oxlint@1.55.0", "", { "optionalDependencies": { "@oxlint/binding-android-arm-eabi": "1.55.0", "@oxlint/binding-android-arm64": "1.55.0", "@oxlint/binding-darwin-arm64": "1.55.0", "@oxlint/binding-darwin-x64": "1.55.0", "@oxlint/binding-freebsd-x64": "1.55.0", "@oxlint/binding-linux-arm-gnueabihf": "1.55.0", "@oxlint/binding-linux-arm-musleabihf": "1.55.0", "@oxlint/binding-linux-arm64-gnu": "1.55.0", "@oxlint/binding-linux-arm64-musl": "1.55.0", "@oxlint/binding-linux-ppc64-gnu": "1.55.0", "@oxlint/binding-linux-riscv64-gnu": "1.55.0", "@oxlint/binding-linux-riscv64-musl": "1.55.0", "@oxlint/binding-linux-s390x-gnu": "1.55.0", "@oxlint/binding-linux-x64-gnu": "1.55.0", "@oxlint/binding-linux-x64-musl": "1.55.0", "@oxlint/binding-openharmony-arm64": "1.55.0", "@oxlint/binding-win32-arm64-msvc": "1.55.0", "@oxlint/binding-win32-ia32-msvc": "1.55.0", "@oxlint/binding-win32-x64-msvc": "1.55.0" }, "peerDependencies": { "oxlint-tsgolint": ">=0.15.0" }, "optionalPeers": ["oxlint-tsgolint"], "bin": { "oxlint": "bin/oxlint" } }, "sha512-T+FjepiyWpaZMhekqRpH8Z3I4vNM610p6w+Vjfqgj5TZUxHXl7N8N5IPvmOU8U4XdTRxqtNNTh9Y4hLtr7yvFg=="],

    "oxlint-tsgolint": ["oxlint-tsgolint@0.16.0", "", { "optionalDependencies": { "@oxlint-tsgolint/darwin-arm64": "0.16.0", "@oxlint-tsgolint/darwin-x64": "0.16.0", "@oxlint-tsgolint/linux-arm64": "0.16.0", "@oxlint-tsgolint/linux-x64": "0.16.0", "@oxlint-tsgolint/win32-arm64": "0.16.0", "@oxlint-tsgolint/win32-x64": "0.16.0" }, "bin": { "tsgolint": "bin/tsgolint.js" } }, "sha512-4RuJK2jP08XwqtUu+5yhCbxEauCm6tv2MFHKEMsjbosK2+vy5us82oI3VLuHwbNyZG7ekZA26U2LLHnGR4frIA=="],

    "picocolors": ["picocolors@1.1.1", "", {}, "sha512-xceH2snhtb5M9liqDsmEw56le376mTZkEX/jEb/RxNFyegNul7eNslCXP9FDj/Lcu0X8KEyMceP2ntpaHrDEVA=="],

    "tinypool": ["tinypool@2.1.0", "", {}, "sha512-Pugqs6M0m7Lv1I7FtxN4aoyToKg1C4tu+/381vH35y8oENM/Ai7f7C4StcoK4/+BSw9ebcS8jRiVrORFKCALLw=="],

    "typescript": ["typescript@5.9.3", "", { "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" } }, "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw=="],

    "undici-types": ["undici-types@7.18.2", "", {}, "sha512-AsuCzffGHJybSaRrmr5eHr81mwJU3kjw6M+uprWvCXiNeN9SOGwQ3Jn8jb8m3Z6izVgknn1R0FTCEAP2QrLY/w=="],

    "whatwg-fetch": ["whatwg-fetch@3.6.20", "", {}, "sha512-EqhiFU6daOA8kpjOWTL0olhVOF3i7OrFzSYiGsEMB8GcXS+RrzauAERX65xMeNWVqxA6HXH2m69Z9LaKKdisfg=="],

    "zod": ["zod@4.3.6", "", {}, "sha512-rftlrkhHZOcjDwkGlnUtZZkvaPHCsDATp4pGpuOOMDaTdDDXF91wuVDJoWoPsKX/3YPQ5fHuF3STjcYyKr+Qhg=="],
  }
}
docs/API.md (new file, 233 lines)
@@ -0,0 +1,233 @@
# API Reference

This document describes the tool interface exposed to the LLM and the internal APIs for extending nanobot.

## Tool Interface

All tools implement the `Tool` interface from `src/agent/tools/base.ts`:

```typescript
interface Tool {
  name: string;                          // Tool identifier
  description: string;                   // LLM-readable description
  parameters: Record<string, unknown>;   // JSON Schema object
  execute(args: Record<string, unknown>): Promise<string>;
}
```
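A minimal object satisfying this interface might look like the following. This is an illustrative sketch only: the `echo` tool does not exist in the codebase, and registration via `ToolRegistry` is deliberately not shown since its exact API is not documented here.

```typescript
// Hypothetical example tool implementing the Tool interface above.
const echoTool = {
  name: "echo",
  description: "Echo the given text back to the model.",
  parameters: {
    type: "object",
    properties: { text: { type: "string", description: "Text to echo" } },
    required: ["text"],
  },
  async execute(args: Record<string, unknown>): Promise<string> {
    // Tools always return a string result for the LLM to read
    return String(args.text ?? "");
  },
};

// The agent loop calls execute with JSON args produced by the LLM:
echoTool.execute({ text: "hello" }).then((result) => console.log(result)); // prints "hello"
```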
## Built-in Tools

### read_file

Read a file from the filesystem.

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| path | string | yes | Absolute or relative file path |
| offset | number | no | Line number to start from (1-indexed) |
| limit | number | no | Maximum number of lines to read |

**Returns**: Line-numbered content (e.g., `1: first line\n2: second line`)
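The return format above can be sketched as a small helper (illustrative, not the tool's actual implementation):

```typescript
// Render file content in the "N: line" format that read_file returns.
function numberLines(content: string, offset = 1): string {
  return content
    .split("\n")
    .map((line, i) => `${i + offset}: ${line}`)
    .join("\n");
}

console.log(numberLines("first line\nsecond line"));
// 1: first line
// 2: second line
```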
### write_file

Write content to a file, creating parent directories as needed.

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| path | string | yes | File path to write |
| content | string | yes | Content to write |

**Returns**: Success message or error

### edit_file

Replace an exact string in a file.

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| path | string | yes | File path to edit |
| oldString | string | yes | Exact string to replace |
| newString | string | yes | Replacement string |
| replaceAll | boolean | no | Replace all occurrences |

**Returns**: Success message or error if oldString not found

### list_dir

List files in a directory.

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| path | string | yes | Directory path |
| recursive | boolean | no | List recursively |

**Returns**: One file/directory per line, directories suffixed with `/`

### exec

Execute a shell command.

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| command | string | yes | Shell command to execute |
| timeout | number | no | Timeout in seconds (default: 120) |
| workdir | string | no | Working directory override |

**Returns**: Combined stdout + stderr
### web_search

Search the web using the Brave Search API.

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| query | string | yes | Search query |
| count | number | no | Number of results (default: 10) |

**Returns**: JSON array of `{ title, url, snippet }` objects

### web_fetch

Fetch and parse a URL.

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| url | string | yes | URL to fetch |
| mode | string | no | `markdown` (default), `raw`, or `html` |

**Returns**:
- HTML pages: extracted readable text (via Readability)
- JSON: pretty-printed JSON
- Other: raw text

### message

Send a message to the current chat channel.

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| content | string | yes | Message content |

**Returns**: Success confirmation

### spawn

Spawn a background subagent for long-running tasks.

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| task | string | yes | Task description for the subagent |

**Returns**: Spawn confirmation with subagent ID

### cron

Manage scheduled tasks.

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| action | string | yes | `list`, `add`, `remove`, `enable`, `disable`, `run`, `status` |
| id | string | conditional | Job ID (for remove/enable/disable/run) |
| name | string | conditional | Job name (for add) |
| message | string | conditional | Task message (for add) |
| schedule | string | conditional | Schedule expression (for add) |
| deleteAfterRun | boolean | no | Delete after one execution |

**Schedule formats**:
- `every Ns/m/h/d` — e.g., `every 30m`
- `at YYYY-MM-DD HH:MM` — one-time
- Cron expression — e.g., `0 9 * * 1-5`

**Returns**: Action-specific response (job list, confirmation, status)
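The `every` form reduces to a fixed interval; a hypothetical parser for just that form (the real `CronService` in `src/cron/service.ts` also handles `at ...` and cron expressions, and may differ from this sketch):

```typescript
// Interval in milliseconds per unit suffix: s, m, h, d.
const UNIT_MS: Record<string, number> = { s: 1_000, m: 60_000, h: 3_600_000, d: 86_400_000 };

// Parse "every N<unit>" into milliseconds; returns null for other formats.
function everyToMs(expr: string): number | null {
  const match = /^every (\d+)([smhd])$/.exec(expr);
  if (match === null) return null; // "at ..." or a cron expression
  return Number(match[1]) * UNIT_MS[match[2]];
}

console.log(everyToMs("every 30m")); // 1800000
```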
## Internal APIs
|
||||
|
||||
### BaseChannel
|
||||
|
||||
Extend to create new channel types:
|
||||
|
||||
```typescript
|
||||
abstract class BaseChannel {
|
||||
_bus: MessageBus;
|
||||
abstract start(): Promise<void>;
|
||||
abstract stop(): void;
|
||||
abstract send(chatId: string, content: string, metadata?: Record<string, unknown>): Promise<void>;
|
||||
isAllowed(senderId: string, allowFrom: string[]): boolean;
|
||||
}
|
||||
```
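
A minimal concrete channel built on this interface might look like the sketch below. `EchoChannel`, the stubbed `MessageBus` type, and the empty-allowlist-means-allow-all policy are all illustrative assumptions, not part of the codebase:

```typescript
// Illustrative sketch: a trivial channel that records sent messages in memory.
// The real MessageBus lives in src/bus/queue.ts; this stub only shapes the constructor.
type MessageBus = { publishInbound(msg: unknown): void };

abstract class BaseChannel {
  constructor(protected _bus: MessageBus) {}
  abstract start(): Promise<void>;
  abstract stop(): void;
  abstract send(chatId: string, content: string): Promise<void>;
  // Assumption: an empty allowFrom list means "allow everyone".
  isAllowed(senderId: string, allowFrom: string[]): boolean {
    return allowFrom.length === 0 || allowFrom.includes(senderId);
  }
}

class EchoChannel extends BaseChannel {
  sent: string[] = [];
  async start(): Promise<void> {}
  stop(): void {}
  async send(_chatId: string, content: string): Promise<void> {
    this.sent.push(content); // a real channel would call the platform API here
  }
}
```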

### MessageBus

```typescript
class MessageBus {
  publishInbound(msg: InboundMessage): void;
  consumeInbound(): Promise<InboundMessage>;
  publishOutbound(msg: OutboundMessage): void;
  consumeOutbound(): Promise<OutboundMessage>;
}
```

### InboundMessage

```typescript
type InboundMessage = {
  channel: string;   // 'mattermost', 'cli', 'system'
  senderId: string;  // User identifier
  chatId: string;    // Conversation identifier
  content: string;   // Message text
  metadata: Record<string, unknown>;
  media?: string[];  // Optional media URLs
};
```

### OutboundMessage

```typescript
type OutboundMessage = {
  channel: string;
  chatId: string;
  content: string | null;
  metadata: Record<string, unknown>;
  media?: string[];
};
```

### LLMProvider

```typescript
class LLMProvider {
  defaultModel: string;
  chat(opts: ChatOptions): Promise<{ response: LLMResponse; responseMessages: ModelMessage[] }>;
  chatWithRetry(opts: ChatOptions): Promise<{ response: LLMResponse; responseMessages: ModelMessage[] }>;
}
```

### Session

```typescript
class Session {
  key: string;
  messages: SessionMessage[];
  createdAt: string;
  updatedAt: string;
  lastConsolidated: number;
  getHistory(maxMessages?: number): SessionMessage[];
  clear(): void;
}
```

### CronService

```typescript
class CronService {
  listJobs(): CronJob[];
  addJob(job: Omit<CronJob, 'state' | 'createdAtMs' | 'updatedAtMs'>): CronJob;
  removeJob(id: string): boolean;
  enableJob(id: string, enabled: boolean): boolean;
  runJob(id: string): Promise<string>;
  status(): string;
  start(): void;
  stop(): void;
}
```

150 docs/Architecture.md Normal file
@@ -0,0 +1,150 @@

# Architecture

## Tech Stack

| Layer | Technology |
|-------|------------|
| Runtime | Bun (v1.0+) |
| Language | TypeScript (strict mode) |
| LLM Abstraction | Vercel AI SDK v6 |
| Validation | Zod v4 |
| CLI | Commander |
| Colors | picocolors |
| Formatting | oxfmt (single quotes) |
| Linting | oxlint |

## Folder Structure

```
nanobot-ts/
├── index.ts                  # Entry point
├── src/
│   ├── agent/
│   │   ├── loop.ts           # AgentLoop: LLM ↔ tool execution loop
│   │   ├── context.ts        # ContextBuilder: system prompt assembly
│   │   ├── memory.ts         # MemoryConsolidator: token management
│   │   ├── skills.ts         # Skill loader from workspace
│   │   ├── subagent.ts       # SubagentManager: background tasks
│   │   └── tools/
│   │       ├── base.ts       # Tool interface + ToolRegistry
│   │       ├── filesystem.ts # read_file, write_file, edit_file, list_dir
│   │       ├── shell.ts      # exec
│   │       ├── web.ts        # web_search, web_fetch
│   │       ├── message.ts    # message
│   │       ├── spawn.ts      # spawn
│   │       └── cron.ts       # cron
│   ├── channels/
│   │   ├── base.ts           # BaseChannel abstract class
│   │   ├── mattermost.ts     # Mattermost WebSocket + REST
│   │   └── manager.ts        # ChannelManager lifecycle
│   ├── bus/
│   │   ├── types.ts          # InboundMessage, OutboundMessage schemas
│   │   └── queue.ts          # AsyncQueue, MessageBus
│   ├── provider/
│   │   ├── types.ts          # LLMResponse, ToolCall, ChatOptions
│   │   └── index.ts          # LLMProvider (AI SDK wrapper)
│   ├── session/
│   │   ├── types.ts          # SessionMessage, SessionMeta schemas
│   │   └── manager.ts        # Session persistence (JSONL)
│   ├── cron/
│   │   ├── types.ts          # CronJob, CronSchedule schemas
│   │   └── service.ts        # CronService
│   ├── heartbeat/
│   │   └── service.ts        # HeartbeatService
│   ├── config/
│   │   ├── types.ts          # Zod config schemas
│   │   └── loader.ts         # loadConfig, env overrides
│   └── cli/
│       └── commands.ts       # gateway + agent commands
├── templates/                # Default workspace files
│   ├── SOUL.md               # Agent personality
│   ├── USER.md               # User preferences
│   ├── TOOLS.md              # Tool documentation
│   ├── AGENTS.md             # Agent behavior rules
│   ├── HEARTBEAT.md          # Periodic tasks
│   └── memory/MEMORY.md      # Long-term memory
└── skills/                   # Bundled skills
```

## Data Flow

```
┌─────────────────────────────────────────────────────────────────┐
│                          Gateway Mode                           │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Mattermost ──► BaseChannel ──► MessageBus ──► AgentLoop        │
│      ▲              │                              │            │
│      │              ▼                              ▼            │
│      │        OutboundQueue                  LLMProvider        │
│      │              │                              │            │
│      └──────────────┘                              ▼            │
│                                              ToolRegistry       │
│                                                    │            │
│                                                    ▼            │
│                                             Tool.execute()      │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────────┐
│                           Agent Mode                            │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  CLI stdin ──► processDirect() ──► AgentLoop ──► Response       │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

## Key Components

### AgentLoop
The core orchestrator. Consumes inbound messages, runs the LLM tool-calling loop, and publishes responses.

1. Receives `InboundMessage` from bus
2. Loads/creates session by key
3. Builds context (system prompt + history)
4. Calls LLM with tools
5. Executes tool calls, appends results
6. Repeats until no tool calls or max iterations
7. Saves session, publishes response
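
The seven steps above boil down to a bounded loop. A heavily simplified sketch with stubbed LLM and tool callbacks (`runLoop` and the record-of-functions tool shape are illustrative, not the real `AgentLoop`):

```typescript
// Illustrative sketch of the tool-calling loop (steps 4-6 above).
type ToolCall = { name: string; args: Record<string, unknown> };
type LLMReply = { content: string; toolCalls: ToolCall[] };

async function runLoop(
  callLLM: (history: string[]) => Promise<LLMReply>,
  tools: Record<string, (args: Record<string, unknown>) => Promise<string>>,
  userMessage: string,
  maxIterations = 10,
): Promise<string> {
  const history = [userMessage];
  for (let i = 0; i < maxIterations; i++) {
    const reply = await callLLM(history);           // step 4: call LLM with tools
    if (reply.toolCalls.length === 0) return reply.content; // step 6: no tool calls, done
    for (const call of reply.toolCalls) {
      const result = await tools[call.name](call.args);     // step 5: execute tool
      history.push(`tool:${call.name} -> ${result}`);       // append result to context
    }
  }
  return 'max iterations reached';
}
```

Session load/save (steps 2 and 7) wraps this loop in the real implementation.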

### MessageBus
An async queue system for decoupling channels from the agent loop.

- `publishInbound()` / `consumeInbound()`: messages from channels to agent
- `publishOutbound()` / `consumeOutbound()`: responses from agent to channels

### LLMProvider
Wraps Vercel AI SDK `generateText()` with:

- Model string resolution (e.g., `openrouter/anthropic/claude-sonnet-4-5`)
- Retry logic (3 attempts, exponential backoff)
- Malformed JSON repair
- Normalized `LLMResponse` type

### SessionManager
Persists conversation history to JSONL files in `~/.config/nanobot/sessions/`.

- Key format: `{channel}:{chatId}` (e.g., `mattermost:abc123`)
- Supports history truncation for context window limits

### ToolRegistry
Stores tools by name, provides OpenAI-compatible function definitions to the LLM.

### MemoryConsolidator
When session history exceeds token limits, summarizes old messages and archives them to `memory/MEMORY.md`.

## Configuration

- File: `~/.config/nanobot/config.json`
- Validation: Zod schemas in `src/config/types.ts`
- Env overrides: `NANOBOT_MODEL`, `NANOBOT_WORKSPACE`, `NANOBOT_CONFIG`

## Session Key Convention

| Channel | Key Format | Example |
|---------|------------|---------|
| Mattermost | `mattermost:{channelId}` | `mattermost:abc123` |
| Mattermost (thread) | `mattermost:{channelId}:{rootId}` | `mattermost:abc:def456` |
| CLI | `cli:{chatId}` | `cli:interactive` |
| System | `system:{source}` | `system:heartbeat` |
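
All four key formats in the table reduce to colon-joined segments. A sketch of a helper that would produce them (`makeSessionKey` is a hypothetical name; the codebase may build keys inline):

```typescript
// Sketch: build a session key from channel, chat, and optional thread root.
function makeSessionKey(channel: string, chatId: string, rootId?: string): string {
  return rootId ? `${channel}:${chatId}:${rootId}` : `${channel}:${chatId}`;
}
```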

151 docs/Discoveries.md Normal file
@@ -0,0 +1,151 @@

# Discoveries

Empirical learnings from implementation that future sessions should know.

## Zod v4 Specifics

### `.default()` on Nested Objects

Zod v4 requires factory functions for nested object defaults, and the factory must return the **full output type** (not just `{}`):

```typescript
// ❌ Wrong - empty object won't match the schema
const Config = z.object({
  nested: NestedSchema.default({}),
});

// ✅ Correct - factory returning the full type
const Config = z.object({
  nested: NestedSchema.default(() => ({ field: value, ... })),
});
```

### `z.record()` Requires Two Arguments

```typescript
// ❌ Wrong
z.record(z.string())

// ✅ Correct
z.record(z.string(), z.unknown())
```

## AI SDK v6 Changes

| v4/v5 | v6 |
|-------|-----|
| `LanguageModelV2` | `LanguageModel` |
| `maxTokens` | `maxOutputTokens` |
| `maxSteps` | `stopWhen: stepCountIs(n)` |
| `usage.promptTokens` | `usage.inputTokens` |
| `usage.completionTokens` | `usage.outputTokens` |

## ollama-ai-provider Compatibility

`ollama-ai-provider` v1.2.0 returns `LanguageModelV1`, not the expected `LanguageModel` (v2/v3). Cast at call site:

```typescript
import { ollama } from 'ollama-ai-provider';
import type { LanguageModel } from 'ai';

const model = ollama('llama3.2') as unknown as LanguageModel;
```

## js-tiktoken API

```typescript
// ❌ Wrong (Python-style)
import { get_encoding } from 'js-tiktoken';

// ✅ Correct
import { getEncoding } from 'js-tiktoken';
```

## Bun/Node Globals

`Document` is not available as a global in Bun/Node. For DOM-like operations:

```typescript
// ❌ Wrong
function makeDocument(): Document { ... }

// ✅ Correct
function makePseudoDocument(): Record<string, unknown> { ... }
// Cast at call site if needed
```

## WebSocket Error Types

The WebSocket `onerror` handler receives an `Event`, not an `Error`:

```typescript
socket.onerror = (err: Event) => {
  console.error(`WebSocket error: ${err.type}`);
};
```

## Template Literals with Unknown Types

When interpolating `unknown` values in template literals, explicitly convert to string:

```typescript
// ❌ Risky - may throw
console.log(`Error: ${err}`);

// ✅ Safe
console.log(`Error: ${String(err)}`);
```

## Helper: strArg

For safely extracting string arguments from `Record<string, unknown>`:

```typescript
// src/agent/tools/base.ts
export function strArg(args: Record<string, unknown>, key: string, fallback = ''): string {
  const val = args[key];
  return typeof val === 'string' ? val : fallback;
}
```

Usage:

```typescript
// ❌ Verbose
const path = String(args['path'] ?? '');

// ✅ Cleaner
const path = strArg(args, 'path');
const timeout = parseInt(strArg(args, 'timeout', '30'), 10);
```

## Mattermost WebSocket

- Uses raw `WebSocket` + `fetch` (no mattermostdriver library)
- Auth via hello message with token
- Event types: `posted`, `post_edited`, `reaction_added`, etc.
- Group channel policy: `mention` (default), `open`, `allowlist`

## Session Persistence

- Format: JSONL (one JSON object per line)
- Location: `~/.config/nanobot/sessions/{sessionKey}.jsonl`
- Tool results truncated at 16,000 characters
- Memory consolidation triggered when approaching the context window limit
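
The JSONL format is just one `JSON.stringify` per line on write and one `JSON.parse` per line on read. A minimal in-memory sketch (the real manager writes to files; `toJsonl`/`fromJsonl` are illustrative names):

```typescript
// Sketch: JSONL encode/decode for session messages (in-memory, no fs).
type SessionMessage = { role: string; content: string };

function toJsonl(messages: SessionMessage[]): string {
  return messages.map((m) => JSON.stringify(m)).join('\n');
}

function fromJsonl(text: string): SessionMessage[] {
  return text
    .split('\n')
    .filter((line) => line.trim() !== '') // tolerate trailing newline / blank lines
    .map((line) => JSON.parse(line) as SessionMessage);
}
```

A nice property of JSONL for sessions: appending a message is a single line append, no rewrite of the whole file.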

## Retry Logic

`LLMProvider.chatWithRetry()` retries on:
- HTTP 429 (rate limit)
- HTTP 5xx (server errors)
- Timeouts
- Network errors

Max 3 attempts with exponential backoff.
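
A sketch of that policy in isolation (`withRetry` is an illustrative name; the real delays and transient-error classification live in `src/provider/index.ts`):

```typescript
// Sketch: retry up to `attempts` times with exponential backoff (1x, 2x, 4x...).
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 1000,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err; // a real implementation would rethrow non-transient errors here
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```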

## Config Precedence

1. CLI flags (`-c`, `-m`, `-w`, `-M`)
2. Environment variables (`NANOBOT_CONFIG`, `NANOBOT_MODEL`, `NANOBOT_WORKSPACE`)
3. Config file (`~/.config/nanobot/config.json`)
4. Zod schema defaults

64 docs/PRD.md Normal file
@@ -0,0 +1,64 @@

# Product Requirements Document (PRD)

## Overview

nanobot is an ultra-lightweight personal AI assistant framework. It provides a chat-controlled bot that can execute tasks through natural language commands, with pluggable "channels" for different messaging platforms.

## Target Audience

- Individual developers and power users who want a personal AI assistant
- Users who prefer self-hosted, privacy-respecting AI tools
- Teams using Mattermost who want an integrated AI assistant
- Users who need AI assistance with file operations, shell commands, and web searches

## Core Features

### 1. Agent Loop
- Conversational AI powered by LLMs (Anthropic, OpenAI, Google, OpenRouter, Ollama)
- Tool execution with iterative refinement
- Session management with persistent conversation history
- Memory consolidation to manage context window limits

### 2. Tool System
- **Filesystem**: read, write, edit, list files
- **Shell**: execute arbitrary commands with configurable security constraints
- **Web**: search (Brave), fetch and parse URLs
- **Message**: send intermediate updates to chat channels
- **Spawn**: delegate long-running tasks to background subagents
- **Cron**: schedule recurring tasks

### 3. Channel System
- **Mattermost**: WebSocket-based real-time messaging with REST API for posts
- **CLI**: local interactive terminal or single-shot mode
- Extensible channel interface for future platforms

### 4. Scheduling
- **Cron Service**: schedule tasks with cron expressions, intervals, or one-time execution
- **Heartbeat**: periodic wake-up to check for tasks (e.g., HEARTBEAT.md)

### 5. Memory & Skills
- Long-term memory with consolidation
- Skill loading from workspace
- System prompt construction from templates (SOUL.md, USER.md, TOOLS.md)

## Non-Goals (Out of Scope)

- Non-Mattermost channels (Telegram, Discord, Slack, etc.)
- MCP (Model Context Protocol) client support
- Extended thinking/reasoning token handling
- Onboard configuration wizard
- Multi-tenancy or user authentication

## User Stories

1. As a developer, I want to ask the AI to read and modify files in my workspace so I can work faster.
2. As a team lead, I want the bot to respond in Mattermost channels when mentioned so my team can get AI help without leaving chat.
3. As a power user, I want to schedule recurring tasks so the AI can check things automatically.
4. As a privacy-conscious user, I want to run the bot locally with Ollama so my data stays on my machine.

## Success Metrics

- Zero external dependencies for core functionality beyond LLM providers
- Sub-second response time for tool execution
- Graceful degradation on LLM errors
- Clear error messages for configuration issues

5 index.ts
@@ -1 +1,4 @@
-console.log("Hello via Bun!");
+import { createCli } from './src/cli/commands.ts';
+
+const program = createCli();
+program.parse(process.argv);

40 memory-bank/activeContext.md Normal file
@@ -0,0 +1,40 @@

# Active Context

## Current Focus
Docs directory created with 4 files (PRD.md, Architecture.md, API.md, Discoveries.md). All source files previously written and verified — typecheck and lint are both clean.

## Session State (as of this writing)
- All source files complete and passing `tsc --noEmit` (0 errors) and `oxlint` (0 errors, 0 warnings)
- `package.json` scripts added: `start`, `dev`, `typecheck`
- Ready for runtime / integration testing

## Key Fixes Applied This Session
- **Zod v4 `.default()`**: nested object schemas need factory functions returning full output types (e.g. `.default(() => ({ field: value, ... }))`)
- **Zod v4 `z.record()`**: requires two args: `z.record(z.string(), z.unknown())`
- **AI SDK v6**: `LanguageModelV2` → `LanguageModel`; `maxTokens` → `maxOutputTokens`; `maxSteps` → `stopWhen: stepCountIs(1)`; usage fields: `inputTokens` / `outputTokens` (not `promptTokens` / `completionTokens`)
- **ollama-ai-provider v1.2.0**: returns `LanguageModelV1` — cast with `as unknown as LanguageModel`
- **`js-tiktoken`**: `get_encoding` → `getEncoding`
- **`web.ts`**: `Document` global not available in Bun/Node — return `Record<string, unknown>` from `makePseudoDocument`, cast at call site
- **`loop.ts`**: syntax error `ContextBuilder(workspace: ...)` → `new ContextBuilder(opts.workspace)`
- **lint**: all `${err}` in template literals → `${String(err)}`; `String(args['key'] ?? '')` → `strArg(args, 'key')` helper; unused `onProgress` param → `_onProgress`; WebSocket `onerror` `err` type is `Event` → use `err.type`

## Work Queue (next steps)
1. [x] Create workspace helper module (src/cli/utils.ts) with ensureWorkspace() and syncTemplates()
2. [x] Create onboard command (src/cli/onboard.ts) with path argument and directory-not-empty guard
3. [x] Agent/gateway commands check workspace exists (throw if not found)
4. [x] Added required `provider` field to agent config (values: anthropic, openai, google, openrouter, ollama)
5. [x] Provider resolution uses explicit provider from config (no model prefix parsing)
6. [x] Typecheck and lint pass (0 errors)
7. [x] Test onboard and agent commands work correctly
8. [x] Updated Ollama provider from `ollama-ai-provider` to `ai-sdk-ollama`
9. [ ] Test with a real Mattermost config (optional — user can do this)

## Key Decisions Made
- Mattermost channel uses raw WebSocket + fetch (no mattermostdriver, no SSL hack)
- No MCP support (use shell tools / CLI instead)
- No reasoning/thinking token handling (can add later)
- Config is a fresh Zod schema (no migration from the Python config needed)
- `ai-sdk-ollama` package for Ollama provider (replaced old `ollama-ai-provider`)
- `strArg(args, key, fallback?)` helper exported from `agent/tools/base.ts` for safe unknown→string extraction
- Agent config requires explicit `provider` field (no more model prefix like "anthropic/claude-...")
- Model names are now just the raw model ID (e.g., "claude-sonnet-4-5" not "anthropic/claude-sonnet-4-5")

24 memory-bank/productContext.md Normal file
@@ -0,0 +1,24 @@

# Product Context

## What nanobot does
A personal AI assistant that connects to Mattermost (via WebSocket) and runs an agent loop. It receives messages, uses an LLM to decide what to do, calls tools, and replies. It can also run on a schedule (cron) and act proactively on a heartbeat timer.

## Core user experience
- User sends a message in Mattermost (DM or channel)
- Bot uses the configured LLM, calls tools as needed, and replies in the same thread
- Long conversations are automatically consolidated into memory files on disk
- Custom "skills" (markdown instruction files) extend the bot's capabilities

## Key design principles (from the Python codebase)
- Ultra-lightweight: minimal dependencies, small codebase
- Provider-agnostic: works with Anthropic, OpenAI, Google, Ollama, OpenRouter
- Workspace-centric: everything lives in a configurable workspace directory (`~/.config/nanobot/`)
- SOUL/AGENTS/USER/TOOLS.md: workspace markdown files that define the bot's personality and rules
- Memory is just markdown files (`MEMORY.md`, `HISTORY.md`) — no database

## Runtime mental model
```
Mattermost WS ──► inbound queue ──► AgentLoop ──► LLM + tools ──► outbound queue ──► Mattermost REST
                        ▲
                        │
   CronService + HeartbeatService (inject messages into inbound queue)
```

30 memory-bank/progress.md Normal file
@@ -0,0 +1,30 @@

# Progress

## Milestones

### ✅ Done
- Repo created at `/home/sebby/repos/nanobot-ts`
- Tooling configured: `oxfmt` (single quotes), `oxlint`, `@types/bun`, strict tsconfig
- All dependencies installed
- `src/` directory structure scaffolded
- Memory bank initialized
- All source files written (first pass)
- Templates and skills copied from the Python repo
- **Full typecheck pass**: `tsc --noEmit` → 0 errors
- **Full lint pass**: `oxlint` → 0 errors, 0 warnings
- `package.json` scripts added: `start`, `dev`, `typecheck`
- **Docs created**: `/docs/PRD.md`, `Architecture.md`, `API.md`, `Discoveries.md`
- **Onboard command**: Created `src/cli/onboard.ts` with workspace initialization
- **Provider config**: Added required `provider` field to agent config
- **Workspace validation**: Agent/gateway commands throw if the workspace doesn't exist

### 🔄 In Progress
- Nothing

### ⏳ Pending
- Integration test with a real Mattermost server

## Known Issues / Risks
- `ollama-ai-provider` v1.2.0 returns `LanguageModelV1` (not V2/V3 as expected by AI SDK v6) — cast used at the call site. Works at runtime.
- Zod v4 `.default()` on nested object schemas requires a factory function returning the full output type (not just `{}`). All instances fixed.
- AI SDK v6 changed `maxSteps` → `stopWhen: stepCountIs(n)` and `usage.promptTokens/completionTokens` → `usage.inputTokens/outputTokens`.

22 memory-bank/projectBrief.md Normal file
@@ -0,0 +1,22 @@

# Project Brief

## What
A full port of the Python `nanobot` personal AI agent framework to Bun/TypeScript.

## Why
The owner doesn't know Python and doesn't want to maintain a Python codebase for a small personal project. The port should be idiomatic TypeScript, easier to read, with fewer dependencies.

## Scope
- **In**: All core features — agent loop, tools, memory/consolidation, skills, sessions, cron, heartbeat, Mattermost channel (WebSocket + REST), config, CLI.
- **Out**: All non-Mattermost channels (Telegram, Discord, Slack, etc.), MCP client support, extended thinking/reasoning tokens, onboard wizard.

## Source of Truth
The original Python implementation lives at `/home/sebby/repos/nanobot`. All porting decisions should be verified against it.

## Constraints
- Bun runtime only (no Node-specific APIs where Bun has equivalents)
- All runtime types written as Zod schemas in dedicated `types.ts` files; those files also export the inferred TS types
- `picocolors` for terminal color (not chalk)
- `oxfmt` for formatting (single quotes), `oxlint` for linting
- Vercel AI SDK (`ai` package) for LLM abstraction
- Mattermost: raw `WebSocket` + `fetch` (no driver library)

100 memory-bank/systemPatterns.md Normal file
@@ -0,0 +1,100 @@

# System Patterns

## Type Definition Pattern
Every module with runtime-validated data uses a dedicated `types.ts` file:
```
src/config/types.ts   ← Zod schemas + `export type X = z.infer<typeof XSchema>`
src/bus/types.ts
src/session/types.ts
src/cron/types.ts
src/provider/types.ts
```
The implementation files (`loader.ts`, `manager.ts`, etc.) import types from the sibling `types.ts`.

## Tool Pattern
All tools implement the `Tool` interface from `src/agent/tools/base.ts`:
```ts
interface Tool {
  name: string
  description: string
  parameters: Record<string, unknown> // JSON Schema object
  execute(args: Record<string, unknown>): Promise<string>
}
```
`ToolRegistry` stores tools by name, exposes `getDefinitions()` (OpenAI function-calling format), and `execute(name, args)`.
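
A minimal registry matching that description could look like this sketch (illustrative; the real class lives in `src/agent/tools/base.ts` and its exact signatures may differ):

```ts
// Sketch of ToolRegistry: store tools by name, expose definitions, dispatch execute.
interface Tool {
  name: string;
  description: string;
  parameters: Record<string, unknown>; // JSON Schema object
  execute(args: Record<string, unknown>): Promise<string>;
}

class ToolRegistry {
  private tools = new Map<string, Tool>();

  register(tool: Tool): void {
    this.tools.set(tool.name, tool);
  }

  // Shape loosely follows the OpenAI function-calling format.
  getDefinitions(): Array<{ name: string; description: string; parameters: Record<string, unknown> }> {
    return [...this.tools.values()].map(({ name, description, parameters }) => ({
      name,
      description,
      parameters,
    }));
  }

  async execute(name: string, args: Record<string, unknown>): Promise<string> {
    const tool = this.tools.get(name);
    if (!tool) return `Unknown tool: ${name}`; // returned as a tool result, not thrown
    return tool.execute(args);
  }
}
```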

## Message Bus Pattern
Inbound and outbound messages pass through a typed `AsyncQueue<T>`. The queue uses a `Promise`-based dequeue that resolves when an item is available (mirrors Python's `asyncio.Queue`).
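
A queue with that shape can be sketched in a few lines (illustrative; the real one is in `src/bus/queue.ts`):

```ts
// Sketch: async queue whose dequeue resolves once an item arrives.
class AsyncQueue<T> {
  private items: T[] = [];
  private waiters: Array<(item: T) => void> = [];

  enqueue(item: T): void {
    const waiter = this.waiters.shift();
    if (waiter) waiter(item); // hand directly to a pending dequeue
    else this.items.push(item);
  }

  dequeue(): Promise<T> {
    const item = this.items.shift();
    if (item !== undefined) return Promise.resolve(item);
    return new Promise((resolve) => this.waiters.push(resolve));
  }
}
```

If `dequeue()` is called before anything is enqueued, the returned promise simply stays pending until `enqueue()` fires its resolver — that is the `asyncio.Queue`-like behavior the pattern relies on.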

## Provider Pattern
`LLMProvider` (`src/provider/index.ts`) wraps the Vercel AI SDK `generateText()`. It:
- Accepts a model string and resolves it to the correct AI SDK provider instance
- Implements `chatWithRetry()` with 3 attempts on transient errors (429, 5xx, timeout)
- Repairs malformed tool-call JSON with `jsonrepair`
- Returns a normalized `LLMResponse` type

## Config Pattern
- Config file: `~/.config/nanobot/config.json` (camelCase JSON)
- Loaded with `loadConfig()`, validated by Zod, returns the inferred `Config` type
- `NANOBOT_` env vars can override fields (e.g. `NANOBOT_MODEL`)

## Mattermost Channel Pattern
- Inbound: native `WebSocket` connecting to `wss://{server}/api/v4/websocket`, auth via hello message
- Outbound: `fetch()` to `POST /api/v4/posts`
- Session key: `mattermost:{channelId}` (or `mattermost:{channelId}:{rootId}` when `replyInThread`)

## Session Key Convention
`{channel}:{chatId}` — e.g. `mattermost:abc123`, `cli:direct`

## Logging Pattern
Use `console.error` / `console.warn` / `console.info` / `console.debug` — no external logger. Color via `picocolors` in CLI output only.

## CLI Command Pattern
Each command lives in its own file with a registration function:
```ts
// src/cli/agent.ts
export function agentCommand(program: Command, config: Config, workspace: string): void {
  program.command('agent')
    .description('...')
    .option('-m, --message <text>', 'Single message to process')
    .action(async (opts) => { /* ... */ });
}

// src/cli/commands.ts (bootstrap)
export function createCli(): Command {
  const program = new Command('nanobot')...
  const config = loadConfig(opts.config);
  const workspace = resolveWorkspacePath(config.agent.workspacePath);
  gatewayCommand(program, config, workspace);
  agentCommand(program, config, workspace);
  return program;
}
```

## File Layout
```
src/
  config/types.ts + loader.ts
  bus/types.ts + queue.ts
  provider/types.ts + index.ts
  session/types.ts + manager.ts
  cron/types.ts + service.ts
  heartbeat/service.ts
  agent/
    memory.ts
    skills.ts
    context.ts
    loop.ts
    subagent.ts
    tools/base.ts + filesystem.ts + shell.ts + web.ts + message.ts + spawn.ts + cron.ts
  channels/
    base.ts + mattermost.ts + manager.ts
  cli/
    types.ts     # CommandHandler type
    commands.ts  # Bootstrap - loads config, registers commands
    agent.ts     # agentCommand() - interactive/single-shot mode
    gateway.ts   # gatewayCommand() - full runtime with Mattermost
  index.ts
templates/ (SOUL.md, AGENTS.md, USER.md, TOOLS.md, HEARTBEAT.md, memory/MEMORY.md)
skills/ (copied from Python repo)
```

23 package.json
@@ -4,13 +4,34 @@
  "type": "module",
  "module": "index.ts",
  "scripts": {
    "nanobot": "bun run index.ts",
    "dev": "bun --watch run index.ts",
    "typecheck": "tsc --noEmit",
    "fmt": "oxfmt --check",
    "fmt:fix": "oxfmt",
    "lint": "oxlint",
    "lint:fix": "oxlint --fix",
    "checks": "bun run lint && bun run fmt"
  },
  "dependencies": {
    "@ai-sdk/anthropic": "^3.0.58",
    "@ai-sdk/google": "^3.0.43",
    "@ai-sdk/openai": "^3.0.41",
    "@mozilla/readability": "^0.6.0",
    "@openrouter/ai-sdk-provider": "^2.3.0",
    "ai": "^6.0.116",
    "ai-sdk-ollama": "^3.8.0",
    "commander": "^14.0.3",
    "cron-parser": "^5.5.0",
    "js-tiktoken": "^1.0.21",
    "jsonrepair": "^3.13.3",
    "node-html-parser": "^7.1.0",
    "picocolors": "^1.1.1",
    "zod": "^4.3.6"
  },
  "devDependencies": {
    "@types/bun": "latest",
    "@types/mozilla__readability": "^0.4.2",
    "oxfmt": "^0.40.0",
    "oxlint": "^1.55.0",
    "oxlint-tsgolint": "^0.16.0"

25 skills/README.md Normal file
@@ -0,0 +1,25 @@

# nanobot Skills

This directory contains built-in skills that extend nanobot's capabilities.

## Skill Format

Each skill is a directory containing a `SKILL.md` file with:
- YAML frontmatter (name, description, metadata)
- Markdown instructions for the agent

## Attribution

These skills are adapted from [OpenClaw](https://github.com/openclaw/openclaw)'s skill system.
The skill format and metadata structure follow OpenClaw's conventions to maintain compatibility.

## Available Skills

| Skill | Description |
|-------|-------------|
| `github` | Interact with GitHub using the `gh` CLI |
| `weather` | Get weather info using wttr.in and Open-Meteo |
| `summarize` | Summarize URLs, files, and YouTube videos |
| `tmux` | Remote-control tmux sessions |
| `clawhub` | Search and install skills from the ClawHub registry |
| `skill-creator` | Create new skills |

53 skills/clawhub/SKILL.md Normal file
@@ -0,0 +1,53 @@

---
name: clawhub
description: Search and install agent skills from ClawHub, the public skill registry.
homepage: https://clawhub.ai
metadata: {"nanobot":{"emoji":"🦞"}}
---

# ClawHub

Public skill registry for AI agents. Search by natural language (vector search).

## When to use

Use this skill when the user asks any of:
- "find a skill for …"
- "search for skills"
- "install a skill"
- "what skills are available?"
- "update my skills"

## Search

```bash
npx --yes clawhub@latest search "web scraping" --limit 5
```

## Install

```bash
npx --yes clawhub@latest install <slug> --workdir ~/.config/nanobot/workspace
```

Replace `<slug>` with the skill name from the search results. This places the skill into `~/.config/nanobot/workspace/skills/`, where nanobot loads workspace skills from. Always include `--workdir`.

## Update

```bash
npx --yes clawhub@latest update --all --workdir ~/.config/nanobot/workspace
```

## List installed

```bash
npx --yes clawhub@latest list --workdir ~/.config/nanobot/workspace
```

## Notes

- Requires Node.js (`npx` comes with it).
- No API key needed for search and install.
- Login (`npx --yes clawhub@latest login`) is only required for publishing.
- `--workdir ~/.config/nanobot/workspace` is critical — without it, skills install into the current directory instead of the nanobot workspace.
- After install, remind the user to start a new session to load the skill.
skills/cron/SKILL.md (Normal file, 57 lines)
@@ -0,0 +1,57 @@
---
name: cron
description: Schedule reminders and recurring tasks.
---

# Cron

Use the `cron` tool to schedule reminders or recurring tasks.

## Three Modes

1. **Reminder** - message is sent directly to the user
2. **Task** - message is a task description; the agent executes it and sends the result
3. **One-time** - runs once at a specific time, then auto-deletes

## Examples

Fixed reminder:
```
cron(action="add", message="Time to take a break!", every_seconds=1200)
```

Dynamic task (agent executes each time):
```
cron(action="add", message="Check HKUDS/nanobot GitHub stars and report", every_seconds=600)
```

One-time scheduled task (compute the ISO datetime from the current time):
```
cron(action="add", message="Remind me about the meeting", at="<ISO datetime>")
```

Timezone-aware cron:
```
cron(action="add", message="Morning standup", cron_expr="0 9 * * 1-5", tz="America/Vancouver")
```

List/remove:
```
cron(action="list")
cron(action="remove", job_id="abc123")
```

## Time Expressions

| User says | Parameters |
|-----------|------------|
| every 20 minutes | every_seconds: 1200 |
| every hour | every_seconds: 3600 |
| every day at 8am | cron_expr: "0 8 * * *" |
| weekdays at 5pm | cron_expr: "0 17 * * 1-5" |
| 9am Vancouver time daily | cron_expr: "0 9 * * *", tz: "America/Vancouver" |
| at a specific time | at: ISO datetime string (compute from current time) |

## Timezone

Use `tz` with `cron_expr` to schedule in a specific IANA timezone. Without `tz`, the server's local timezone is used.
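For the one-time `at=` parameter, the ISO datetime can be computed in the shell before calling the tool. A minimal sketch (assumes GNU `date` on Linux, falling back to BSD `date` syntax on macOS; the 2-hour offset is illustrative):

```bash
# Print an ISO datetime N hours from now, suitable for cron(at="...").
hours=2
date -d "+${hours} hours" '+%Y-%m-%dT%H:%M:%S' 2>/dev/null \
  || date -v "+${hours}H" '+%Y-%m-%dT%H:%M:%S'   # BSD date (macOS)
```

The output (e.g. `2025-06-01T14:30:00`) drops straight into the `at=` argument.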
skills/github/SKILL.md (Normal file, 48 lines)
@@ -0,0 +1,48 @@
---
name: github
description: "Interact with GitHub using the `gh` CLI. Use `gh issue`, `gh pr`, `gh run`, and `gh api` for issues, PRs, CI runs, and advanced queries."
metadata: {"nanobot":{"emoji":"🐙","requires":{"bins":["gh"]},"install":[{"id":"brew","kind":"brew","formula":"gh","bins":["gh"],"label":"Install GitHub CLI (brew)"},{"id":"apt","kind":"apt","package":"gh","bins":["gh"],"label":"Install GitHub CLI (apt)"}]}}
---

# GitHub Skill

Use the `gh` CLI to interact with GitHub. Always specify `--repo owner/repo` when not in a git directory, or use URLs directly.

## Pull Requests

Check CI status on a PR:
```bash
gh pr checks 55 --repo owner/repo
```

List recent workflow runs:
```bash
gh run list --repo owner/repo --limit 10
```

View a run and see which steps failed:
```bash
gh run view <run-id> --repo owner/repo
```

View logs for failed steps only:
```bash
gh run view <run-id> --repo owner/repo --log-failed
```

## API for Advanced Queries

The `gh api` command is useful for accessing data not available through other subcommands.

Get PR with specific fields:
```bash
gh api repos/owner/repo/pulls/55 --jq '.title, .state, .user.login'
```

## JSON Output

Most commands support `--json` for structured output. You can use `--jq` to filter:

```bash
gh issue list --repo owner/repo --json number,title --jq '.[] | "\(.number): \(.title)"'
```
skills/memory/SKILL.md (Normal file, 37 lines)
@@ -0,0 +1,37 @@
---
name: memory
description: Two-layer memory system with grep-based recall.
always: true
---

# Memory

## Structure

- `memory/MEMORY.md` — Long-term facts (preferences, project context, relationships). Always loaded into your context.
- `memory/HISTORY.md` — Append-only event log. NOT loaded into context. Search it with grep-style tools or in-memory filters. Each entry starts with [YYYY-MM-DD HH:MM].

## Search Past Events

Choose the search method based on file size:

- Small `memory/HISTORY.md`: use `read_file`, then search in-memory
- Large or long-lived `memory/HISTORY.md`: use the `exec` tool for targeted search

Examples:
- **Linux/macOS:** `grep -i "keyword" memory/HISTORY.md`
- **Windows:** `findstr /i "keyword" memory\HISTORY.md`
- **Cross-platform Python:** `python -c "from pathlib import Path; text = Path('memory/HISTORY.md').read_text(encoding='utf-8'); print('\n'.join([l for l in text.splitlines() if 'keyword' in l.lower()][-20:]))"`

Prefer targeted command-line search for large history files.

## When to Update MEMORY.md

Write important facts immediately using `edit_file` or `write_file`:
- User preferences ("I prefer dark mode")
- Project context ("The API uses OAuth2")
- Relationships ("Alice is the project lead")

## Auto-consolidation

Old conversations are automatically summarized and appended to HISTORY.md when the session grows large. Long-term facts are extracted to MEMORY.md. You don't need to manage this.
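The `[YYYY-MM-DD HH:MM]` entry format described above is easy to reproduce when appending events by hand. A sketch (the entry text is illustrative):

```bash
# Append a timestamped entry in the format HISTORY.md uses,
# then recall it with a case-insensitive grep.
mkdir -p memory
printf '[%s] %s\n' "$(date '+%Y-%m-%d %H:%M')" "User prefers dark mode" >> memory/HISTORY.md
grep -i "dark mode" memory/HISTORY.md
```

Because every entry leads with the timestamp, date-range queries also reduce to a grep on the `[YYYY-MM-...]` prefix.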
skills/summarize/SKILL.md (Normal file, 67 lines)
@@ -0,0 +1,67 @@
---
name: summarize
description: Summarize or extract text/transcripts from URLs, podcasts, and local files (great fallback for “transcribe this YouTube/video”).
homepage: https://summarize.sh
metadata: {"nanobot":{"emoji":"🧾","requires":{"bins":["summarize"]},"install":[{"id":"brew","kind":"brew","formula":"steipete/tap/summarize","bins":["summarize"],"label":"Install summarize (brew)"}]}}
---

# Summarize

Fast CLI to summarize URLs, local files, and YouTube links.

## When to use (trigger phrases)

Use this skill immediately when the user asks any of:
- “use summarize.sh”
- “what’s this link/video about?”
- “summarize this URL/article”
- “transcribe this YouTube/video” (best-effort transcript extraction; no `yt-dlp` needed)

## Quick start

```bash
summarize "https://example.com" --model google/gemini-3-flash-preview
summarize "/path/to/file.pdf" --model google/gemini-3-flash-preview
summarize "https://youtu.be/dQw4w9WgXcQ" --youtube auto
```

## YouTube: summary vs transcript

Best-effort transcript (URLs only):

```bash
summarize "https://youtu.be/dQw4w9WgXcQ" --youtube auto --extract-only
```

If the user asked for a transcript but it’s huge, return a tight summary first, then ask which section/time range to expand.

## Model + keys

Set the API key for your chosen provider:
- OpenAI: `OPENAI_API_KEY`
- Anthropic: `ANTHROPIC_API_KEY`
- xAI: `XAI_API_KEY`
- Google: `GEMINI_API_KEY` (aliases: `GOOGLE_GENERATIVE_AI_API_KEY`, `GOOGLE_API_KEY`)

Default model is `google/gemini-3-flash-preview` if none is set.

## Useful flags

- `--length short|medium|long|xl|xxl|<chars>`
- `--max-output-tokens <count>`
- `--extract-only` (URLs only)
- `--json` (machine readable)
- `--firecrawl auto|off|always` (fallback extraction)
- `--youtube auto` (Apify fallback if `APIFY_API_TOKEN` set)

## Config

Optional config file: `~/.summarize/config.json`

```json
{ "model": "openai/gpt-5.2" }
```

Optional services:
- `FIRECRAWL_API_KEY` for blocked sites
- `APIFY_API_TOKEN` for YouTube fallback
skills/tmux/SKILL.md (Normal file, 121 lines)
@@ -0,0 +1,121 @@
---
name: tmux
description: Remote-control tmux sessions for interactive CLIs by sending keystrokes and scraping pane output.
metadata: {"nanobot":{"emoji":"🧵","os":["darwin","linux"],"requires":{"bins":["tmux"]}}}
---

# tmux Skill

Use tmux only when you need an interactive TTY. Prefer exec background mode for long-running, non-interactive tasks.

## Quickstart (isolated socket, exec tool)

```bash
SOCKET_DIR="${NANOBOT_TMUX_SOCKET_DIR:-${TMPDIR:-/tmp}/nanobot-tmux-sockets}"
mkdir -p "$SOCKET_DIR"
SOCKET="$SOCKET_DIR/nanobot.sock"
SESSION=nanobot-python

tmux -S "$SOCKET" new -d -s "$SESSION" -n shell
tmux -S "$SOCKET" send-keys -t "$SESSION":0.0 -- 'PYTHON_BASIC_REPL=1 python3 -q' Enter
tmux -S "$SOCKET" capture-pane -p -J -t "$SESSION":0.0 -S -200
```

After starting a session, always print monitor commands:

```
To monitor:
tmux -S "$SOCKET" attach -t "$SESSION"
tmux -S "$SOCKET" capture-pane -p -J -t "$SESSION":0.0 -S -200
```

## Socket convention

- Use the `NANOBOT_TMUX_SOCKET_DIR` environment variable.
- Default socket path: `"$NANOBOT_TMUX_SOCKET_DIR/nanobot.sock"`.

## Targeting panes and naming

- Target format: `session:window.pane` (defaults to `:0.0`).
- Keep names short; avoid spaces.
- Inspect: `tmux -S "$SOCKET" list-sessions`, `tmux -S "$SOCKET" list-panes -a`.

## Finding sessions

- List sessions on your socket: `{baseDir}/scripts/find-sessions.sh -S "$SOCKET"`.
- Scan all sockets: `{baseDir}/scripts/find-sessions.sh --all` (uses `NANOBOT_TMUX_SOCKET_DIR`).

## Sending input safely

- Prefer literal sends: `tmux -S "$SOCKET" send-keys -t target -l -- "$cmd"`.
- Control keys: `tmux -S "$SOCKET" send-keys -t target C-c`.

## Watching output

- Capture recent history: `tmux -S "$SOCKET" capture-pane -p -J -t target -S -200`.
- Wait for prompts: `{baseDir}/scripts/wait-for-text.sh -t session:0.0 -p 'pattern'`.
- Attaching is OK; detach with `Ctrl+b d`.

## Spawning processes

- For Python REPLs, set `PYTHON_BASIC_REPL=1` (the non-basic REPL breaks send-keys flows).

## Windows / WSL

- tmux is supported on macOS/Linux. On Windows, use WSL and install tmux inside WSL.
- This skill is gated to `darwin`/`linux` and requires `tmux` on PATH.

## Orchestrating Coding Agents (Codex, Claude Code)

tmux excels at running multiple coding agents in parallel:

```bash
SOCKET="${TMPDIR:-/tmp}/codex-army.sock"

# Create multiple sessions
for i in 1 2 3 4 5; do
  tmux -S "$SOCKET" new-session -d -s "agent-$i"
done

# Launch agents in different workdirs
tmux -S "$SOCKET" send-keys -t agent-1 "cd /tmp/project1 && codex --yolo 'Fix bug X'" Enter
tmux -S "$SOCKET" send-keys -t agent-2 "cd /tmp/project2 && codex --yolo 'Fix bug Y'" Enter

# Poll for completion (check if prompt returned)
for sess in agent-1 agent-2; do
  if tmux -S "$SOCKET" capture-pane -p -t "$sess" -S -3 | grep -q "❯"; then
    echo "$sess: DONE"
  else
    echo "$sess: Running..."
  fi
done

# Get full output from completed session
tmux -S "$SOCKET" capture-pane -p -t agent-1 -S -500
```

**Tips:**
- Use separate git worktrees for parallel fixes (no branch conflicts)
- Run `pnpm install` before launching codex in fresh clones
- Check for a shell prompt (`❯` or `$`) to detect completion
- Codex needs `--yolo` or `--full-auto` for non-interactive fixes

## Cleanup

- Kill a session: `tmux -S "$SOCKET" kill-session -t "$SESSION"`.
- Kill all sessions on a socket: `tmux -S "$SOCKET" list-sessions -F '#{session_name}' | xargs -r -n1 tmux -S "$SOCKET" kill-session -t`.
- Remove everything on the private socket: `tmux -S "$SOCKET" kill-server`.

## Helper: wait-for-text.sh

`{baseDir}/scripts/wait-for-text.sh` polls a pane for a regex (or fixed string) with a timeout.

```bash
{baseDir}/scripts/wait-for-text.sh -t session:0.0 -p 'pattern' [-F] [-T 20] [-i 0.5] [-l 2000]
```

- `-t`/`--target` pane target (required)
- `-p`/`--pattern` regex to match (required); add `-F` for fixed string
- `-T` timeout seconds (integer, default 15)
- `-i` poll interval seconds (default 0.5)
- `-l` history lines to search (integer, default 1000)
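Stripped of tmux, the wait-for-text helper is a deadline-poll loop: re-check the latest output until a pattern appears or the deadline passes. A self-contained sketch, with a plain `echo` standing in for `capture-pane`:

```bash
# Deadline polling: re-check output until a pattern appears or time runs out.
timeout=3
deadline=$(( $(date +%s) + timeout ))
found=1   # shell convention: 0 = success
while (( $(date +%s) < deadline )); do
  # Stand-in for: tmux -S "$SOCKET" capture-pane -p -t target -S -200
  if echo "build complete" | grep -q "complete"; then
    found=0
    break
  fi
  sleep 0.2
done
echo "found=$found"
```

The same skeleton underlies polling any interactive pane; only the capture command changes.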
skills/tmux/scripts/find-sessions.sh (Executable file, 112 lines)
@@ -0,0 +1,112 @@
#!/usr/bin/env bash
set -euo pipefail

usage() {
cat <<'USAGE'
Usage: find-sessions.sh [-L socket-name|-S socket-path|-A] [-q pattern]

List tmux sessions on a socket (default tmux socket if none provided).

Options:
  -L, --socket       tmux socket name (passed to tmux -L)
  -S, --socket-path  tmux socket path (passed to tmux -S)
  -A, --all          scan all sockets under NANOBOT_TMUX_SOCKET_DIR
  -q, --query        case-insensitive substring to filter session names
  -h, --help         show this help
USAGE
}

socket_name=""
socket_path=""
query=""
scan_all=false
socket_dir="${NANOBOT_TMUX_SOCKET_DIR:-${TMPDIR:-/tmp}/nanobot-tmux-sockets}"

while [[ $# -gt 0 ]]; do
  case "$1" in
    -L|--socket) socket_name="${2-}"; shift 2 ;;
    -S|--socket-path) socket_path="${2-}"; shift 2 ;;
    -A|--all) scan_all=true; shift ;;
    -q|--query) query="${2-}"; shift 2 ;;
    -h|--help) usage; exit 0 ;;
    *) echo "Unknown option: $1" >&2; usage; exit 1 ;;
  esac
done

if [[ "$scan_all" == true && ( -n "$socket_name" || -n "$socket_path" ) ]]; then
  echo "Cannot combine --all with -L or -S" >&2
  exit 1
fi

if [[ -n "$socket_name" && -n "$socket_path" ]]; then
  echo "Use either -L or -S, not both" >&2
  exit 1
fi

if ! command -v tmux >/dev/null 2>&1; then
  echo "tmux not found in PATH" >&2
  exit 1
fi

list_sessions() {
  local label="$1"; shift
  local tmux_cmd=(tmux "$@")

  if ! sessions="$("${tmux_cmd[@]}" list-sessions -F '#{session_name}\t#{session_attached}\t#{session_created_string}' 2>/dev/null)"; then
    echo "No tmux server found on $label" >&2
    return 1
  fi

  if [[ -n "$query" ]]; then
    sessions="$(printf '%s\n' "$sessions" | grep -i -- "$query" || true)"
  fi

  if [[ -z "$sessions" ]]; then
    echo "No sessions found on $label"
    return 0
  fi

  echo "Sessions on $label:"
  printf '%s\n' "$sessions" | while IFS=$'\t' read -r name attached created; do
    attached_label=$([[ "$attached" == "1" ]] && echo "attached" || echo "detached")
    printf ' - %s (%s, started %s)\n' "$name" "$attached_label" "$created"
  done
}

if [[ "$scan_all" == true ]]; then
  if [[ ! -d "$socket_dir" ]]; then
    echo "Socket directory not found: $socket_dir" >&2
    exit 1
  fi

  shopt -s nullglob
  sockets=("$socket_dir"/*)
  shopt -u nullglob

  if [[ "${#sockets[@]}" -eq 0 ]]; then
    echo "No sockets found under $socket_dir" >&2
    exit 1
  fi

  exit_code=0
  for sock in "${sockets[@]}"; do
    if [[ ! -S "$sock" ]]; then
      continue
    fi
    list_sessions "socket path '$sock'" -S "$sock" || exit_code=$?
  done
  exit "$exit_code"
fi

tmux_cmd=(tmux)
socket_label="default socket"

if [[ -n "$socket_name" ]]; then
  tmux_cmd+=(-L "$socket_name")
  socket_label="socket name '$socket_name'"
elif [[ -n "$socket_path" ]]; then
  tmux_cmd+=(-S "$socket_path")
  socket_label="socket path '$socket_path'"
fi

list_sessions "$socket_label" "${tmux_cmd[@]:1}"
skills/tmux/scripts/wait-for-text.sh (Executable file, 83 lines)
@@ -0,0 +1,83 @@
#!/usr/bin/env bash
set -euo pipefail

usage() {
cat <<'USAGE'
Usage: wait-for-text.sh -t target -p pattern [options]

Poll a tmux pane for text and exit when found.

Options:
  -t, --target    tmux target (session:window.pane), required
  -p, --pattern   regex pattern to look for, required
  -F, --fixed     treat pattern as a fixed string (grep -F)
  -T, --timeout   seconds to wait (integer, default: 15)
  -i, --interval  poll interval in seconds (default: 0.5)
  -l, --lines     number of history lines to inspect (integer, default: 1000)
  -h, --help      show this help
USAGE
}

target=""
pattern=""
grep_flag="-E"
timeout=15
interval=0.5
lines=1000

while [[ $# -gt 0 ]]; do
  case "$1" in
    -t|--target) target="${2-}"; shift 2 ;;
    -p|--pattern) pattern="${2-}"; shift 2 ;;
    -F|--fixed) grep_flag="-F"; shift ;;
    -T|--timeout) timeout="${2-}"; shift 2 ;;
    -i|--interval) interval="${2-}"; shift 2 ;;
    -l|--lines) lines="${2-}"; shift 2 ;;
    -h|--help) usage; exit 0 ;;
    *) echo "Unknown option: $1" >&2; usage; exit 1 ;;
  esac
done

if [[ -z "$target" || -z "$pattern" ]]; then
  echo "target and pattern are required" >&2
  usage
  exit 1
fi

if ! [[ "$timeout" =~ ^[0-9]+$ ]]; then
  echo "timeout must be an integer number of seconds" >&2
  exit 1
fi

if ! [[ "$lines" =~ ^[0-9]+$ ]]; then
  echo "lines must be an integer" >&2
  exit 1
fi

if ! command -v tmux >/dev/null 2>&1; then
  echo "tmux not found in PATH" >&2
  exit 1
fi

# End time in epoch seconds (integer, good enough for polling)
start_epoch=$(date +%s)
deadline=$((start_epoch + timeout))

while true; do
  # -J joins wrapped lines, -S uses negative index to read last N lines
  pane_text="$(tmux capture-pane -p -J -t "$target" -S "-${lines}" 2>/dev/null || true)"

  if printf '%s\n' "$pane_text" | grep $grep_flag -- "$pattern" >/dev/null 2>&1; then
    exit 0
  fi

  now=$(date +%s)
  if (( now >= deadline )); then
    echo "Timed out after ${timeout}s waiting for pattern: $pattern" >&2
    echo "Last ${lines} lines from $target:" >&2
    printf '%s\n' "$pane_text" >&2
    exit 1
  fi

  sleep "$interval"
done
skills/weather/SKILL.md (Normal file, 49 lines)
@@ -0,0 +1,49 @@
---
name: weather
description: Get current weather and forecasts (no API key required).
homepage: https://wttr.in/:help
metadata: {"nanobot":{"emoji":"🌤️","requires":{"bins":["curl"]}}}
---

# Weather

Two free services, no API keys needed.

## wttr.in (primary)

Quick one-liner:
```bash
curl -s "wttr.in/London?format=3"
# Output: London: ⛅️ +8°C
```

Compact format:
```bash
curl -s "wttr.in/London?format=%l:+%c+%t+%h+%w"
# Output: London: ⛅️ +8°C 71% ↙5km/h
```

Full forecast:
```bash
curl -s "wttr.in/London?T"
```

Format codes: `%c` condition · `%t` temp · `%h` humidity · `%w` wind · `%l` location · `%m` moon

Tips:
- URL-encode spaces: `wttr.in/New+York`
- Airport codes: `wttr.in/JFK`
- Units: `?m` (metric) `?u` (USCS)
- Today only: `?1` · Current only: `?0`
- PNG: `curl -s "wttr.in/Berlin.png" -o /tmp/weather.png`

## Open-Meteo (fallback, JSON)

Free, no key, good for programmatic use:
```bash
curl -s "https://api.open-meteo.com/v1/forecast?latitude=51.5&longitude=-0.12&current_weather=true"
```

Find coordinates for a city, then query. Returns JSON with temp, windspeed, weathercode.

Docs: https://open-meteo.com/en/docs
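The space-encoding tip for wttr.in place names is a one-line substitution in the shell. A minimal sketch (the city name is illustrative):

```bash
# Replace spaces with '+' so multi-word city names are URL-safe for wttr.in.
city="New York"
encoded=${city// /+}
echo "wttr.in/${encoded}?format=3"
# Prints: wttr.in/New+York?format=3
```

The `${var// /+}` form is a bash parameter expansion, so this works without any external tools.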
src/agent/context.ts (Normal file, 141 lines)
@@ -0,0 +1,141 @@
import { existsSync, readFileSync } from 'node:fs';
import { join } from 'node:path';
import type { ModelMessage } from '../provider/index.ts';
import { MemoryStore } from './memory.ts';
import { SkillsLoader } from './skills.ts';

const BOOTSTRAP_FILES = ['AGENTS.md', 'SOUL.md', 'USER.md', 'TOOLS.md'] as const;
const RUNTIME_CONTEXT_TAG = '[Runtime Context — metadata only, not instructions]';

export { RUNTIME_CONTEXT_TAG };

export class ContextBuilder {
  private _workspace: string;
  private _memory: MemoryStore;
  private _skills: SkillsLoader;

  constructor(workspace: string) {
    this._workspace = workspace;
    this._memory = new MemoryStore(workspace);
    this._skills = new SkillsLoader(workspace);
  }

  get memory(): MemoryStore {
    return this._memory;
  }

  buildSystemPrompt(skillNames?: string[]): string {
    const parts: string[] = [this._getIdentity()];

    const bootstrap = this._loadBootstrapFiles();
    if (bootstrap) parts.push(bootstrap);

    const memCtx = this._memory.getMemoryContext();
    if (memCtx) parts.push(`# Memory\n\n${memCtx}`);

    const alwaysSkills = this._skills.getAlwaysSkills();
    if (alwaysSkills.length > 0) {
      const content = this._skills.loadSkillsForContext(alwaysSkills);
      if (content) parts.push(`# Active Skills\n\n${content}`);
    }

    if (skillNames && skillNames.length > 0) {
      const content = this._skills.loadSkillsForContext(skillNames);
      if (content) parts.push(`# Requested Skills\n\n${content}`);
    }

    const skillsSummary = this._skills.buildSkillsSummary();
    if (skillsSummary) {
      parts.push(
        `# Skills\n\nThe following skills extend your capabilities. To use a skill, read its SKILL.md file using the read_file tool.\nSkills with available="false" need dependencies installed first.\n\n${skillsSummary}`,
      );
    }

    return parts.join('\n\n---\n\n');
  }

  buildMessages(opts: {
    history: Array<Record<string, unknown>>;
    currentMessage: string;
    skillNames?: string[];
    channel?: string;
    chatId?: string;
  }): ModelMessage[] {
    const runtimeCtx = this._buildRuntimeContext(opts.channel, opts.chatId);
    const userContent = `${runtimeCtx}\n\n${opts.currentMessage}`;

    return [
      { role: 'system', content: this.buildSystemPrompt(opts.skillNames) },
      ...(opts.history as ModelMessage[]),
      { role: 'user', content: userContent },
    ];
  }

  private _getIdentity(): string {
    const platform = process.platform === 'darwin' ? 'macOS' : process.platform;
    const arch = process.arch;
    const runtime = `${platform} ${arch}, Bun ${Bun.version}`;

    const platformPolicy =
      process.platform === 'win32'
        ? `## Platform Policy (Windows)
- You are running on Windows. Do not assume GNU tools like \`grep\`, \`sed\`, or \`awk\` exist.
- Prefer Windows-native commands or file tools when they are more reliable.`
        : `## Platform Policy (POSIX)
- You are running on a POSIX system. Prefer UTF-8 and standard shell tools.
- Use file tools when they are simpler or more reliable than shell commands.`;

    return `# nanobot 🐈

You are nanobot, a helpful AI assistant.

## Runtime
${runtime}

## Workspace
Your workspace is at: ${this._workspace}
- Long-term memory: ${this._workspace}/memory/MEMORY.md (write important facts here)
- History log: ${this._workspace}/memory/HISTORY.md (grep-searchable). Each entry starts with [YYYY-MM-DD HH:MM].
- Custom skills: ${this._workspace}/skills/{skill-name}/SKILL.md

${platformPolicy}

## nanobot Guidelines
- State intent before tool calls, but NEVER predict or claim results before receiving them.
- Before modifying a file, read it first. Do not assume files or directories exist.
- After writing or editing a file, re-read it if accuracy matters.
- If a tool call fails, analyze the error before retrying with a different approach.
- Ask for clarification when the request is ambiguous.

Reply directly with text for conversations. Only use the 'message' tool to send to a specific chat channel.`;
  }

  private _buildRuntimeContext(channel?: string, chatId?: string): string {
    const now = new Date().toLocaleString('en-US', {
      year: 'numeric',
      month: '2-digit',
      day: '2-digit',
      hour: '2-digit',
      minute: '2-digit',
      weekday: 'long',
    });
    const tz = Intl.DateTimeFormat().resolvedOptions().timeZone;
    const lines = [`Current Time: ${now} (${tz})`];
    if (channel && chatId) {
      lines.push(`Channel: ${channel}`, `Chat ID: ${chatId}`);
    }
    return `${RUNTIME_CONTEXT_TAG}\n${lines.join('\n')}`;
  }

  private _loadBootstrapFiles(): string {
    const parts: string[] = [];
    for (const filename of BOOTSTRAP_FILES) {
      const path = join(this._workspace, filename);
      if (existsSync(path)) {
        const content = readFileSync(path, 'utf8');
        parts.push(`## ${filename}\n\n${content}`);
      }
    }
    return parts.join('\n\n');
  }
}
src/agent/loop.ts (Normal file, 449 lines)
@@ -0,0 +1,449 @@
import type { MessageBus } from '../bus/queue.ts';
import type { InboundMessage, OutboundMessage } from '../bus/types.ts';
import { sessionKeyOf } from '../bus/types.ts';
import type { ExecToolConfig } from '../config/types.ts';
import type { LLMProvider, ModelMessage } from '../provider/index.ts';
import { toolResultMessage } from '../provider/index.ts';
import type { Session } from '../session/manager.ts';
import { SessionManager } from '../session/manager.ts';
import type { CronService } from '../cron/service.ts';
import { ContextBuilder, RUNTIME_CONTEXT_TAG } from './context.ts';
import { MemoryConsolidator } from './memory.ts';
import { SubagentManager } from './subagent.ts';
import { ToolRegistry } from './tools/base.ts';
import { CronTool } from './tools/cron.ts';
import { EditFileTool, ListDirTool, ReadFileTool, WriteFileTool } from './tools/filesystem.ts';
import { MessageTool } from './tools/message.ts';
import { ExecTool } from './tools/shell.ts';
import { SpawnTool } from './tools/spawn.ts';
import { WebFetchTool, WebSearchTool } from './tools/web.ts';

const TOOL_RESULT_MAX_CHARS = 16_000;

export class AgentLoop {
  private _bus: MessageBus;
  private _provider: LLMProvider;
  private _workspace: string;
  private _model: string;
  private _maxIterations: number;
  private _running = false;
  private _activeTasks = new Map<string, AbortController[]>();

  private _ctx: ContextBuilder;
  private _sessions: SessionManager;
  private _tools: ToolRegistry;
  private _subagents: SubagentManager;
  private _consolidator: MemoryConsolidator;

  constructor(opts: {
    bus: MessageBus;
    provider: LLMProvider;
    workspace: string;
    model?: string;
    maxIterations?: number;
    contextWindowTokens?: number;
    braveApiKey?: string;
    webProxy?: string;
    execConfig?: ExecToolConfig;
    cronService?: CronService;
    restrictToWorkspace?: boolean;
    sessionManager?: SessionManager;
    sendProgress?: boolean;
    sendToolHints?: boolean;
  }) {
    this._bus = opts.bus;
    this._provider = opts.provider;
    this._workspace = opts.workspace;
    this._model = opts.model ?? opts.provider.defaultModel;
    this._maxIterations = opts.maxIterations ?? 40;

    const execConfig = opts.execConfig ?? {
      timeout: 120,
      denyPatterns: [],
      restrictToWorkspace: false,
    };
    this._ctx = new ContextBuilder(opts.workspace);
    this._sessions = opts.sessionManager ?? new SessionManager(opts.workspace);

    this._subagents = new SubagentManager({
      provider: opts.provider,
      workspace: opts.workspace,
      bus: opts.bus,
      model: this._model,
      braveApiKey: opts.braveApiKey,
      webProxy: opts.webProxy,
      execConfig,
      restrictToWorkspace: opts.restrictToWorkspace ?? false,
    });

    this._tools = new ToolRegistry();
    this._registerDefaultTools(opts);

    this._consolidator = new MemoryConsolidator({
      workspace: opts.workspace,
      provider: opts.provider,
      model: this._model,
      sessions: this._sessions,
      contextWindowTokens: opts.contextWindowTokens ?? 65536,
      buildMessages: (o) => this._ctx.buildMessages(o) as Array<Record<string, unknown>>,
      getToolDefs: () => this._tools.getDefinitions() as unknown as Array<Record<string, unknown>>,
    });
  }

  private _registerDefaultTools(opts: {
    braveApiKey?: string;
    webProxy?: string;
    execConfig?: ExecToolConfig;
    cronService?: CronService;
    restrictToWorkspace?: boolean;
  }): void {
    const allowed = opts.restrictToWorkspace ? this._workspace : undefined;
    const execConfig = opts.execConfig ?? {
      timeout: 120,
      denyPatterns: [],
      restrictToWorkspace: false,
    };

    this._tools.register(new ReadFileTool({ workspace: this._workspace, allowedDir: allowed }));
    this._tools.register(new WriteFileTool({ workspace: this._workspace, allowedDir: allowed }));
    this._tools.register(new EditFileTool({ workspace: this._workspace, allowedDir: allowed }));
    this._tools.register(new ListDirTool({ workspace: this._workspace }));
    this._tools.register(
      new ExecTool({
        workspacePath: this._workspace,
        timeoutS: execConfig.timeout,
        restrictToWorkspace: opts.restrictToWorkspace ?? false,
        pathAppend: execConfig.pathAppend,
      }),
    );
    this._tools.register(new WebSearchTool({ apiKey: opts.braveApiKey, proxy: opts.webProxy }));
    this._tools.register(new WebFetchTool({ proxy: opts.webProxy }));
    this._tools.register(new MessageTool((msg) => this._bus.publishOutbound(msg)));
    this._tools.register(new SpawnTool(this._subagents));
    if (opts.cronService) {
      this._tools.register(new CronTool(opts.cronService));
    }
  }

  private _setToolContext(channel: string, chatId: string, messageId?: string): void {
    const msgTool = this._tools.get('message');
    if (msgTool instanceof MessageTool) {
      msgTool.setContext(channel, chatId, messageId);
    }
    const spawnTool = this._tools.get('spawn');
    if (spawnTool instanceof SpawnTool) {
      spawnTool.setContext(`${channel}:${chatId}`);
    }
  }

  async run(): Promise<void> {
    this._running = true;
    console.info('[agent] Loop started');

    while (this._running) {
      // Poll with a 1s timeout so we can check _running
      const msg = await Promise.race([
        this._bus.consumeInbound(),
        new Promise<null>((r) => setTimeout(() => r(null), 1000)),
      ]);

      if (!msg) continue;

      if (msg.content.trim().toLowerCase() === '/stop') {
        await this._handleStop(msg);
      } else {
        const ac = new AbortController();
        const key = sessionKeyOf(msg);
        const list = this._activeTasks.get(key) ?? [];
        list.push(ac);
        this._activeTasks.set(key, list);

        void this._dispatch(msg, ac.signal).finally(() => {
          const cur = this._activeTasks.get(key) ?? [];
          const idx = cur.indexOf(ac);
          if (idx >= 0) cur.splice(idx, 1);
        });
      }
    }
  }

  stop(): void {
    this._running = false;
  }

  private async _handleStop(msg: InboundMessage): Promise<void> {
|
||||
const key = sessionKeyOf(msg);
|
||||
const controllers = this._activeTasks.get(key) ?? [];
|
||||
let cancelled = 0;
|
||||
for (const ac of controllers) {
|
||||
ac.abort();
|
||||
cancelled++;
|
||||
}
|
||||
const subCancelled = await this._subagents.cancelBySession(key);
|
||||
this._activeTasks.delete(key);
|
||||
|
||||
const total = cancelled + subCancelled;
|
||||
this._bus.publishOutbound({
|
||||
channel: msg.channel,
|
||||
chatId: msg.chatId,
|
||||
content: total > 0 ? `Stopped ${total} task(s).` : 'No active task to stop.',
|
||||
metadata: {},
|
||||
});
|
||||
}
|
||||
|
||||
private async _dispatch(msg: InboundMessage, signal: AbortSignal): Promise<void> {
|
||||
try {
|
||||
const response = await this._processMessage(msg, undefined, signal);
|
||||
if (response) {
|
||||
this._bus.publishOutbound(response);
|
||||
} else if (msg.channel === 'cli') {
|
||||
this._bus.publishOutbound({
|
||||
channel: msg.channel,
|
||||
chatId: msg.chatId,
|
||||
content: '',
|
||||
metadata: msg.metadata,
|
||||
});
|
||||
}
|
||||
} catch (err) {
|
||||
if ((err as Error).name === 'AbortError') {
|
||||
console.info(`[agent] Task aborted for session ${sessionKeyOf(msg)}`);
|
||||
return;
|
||||
}
|
||||
console.error(`[agent] Error processing message: ${String(err)}`);
|
||||
this._bus.publishOutbound({
|
||||
channel: msg.channel,
|
||||
chatId: msg.chatId,
|
||||
content: 'Sorry, I encountered an error.',
|
||||
metadata: {},
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
private async _processMessage(
|
||||
msg: InboundMessage,
|
||||
sessionKeyOverride?: string,
|
||||
signal?: AbortSignal,
|
||||
): Promise<OutboundMessage | null> {
|
||||
// System messages (subagent results) routed as "system" channel
|
||||
if (msg.channel === 'system') {
|
||||
const [channel, chatId] = msg.chatId.includes(':')
|
||||
? (msg.chatId.split(':', 2) as [string, string])
|
||||
: ['cli', msg.chatId];
|
||||
const key = `${channel}:${chatId}`;
|
||||
const session = this._sessions.getOrCreate(key);
|
||||
await this._consolidator.maybeConsolidateByTokens(session);
|
||||
this._setToolContext(channel, chatId);
|
||||
const messages = this._ctx.buildMessages({
|
||||
history: session.getHistory(0) as Array<Record<string, unknown>>,
|
||||
currentMessage: msg.content,
|
||||
channel,
|
||||
chatId,
|
||||
});
|
||||
const { finalContent, allMessages } = await this._runAgentLoop(
|
||||
messages as ModelMessage[],
|
||||
signal,
|
||||
);
|
||||
this._saveTurn(session, allMessages, 1 + session.getHistory(0).length);
|
||||
this._sessions.save(session);
|
||||
await this._consolidator.maybeConsolidateByTokens(session);
|
||||
return {
|
||||
channel,
|
||||
chatId,
|
||||
content: finalContent ?? 'Background task completed.',
|
||||
metadata: {},
|
||||
};
|
||||
}
|
||||
|
||||
const preview = msg.content.length > 80 ? `${msg.content.slice(0, 80)}...` : msg.content;
|
||||
console.info(`[agent] Message from ${msg.channel}:${msg.senderId}: ${preview}`);
|
||||
|
||||
const key = sessionKeyOverride ?? sessionKeyOf(msg);
|
||||
const session = this._sessions.getOrCreate(key);
|
||||
|
||||
// Slash commands
|
||||
const cmd = msg.content.trim().toLowerCase();
|
||||
if (cmd === '/new') {
|
||||
if (!(await this._consolidator.archiveUnconsolidated(session))) {
|
||||
return {
|
||||
channel: msg.channel,
|
||||
chatId: msg.chatId,
|
||||
content: 'Memory archival failed, session not cleared. Please try again.',
|
||||
metadata: {},
|
||||
};
|
||||
}
|
||||
session.clear();
|
||||
this._sessions.save(session);
|
||||
this._sessions.invalidate(session.key);
|
||||
return {
|
||||
channel: msg.channel,
|
||||
chatId: msg.chatId,
|
||||
content: 'New session started.',
|
||||
metadata: {},
|
||||
};
|
||||
}
|
||||
if (cmd === '/help') {
|
||||
return {
|
||||
channel: msg.channel,
|
||||
chatId: msg.chatId,
|
||||
content:
|
||||
'nanobot commands:\n/new — Start a new conversation\n/stop — Stop the current task\n/help — Show available commands',
|
||||
metadata: {},
|
||||
};
|
||||
}
|
||||
|
||||
await this._consolidator.maybeConsolidateByTokens(session);
|
||||
|
||||
this._setToolContext(msg.channel, msg.chatId, msg.metadata['message_id'] as string | undefined);
|
||||
const msgTool = this._tools.get('message');
|
||||
if (msgTool instanceof MessageTool) msgTool.startTurn();
|
||||
|
||||
const history = session.getHistory(0) as Array<Record<string, unknown>>;
|
||||
const initialMessages = this._ctx.buildMessages({
|
||||
history,
|
||||
currentMessage: msg.content,
|
||||
channel: msg.channel,
|
||||
chatId: msg.chatId,
|
||||
});
|
||||
|
||||
const onProgress = async (content: string, opts?: { toolHint?: boolean }) => {
|
||||
this._bus.publishOutbound({
|
||||
channel: msg.channel,
|
||||
chatId: msg.chatId,
|
||||
content,
|
||||
metadata: { ...msg.metadata, _progress: true, _toolHint: opts?.toolHint ?? false },
|
||||
});
|
||||
};
|
||||
|
||||
const { finalContent, allMessages } = await this._runAgentLoop(
|
||||
initialMessages as ModelMessage[],
|
||||
signal,
|
||||
onProgress,
|
||||
);
|
||||
|
||||
this._saveTurn(session, allMessages, 1 + history.length);
|
||||
this._sessions.save(session);
|
||||
await this._consolidator.maybeConsolidateByTokens(session);
|
||||
|
||||
// If MessageTool already sent a response in this turn, suppress the final reply
|
||||
if (msgTool instanceof MessageTool && msgTool._sentInTurn) return null;
|
||||
|
||||
const fc = finalContent ?? "I've completed processing but have no response to give.";
|
||||
const fpreview = fc.length > 120 ? `${fc.slice(0, 120)}...` : fc;
|
||||
console.info(`[agent] Response to ${msg.channel}:${msg.senderId}: ${fpreview}`);
|
||||
return { channel: msg.channel, chatId: msg.chatId, content: fc, metadata: msg.metadata };
|
||||
}
|
||||
|
||||
private async _runAgentLoop(
|
||||
initialMessages: ModelMessage[],
|
||||
signal?: AbortSignal,
|
||||
onProgress?: (content: string, opts?: { toolHint?: boolean }) => Promise<void>,
|
||||
): Promise<{ finalContent: string | null; allMessages: ModelMessage[] }> {
|
||||
let messages = [...initialMessages];
|
||||
let finalContent: string | null = null;
|
||||
|
||||
for (let i = 0; i < this._maxIterations; i++) {
|
||||
if (signal?.aborted) break;
|
||||
|
||||
const { response, responseMessages } = await this._provider.chatWithRetry({
|
||||
messages,
|
||||
tools: this._tools.getDefinitions(),
|
||||
model: this._model,
|
||||
});
|
||||
|
||||
if (response.finishReason === 'error') {
|
||||
console.error(`[agent] LLM error: ${String(response.content).slice(0, 200)}`);
|
||||
finalContent = response.content ?? 'Sorry, I encountered an error calling the AI model.';
|
||||
break;
|
||||
}
|
||||
|
||||
// Append assistant + any tool-result messages from the SDK
|
||||
messages.push(...responseMessages);
|
||||
|
||||
if (response.toolCalls.length > 0) {
|
||||
if (onProgress) {
|
||||
if (response.content) await onProgress(response.content);
|
||||
const hint = response.toolCalls
|
||||
.map((tc) => {
|
||||
let display = '';
|
||||
|
||||
const firstVal = Object.values(tc.arguments)[0];
|
||||
if (typeof firstVal === 'string') {
|
||||
display = `"${firstVal.slice(0, 40) + (firstVal.length > 40 ? '…' : '')}"`;
|
||||
}
|
||||
|
||||
return `${tc.name}(${display})`;
|
||||
})
|
||||
.join(', ');
|
||||
await onProgress(hint, { toolHint: true });
|
||||
}
|
||||
|
||||
// Execute each tool call and append tool-result messages
|
||||
for (const tc of response.toolCalls) {
|
||||
if (signal?.aborted) break;
|
||||
console.info(`[agent] Tool: ${tc.name}(${JSON.stringify(tc.arguments).slice(0, 200)})`);
|
||||
const result = await this._tools.execute(tc.name, tc.arguments);
|
||||
messages.push(toolResultMessage(tc.id, tc.name, result));
|
||||
}
|
||||
} else {
|
||||
finalContent = response.content;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (finalContent === null) {
|
||||
console.warn(`[agent] Max iterations (${this._maxIterations}) reached`);
|
||||
finalContent = `I reached the maximum number of tool call iterations (${this._maxIterations}) without completing the task.`;
|
||||
}
|
||||
|
||||
return { finalContent, allMessages: messages };
|
||||
}
|
||||
|
||||
private _saveTurn(session: Session, messages: ModelMessage[], skip: number): void {
|
||||
const now = new Date().toISOString();
|
||||
for (const m of messages.slice(skip)) {
|
||||
const entry = { ...m } as Record<string, unknown>;
|
||||
const role = entry['role'];
|
||||
const content = entry['content'];
|
||||
|
||||
// Skip empty assistant messages
|
||||
if (role === 'assistant' && !content && !(entry['tool_calls'] as unknown[])?.length) continue;
|
||||
|
||||
// Truncate large tool results
|
||||
if (
|
||||
role === 'tool' &&
|
||||
typeof content === 'string' &&
|
||||
content.length > TOOL_RESULT_MAX_CHARS
|
||||
) {
|
||||
entry['content'] = `${content.slice(0, TOOL_RESULT_MAX_CHARS)}\n... (truncated)`;
|
||||
}
|
||||
|
||||
// Strip runtime context tag from user messages
|
||||
if (role === 'user') {
|
||||
if (typeof content === 'string' && content.startsWith(RUNTIME_CONTEXT_TAG)) {
|
||||
const parts = content.split('\n\n', 2);
|
||||
if (parts.length > 1 && parts[1]!.trim()) {
|
||||
entry['content'] = parts[1];
|
||||
} else {
|
||||
continue;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
entry['timestamp'] = now;
|
||||
session.messages.push(entry as import('../session/types.ts').SessionMessage);
|
||||
}
|
||||
session.updatedAt = now;
|
||||
}
|
||||
|
||||
async processDirect(
|
||||
content: string,
|
||||
sessionKey = 'cli:direct',
|
||||
channel = 'cli',
|
||||
chatId = 'direct',
|
||||
_onProgress?: (content: string, opts?: { toolHint?: boolean }) => Promise<void>,
|
||||
): Promise<string> {
|
||||
const msg: InboundMessage = { channel, senderId: 'user', chatId, content, metadata: {} };
|
||||
const response = await this._processMessage(msg, sessionKey, undefined);
|
||||
return response?.content ?? '';
|
||||
}
|
||||
}
|
||||
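The dispatch loop above keeps one `AbortController` per in-flight task, grouped by session key, so that `/stop` can cancel everything running for a single chat without touching other sessions. A minimal standalone sketch of that bookkeeping (names here are illustrative, not the repo's API):

```typescript
// Per-session cancellation: each dispatched task registers an AbortController
// under its session key; a stop request aborts every controller for that key.
const activeTasks = new Map<string, AbortController[]>();

// Register a new task under a session key and return its controller.
function track(key: string): AbortController {
  const ac = new AbortController();
  const list = activeTasks.get(key) ?? [];
  list.push(ac);
  activeTasks.set(key, list);
  return ac;
}

// Abort all tasks for a session key; returns how many were cancelled.
function stopAll(key: string): number {
  const list = activeTasks.get(key) ?? [];
  for (const ac of list) ac.abort();
  activeTasks.delete(key);
  return list.length;
}
```

Long-running work then checks `ac.signal.aborted` (or passes the signal to `fetch` and friends) to exit early.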
290 src/agent/memory.ts Normal file
@@ -0,0 +1,290 @@
import { appendFileSync, existsSync, mkdirSync, readFileSync, writeFileSync } from 'node:fs';
import { join } from 'node:path';
import { getEncoding } from 'js-tiktoken';
import type { LLMProvider, ModelMessage } from '../provider/index.ts';
import type { Session } from '../session/manager.ts';
import type { SessionManager } from '../session/manager.ts';

const SAVE_MEMORY_TOOL = [
  {
    type: 'function' as const,
    function: {
      name: 'save_memory',
      description: 'Save the memory consolidation result to persistent storage.',
      parameters: {
        type: 'object',
        properties: {
          history_entry: {
            type: 'string',
            description:
              'A paragraph summarizing key events/decisions/topics. Start with [YYYY-MM-DD HH:MM]. Include detail useful for grep search.',
          },
          memory_update: {
            type: 'string',
            description:
              'Full updated long-term memory as markdown. Include all existing facts plus new ones. Return unchanged if nothing new.',
          },
        },
        required: ['history_entry', 'memory_update'],
      },
    },
  },
];

const MAX_CONSOLIDATION_ROUNDS = 5;

// ---------------------------------------------------------------------------
// Token estimation
// ---------------------------------------------------------------------------

let _enc: ReturnType<typeof getEncoding> | null = null;
function getEnc(): ReturnType<typeof getEncoding> {
  if (!_enc) _enc = getEncoding('cl100k_base');
  return _enc;
}

function estimateTokens(text: string): number {
  try {
    return getEnc().encode(text).length;
  } catch {
    return Math.ceil(text.length / 4);
  }
}

function estimateMessageTokens(msg: Record<string, unknown>): number {
  const content = msg['content'];
  const text = typeof content === 'string' ? content : JSON.stringify(content ?? '');
  return estimateTokens(text) + 4; // role + separators
}

function estimateMessagesTokens(msgs: Array<Record<string, unknown>>): number {
  return msgs.reduce((acc, m) => acc + estimateMessageTokens(m), 0);
}

// ---------------------------------------------------------------------------
// MemoryStore
// ---------------------------------------------------------------------------

export class MemoryStore {
  private _memoryFile: string;
  private _historyFile: string;

  constructor(workspace: string) {
    const dir = join(workspace, 'memory');
    mkdirSync(dir, { recursive: true });
    this._memoryFile = join(dir, 'MEMORY.md');
    this._historyFile = join(dir, 'HISTORY.md');
  }

  readLongTerm(): string {
    if (!existsSync(this._memoryFile)) return '';
    return readFileSync(this._memoryFile, 'utf8');
  }

  writeLongTerm(content: string): void {
    writeFileSync(this._memoryFile, content, 'utf8');
  }

  appendHistory(entry: string): void {
    appendFileSync(this._historyFile, entry.trimEnd() + '\n\n', 'utf8');
  }

  getMemoryContext(): string {
    const mem = this.readLongTerm();
    return mem ? `## Long-term Memory\n${mem}` : '';
  }

  async consolidate(
    messages: Array<Record<string, unknown>>,
    provider: LLMProvider,
    model: string,
  ): Promise<boolean> {
    if (messages.length === 0) return true;

    const currentMemory = this.readLongTerm();

    const formatted = messages
      .filter((m) => m['content'])
      .map((m) => {
        const ts = typeof m['timestamp'] === 'string' ? m['timestamp'].slice(0, 16) : '?';
        const role = (typeof m['role'] === 'string' ? m['role'] : 'unknown').toUpperCase();
        const content =
          typeof m['content'] === 'string' ? m['content'] : JSON.stringify(m['content']);
        return `[${ts}] ${role}: ${content}`;
      })
      .join('\n');

    const prompt = `Process this conversation and call the save_memory tool with your consolidation.

## Current Long-term Memory
${currentMemory || '(empty)'}

## Conversation to Process
${formatted}`;

    const callMessages: ModelMessage[] = [
      {
        role: 'system',
        content:
          'You are a memory consolidation agent. Call the save_memory tool with your consolidation of the conversation.',
      },
      { role: 'user', content: prompt },
    ];

    try {
      const { response } = await provider.chatWithRetry({
        messages: callMessages,
        tools: SAVE_MEMORY_TOOL,
        model,
        toolChoice: 'required',
      });

      const tc = response.toolCalls[0];
      if (!tc) {
        console.warn('[memory] Consolidation: LLM did not call save_memory');
        return false;
      }

      const entry =
        typeof tc.arguments['history_entry'] === 'string' ? tc.arguments['history_entry'] : null;
      const update =
        typeof tc.arguments['memory_update'] === 'string' ? tc.arguments['memory_update'] : null;

      if (entry) this.appendHistory(entry);
      if (update && update !== currentMemory) this.writeLongTerm(update);

      console.info(`[memory] Consolidated ${messages.length} messages`);
      return true;
    } catch (err) {
      console.error(`[memory] Consolidation failed: ${String(err)}`);
      return false;
    }
  }
}

// ---------------------------------------------------------------------------
// MemoryConsolidator
// ---------------------------------------------------------------------------

export class MemoryConsolidator {
  private _store: MemoryStore;
  private _provider: LLMProvider;
  private _model: string;
  private _sessions: SessionManager;
  private _contextWindowTokens: number;
  private _buildMessages: (opts: {
    history: Array<Record<string, unknown>>;
    currentMessage: string;
    channel?: string;
    chatId?: string;
  }) => Array<Record<string, unknown>>;
  private _getToolDefs: () => Array<Record<string, unknown>>;
  private _locks = new Map<string, Promise<void>>();

  constructor(opts: {
    workspace: string;
    provider: LLMProvider;
    model: string;
    sessions: SessionManager;
    contextWindowTokens: number;
    buildMessages: (opts: {
      history: Array<Record<string, unknown>>;
      currentMessage: string;
      channel?: string;
      chatId?: string;
    }) => Array<Record<string, unknown>>;
    getToolDefs: () => Array<Record<string, unknown>>;
  }) {
    this._store = new MemoryStore(opts.workspace);
    this._provider = opts.provider;
    this._model = opts.model;
    this._sessions = opts.sessions;
    this._contextWindowTokens = opts.contextWindowTokens;
    this._buildMessages = opts.buildMessages;
    this._getToolDefs = opts.getToolDefs;
  }

  get store(): MemoryStore {
    return this._store;
  }

  private async _withLock(key: string, fn: () => Promise<void>): Promise<void> {
    // Chain promises per session key to serialize consolidation
    const prev = this._locks.get(key) ?? Promise.resolve();
    const next = prev.then(fn);
    this._locks.set(
      key,
      next.catch(() => {}),
    );
    await next;
  }

  async archiveUnconsolidated(session: Session): Promise<boolean> {
    let ok = false;
    await this._withLock(session.key, async () => {
      const snapshot = session.messages.slice(session.lastConsolidated) as Array<
        Record<string, unknown>
      >;
      if (snapshot.length === 0) {
        ok = true;
        return;
      }
      ok = await this._store.consolidate(snapshot, this._provider, this._model);
    });
    return ok;
  }

  async maybeConsolidateByTokens(session: Session): Promise<void> {
    if (!session.messages.length || this._contextWindowTokens <= 0) return;

    await this._withLock(session.key, async () => {
      const target = Math.floor(this._contextWindowTokens / 2);

      for (let round = 0; round < MAX_CONSOLIDATION_ROUNDS; round++) {
        const history = session.getHistory(0) as Array<Record<string, unknown>>;
        const probe = this._buildMessages({ history, currentMessage: '[token-probe]' });
        const toolTokens = estimateTokens(JSON.stringify(this._getToolDefs()));
        const estimated =
          estimateMessagesTokens(probe as Array<Record<string, unknown>>) + toolTokens;

        if (estimated < this._contextWindowTokens) return; // fits — done

        // Find a boundary that removes enough tokens
        const boundary = this._pickBoundary(session, Math.max(1, estimated - target));
        if (boundary === null) return;

        const chunk = session.messages.slice(session.lastConsolidated, boundary) as Array<
          Record<string, unknown>
        >;
        if (chunk.length === 0) return;

        console.info(
          `[memory] Token consolidation round ${round}: ~${estimated} tokens, chunk=${chunk.length} msgs`,
        );
        if (!(await this._store.consolidate(chunk, this._provider, this._model))) return;

        session.lastConsolidated = boundary;
        this._sessions.save(session);
      }
    });
  }

  private _pickBoundary(session: Session, tokensToRemove: number): number | null {
    const start = session.lastConsolidated;
    if (start >= session.messages.length || tokensToRemove <= 0) return null;

    let removed = 0;
    let lastBoundary: number | null = null;

    for (let idx = start; idx < session.messages.length; idx++) {
      const msg = session.messages[idx]!;
      if (idx > start && msg.role === 'user') {
        lastBoundary = idx;
        if (removed >= tokensToRemove) return lastBoundary;
      }
      removed += estimateMessageTokens(msg as Record<string, unknown>);
    }

    return lastBoundary;
  }
}
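`_pickBoundary` above chooses where to cut history for consolidation: it walks forward from the last consolidated message and returns the first user-message index reached once enough estimated tokens have accumulated, so a cut never lands in the middle of an exchange. A self-contained sketch of the same logic, using the chars/4 fallback estimator and simplified types (not the repo's exact signatures):

```typescript
// Simplified message shape for the sketch.
interface Msg {
  role: string;
  content: string;
}

// Fallback heuristic from the file above: ~4 chars per token, +4 for role/separators.
const estimate = (m: Msg) => Math.ceil(m.content.length / 4) + 4;

// Return the first user-message index (after `start`) reached once at least
// `tokensToRemove` estimated tokens have been passed; else the last user
// boundary seen, or null if there is none.
function pickBoundary(messages: Msg[], start: number, tokensToRemove: number): number | null {
  if (start >= messages.length || tokensToRemove <= 0) return null;
  let removed = 0;
  let lastBoundary: number | null = null;
  for (let idx = start; idx < messages.length; idx++) {
    const msg = messages[idx]!;
    if (idx > start && msg.role === 'user') {
      lastBoundary = idx;
      if (removed >= tokensToRemove) return lastBoundary;
    }
    removed += estimate(msg);
  }
  return lastBoundary;
}
```

Cutting only at user-message boundaries keeps assistant/tool-call/tool-result triples intact in whatever remains of the live history.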
197 src/agent/skills.ts Normal file
@@ -0,0 +1,197 @@
import { existsSync, readdirSync, readFileSync } from 'node:fs';
import { join } from 'node:path';

const BUILTIN_SKILLS_DIR = join(import.meta.dir, '..', '..', 'skills');

interface SkillEntry {
  name: string;
  path: string;
  source: 'workspace' | 'builtin';
}

interface SkillMeta {
  description?: string;
  always?: boolean;
  metadata?: string; // JSON string with nanobot-specific config
}

interface NanobotMeta {
  always?: boolean;
  description?: string;
  requires?: {
    bins?: string[];
    env?: string[];
  };
}

export class SkillsLoader {
  private _workspace: string;
  private _workspaceSkills: string;
  private _builtinSkills: string;

  constructor(workspace: string, builtinSkillsDir?: string) {
    this._workspace = workspace;
    this._workspaceSkills = join(workspace, 'skills');
    this._builtinSkills = builtinSkillsDir ?? BUILTIN_SKILLS_DIR;
  }

  listSkills(filterUnavailable = true): SkillEntry[] {
    const skills: SkillEntry[] = [];

    // Workspace skills take priority
    if (existsSync(this._workspaceSkills)) {
      for (const name of readdirSync(this._workspaceSkills)) {
        const skillFile = join(this._workspaceSkills, name, 'SKILL.md');
        if (existsSync(skillFile)) {
          skills.push({ name, path: skillFile, source: 'workspace' });
        }
      }
    }

    // Builtin skills — skip if workspace already has one with the same name
    if (existsSync(this._builtinSkills)) {
      for (const name of readdirSync(this._builtinSkills)) {
        const skillFile = join(this._builtinSkills, name, 'SKILL.md');
        if (existsSync(skillFile) && !skills.some((s) => s.name === name)) {
          skills.push({ name, path: skillFile, source: 'builtin' });
        }
      }
    }

    if (!filterUnavailable) return skills;
    return skills.filter((s) => this._isAvailable(this._getNanobotMeta(s.name)));
  }

  loadSkill(name: string): string | null {
    const workspacePath = join(this._workspaceSkills, name, 'SKILL.md');
    if (existsSync(workspacePath)) return readFileSync(workspacePath, 'utf8');

    const builtinPath = join(this._builtinSkills, name, 'SKILL.md');
    if (existsSync(builtinPath)) return readFileSync(builtinPath, 'utf8');

    return null;
  }

  loadSkillsForContext(names: string[]): string {
    const parts: string[] = [];
    for (const name of names) {
      const content = this.loadSkill(name);
      if (content) {
        parts.push(`### Skill: ${name}\n\n${this._stripFrontmatter(content)}`);
      }
    }
    return parts.join('\n\n---\n\n');
  }

  buildSkillsSummary(): string {
    const all = this.listSkills(false);
    if (all.length === 0) return '';

    const esc = (s: string) =>
      s.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;');

    const lines = ['<skills>'];
    for (const s of all) {
      const meta = this._getNanobotMeta(s.name);
      const available = this._isAvailable(meta);
      const desc = esc(meta.description ?? s.name);
      lines.push(`  <skill available="${available}">`);
      lines.push(`    <name>${esc(s.name)}</name>`);
      lines.push(`    <description>${desc}</description>`);
      lines.push(`    <location>${s.path}</location>`);
      if (!available) {
        const missing = this._getMissing(meta);
        if (missing) lines.push(`    <requires>${esc(missing)}</requires>`);
      }
      lines.push('  </skill>');
    }
    lines.push('</skills>');
    return lines.join('\n');
  }

  getAlwaysSkills(): string[] {
    return this.listSkills(true)
      .filter((s) => {
        const raw = this._getRawMeta(s.name);
        const nano = this._getNanobotMeta(s.name);
        return nano.always === true || raw?.always === true;
      })
      .map((s) => s.name);
  }

  // ---------------------------------------------------------------------------
  // Private helpers
  // ---------------------------------------------------------------------------

  private _stripFrontmatter(content: string): string {
    if (!content.startsWith('---')) return content;
    const match = /^---\n[\s\S]*?\n---\n/.exec(content);
    return match ? content.slice(match[0].length).trimStart() : content;
  }

  private _getRawMeta(name: string): SkillMeta | null {
    const content = this.loadSkill(name);
    if (!content?.startsWith('---')) return null;
    const match = /^---\n([\s\S]*?)\n---/.exec(content);
    if (!match) return null;
    const meta: SkillMeta = {};
    for (const line of match[1]!.split('\n')) {
      const colon = line.indexOf(':');
      if (colon < 0) continue;
      const key = line.slice(0, colon).trim();
      const val = line
        .slice(colon + 1)
        .trim()
        .replace(/^["']|["']$/g, '');
      if (key === 'description') meta.description = val;
      if (key === 'always') meta.always = val === 'true';
      if (key === 'metadata') meta.metadata = val;
    }
    return meta;
  }

  private _getNanobotMeta(name: string): NanobotMeta {
    const raw = this._getRawMeta(name);
    if (!raw?.metadata) return { description: raw?.description, always: raw?.always };
    try {
      const parsed = JSON.parse(raw.metadata) as Record<string, unknown>;
      const nano = (parsed['nanobot'] ?? parsed['openclaw'] ?? parsed) as NanobotMeta;
      return { description: raw.description, always: raw.always, ...nano };
    } catch {
      return { description: raw.description, always: raw.always };
    }
  }

  private _isAvailable(meta: NanobotMeta): boolean {
    const req = meta.requires;
    if (!req) return true;
    for (const bin of req.bins ?? []) {
      if (!this._which(bin)) return false;
    }
    for (const env of req.env ?? []) {
      if (!process.env[env]) return false;
    }
    return true;
  }

  private _getMissing(meta: NanobotMeta): string {
    const missing: string[] = [];
    const req = meta.requires;
    if (!req) return '';
    for (const bin of req.bins ?? []) {
      if (!this._which(bin)) missing.push(`CLI: ${bin}`);
    }
    for (const env of req.env ?? []) {
      if (!process.env[env]) missing.push(`ENV: ${env}`);
    }
    return missing.join(', ');
  }

  private _which(bin: string): boolean {
    try {
      const result = Bun.spawnSync(['which', bin]);
      return result.exitCode === 0;
    } catch {
      return false;
    }
  }
}
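The frontmatter handling in `SkillsLoader` boils down to two small string operations on a SKILL.md file: strip a leading `---` block before injecting the body into context, and read simple `key: value` pairs out of that block. A standalone sketch of both, detached from the class (the generic `parseFrontmatter` return type is a simplification of the `SkillMeta` handling above):

```typescript
// Remove a leading `---\n...\n---\n` frontmatter block, returning the body.
function stripFrontmatter(content: string): string {
  if (!content.startsWith('---')) return content;
  const match = /^---\n[\s\S]*?\n---\n/.exec(content);
  return match ? content.slice(match[0].length).trimStart() : content;
}

// Parse `key: value` lines from the frontmatter block into a plain record,
// trimming surrounding single or double quotes from values.
function parseFrontmatter(content: string): Record<string, string> {
  const out: Record<string, string> = {};
  if (!content.startsWith('---')) return out;
  const match = /^---\n([\s\S]*?)\n---/.exec(content);
  if (!match) return out;
  for (const line of match[1]!.split('\n')) {
    const colon = line.indexOf(':');
    if (colon < 0) continue;
    out[line.slice(0, colon).trim()] = line
      .slice(colon + 1)
      .trim()
      .replace(/^["']|["']$/g, '');
  }
  return out;
}
```

Note this is a deliberately minimal parser: values are split at the first colon and nested YAML is not supported, which is why richer config rides in the `metadata` key as a JSON string.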
161 src/agent/subagent.ts Normal file
@@ -0,0 +1,161 @@
import type { MessageBus } from '../bus/queue.ts';
import type { ExecToolConfig } from '../config/types.ts';
import type { LLMProvider, ModelMessage } from '../provider/index.ts';
import { toolResultMessage } from '../provider/index.ts';
import { SessionManager } from '../session/manager.ts';
import { ToolRegistry } from './tools/base.ts';
import { EditFileTool, ListDirTool, ReadFileTool, WriteFileTool } from './tools/filesystem.ts';
import { ExecTool } from './tools/shell.ts';
import { WebFetchTool, WebSearchTool } from './tools/web.ts';

const MAX_SUBAGENT_ITERATIONS = 15;

interface SubagentTask {
  controller: AbortController;
  promise: Promise<void>;
}

export class SubagentManager {
  private _provider: LLMProvider;
  private _workspace: string;
  private _bus: MessageBus;
  private _model: string;
  private _braveApiKey: string | undefined;
  private _webProxy: string | undefined;
  private _execConfig: ExecToolConfig;
  private _restrictToWorkspace: boolean;
  private _tasks: Map<string, SubagentTask[]> = new Map();
  private _sessions: SessionManager;

  constructor(opts: {
    provider: LLMProvider;
    workspace: string;
    bus: MessageBus;
    model: string;
    braveApiKey?: string;
    webProxy?: string;
    execConfig: ExecToolConfig;
    restrictToWorkspace: boolean;
  }) {
    this._provider = opts.provider;
    this._workspace = opts.workspace;
    this._bus = opts.bus;
    this._model = opts.model;
    this._braveApiKey = opts.braveApiKey;
    this._webProxy = opts.webProxy;
    this._execConfig = opts.execConfig;
    this._restrictToWorkspace = opts.restrictToWorkspace;
    this._sessions = new SessionManager(opts.workspace);
  }

  spawn(sessionKey: string, task: string): string {
    const controller = new AbortController();
    const taskId = `subagent_${Date.now()}`;

    const promise = this._run(task, sessionKey, controller.signal).catch((err) => {
      console.error(`[subagent] Task failed for ${sessionKey}: ${err}`);
    });

    const entry: SubagentTask = { controller, promise };
    const list = this._tasks.get(sessionKey) ?? [];
    list.push(entry);
    this._tasks.set(sessionKey, list);

    // Clean up when done
    void promise.finally(() => {
      const current = this._tasks.get(sessionKey) ?? [];
      const idx = current.indexOf(entry);
      if (idx >= 0) current.splice(idx, 1);
    });

    return taskId;
  }

  async cancelBySession(sessionKey: string): Promise<number> {
    const tasks = this._tasks.get(sessionKey) ?? [];
    let count = 0;
    for (const t of tasks) {
      t.controller.abort();
      count++;
    }
    await Promise.allSettled(tasks.map((t) => t.promise));
    this._tasks.delete(sessionKey);
    return count;
  }

  private async _run(task: string, sessionKey: string, signal: AbortSignal): Promise<void> {
    const tools = this._buildTools();

    const systemPrompt = `You are a background subagent. Complete the following task autonomously using the available tools. When done, write a brief summary of what you accomplished. Do not ask for clarification — make your best effort.

Task: ${task}`;

    const messages: ModelMessage[] = [{ role: 'user', content: systemPrompt }];

    for (let i = 0; i < MAX_SUBAGENT_ITERATIONS; i++) {
      if (signal.aborted) break;

      const { response, responseMessages } = await this._provider.chatWithRetry({
        messages,
        tools: tools.getDefinitions(),
        model: this._model,
      });

      if (signal.aborted) break;

      messages.push(...responseMessages);

      if (response.finishReason !== 'tool-calls' || response.toolCalls.length === 0) {
        // Done — report result back to main agent via system channel
        const content = response.content ?? 'Subagent completed with no output.';
        this._bus.publishInbound({
          channel: 'system',
          senderId: 'subagent',
          chatId: sessionKey,
          content: `Subagent result:\n${content}`,
|
||||
metadata: {},
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
// Execute tool calls
|
||||
for (const tc of response.toolCalls) {
|
||||
if (signal.aborted) break;
|
||||
const result = await tools.execute(tc.name, tc.arguments);
|
||||
messages.push(toolResultMessage(tc.id, tc.name, result));
|
||||
}
|
||||
}
|
||||
|
||||
if (!signal.aborted) {
|
||||
this._bus.publishInbound({
|
||||
channel: 'system',
|
||||
senderId: 'subagent',
|
||||
chatId: sessionKey,
|
||||
content: 'Subagent reached max iterations without completing the task.',
|
||||
metadata: {},
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
private _buildTools(): ToolRegistry {
|
||||
const registry = new ToolRegistry();
|
||||
const allowed = this._restrictToWorkspace ? this._workspace : undefined;
|
||||
|
||||
registry.register(new ReadFileTool({ workspace: this._workspace, allowedDir: allowed }));
|
||||
registry.register(new WriteFileTool({ workspace: this._workspace, allowedDir: allowed }));
|
||||
registry.register(new EditFileTool({ workspace: this._workspace, allowedDir: allowed }));
|
||||
registry.register(new ListDirTool({ workspace: this._workspace }));
|
||||
registry.register(
|
||||
new ExecTool({
|
||||
workspacePath: this._workspace,
|
||||
timeoutS: this._execConfig.timeout,
|
||||
restrictToWorkspace: this._restrictToWorkspace,
|
||||
pathAppend: this._execConfig.pathAppend,
|
||||
}),
|
||||
);
|
||||
registry.register(new WebSearchTool({ apiKey: this._braveApiKey, proxy: this._webProxy }));
|
||||
registry.register(new WebFetchTool({ proxy: this._webProxy }));
|
||||
|
||||
return registry;
|
||||
}
|
||||
}
|
||||
src/agent/tools/base.ts (new file, 55 lines)
@@ -0,0 +1,55 @@
import type { ToolDefinition } from '../../provider/types.ts';

/** Safely extract a string from tool args (avoids no-base-to-string lint). */
export function strArg(args: Record<string, unknown>, key: string, fallback = ''): string {
  const v = args[key];
  return typeof v === 'string' ? v : fallback;
}

export interface Tool {
  readonly name: string;
  readonly description: string;
  /** JSON Schema `properties` object for the tool's parameters. */
  readonly parameters: Record<string, unknown>;
  /** Which parameters are required. Defaults to all keys if not set. */
  readonly required?: string[];
  execute(args: Record<string, unknown>): Promise<string>;
}

export class ToolRegistry {
  private _tools = new Map<string, Tool>();

  register(tool: Tool): void {
    this._tools.set(tool.name, tool);
  }

  get(name: string): Tool | undefined {
    return this._tools.get(name);
  }

  getDefinitions(): ToolDefinition[] {
    return [...this._tools.values()].map((t) => ({
      type: 'function' as const,
      function: {
        name: t.name,
        description: t.description,
        parameters: {
          type: 'object',
          properties: t.parameters,
          required: t.required ?? Object.keys(t.parameters),
        },
      },
    }));
  }

  async execute(name: string, args: Record<string, unknown>): Promise<string> {
    const tool = this._tools.get(name);
    if (!tool) return `Error: unknown tool "${name}"`;

    try {
      return await tool.execute(args);
    } catch (err) {
      return `Error: ${String(err)}\n[Analyze the error above and retry with a different approach if needed.]`;
    }
  }
}
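The `Tool` interface and `ToolRegistry` above can be exercised with a minimal in-memory tool. A standalone sketch follows, assuming an ESM runtime with top-level await (e.g. Bun or tsx); the `echo` tool and the reduced registry are illustrative mirrors of the code above, not part of the repo:

```typescript
// Minimal standalone mirror of the Tool/ToolRegistry contract above.
// The echo tool is hypothetical and exists only for illustration.
interface Tool {
  readonly name: string;
  readonly description: string;
  readonly parameters: Record<string, unknown>;
  readonly required?: string[];
  execute(args: Record<string, unknown>): Promise<string>;
}

class ToolRegistry {
  private _tools = new Map<string, Tool>();

  register(tool: Tool): void {
    this._tools.set(tool.name, tool);
  }

  // Unknown tools and thrown errors both come back as error strings,
  // so the agent loop never has to catch.
  async execute(name: string, args: Record<string, unknown>): Promise<string> {
    const tool = this._tools.get(name);
    if (!tool) return `Error: unknown tool "${name}"`;
    try {
      return await tool.execute(args);
    } catch (err) {
      return `Error: ${String(err)}`;
    }
  }
}

const echo: Tool = {
  name: 'echo',
  description: 'Echo the text argument back.',
  parameters: { text: { type: 'string', description: 'Text to echo.' } },
  async execute(args) {
    return typeof args['text'] === 'string' ? args['text'] : '';
  },
};

const registry = new ToolRegistry();
registry.register(echo);
const out = await registry.execute('echo', { text: 'hi' });
const missing = await registry.execute('nope', {});
console.log(out, missing);
```

Returning errors as strings (rather than throwing) is what lets tool failures flow back to the LLM as ordinary tool results.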
src/agent/tools/cron.ts (new file, 126 lines)
@@ -0,0 +1,126 @@
import type { CronService } from '../../cron/service.ts';
import type { CronJob, CronPayload, CronSchedule } from '../../cron/types.ts';
import { strArg } from './base.ts';
import type { Tool } from './base.ts';

export class CronTool implements Tool {
  readonly name = 'cron';
  readonly description = `Manage scheduled tasks (cron jobs). Actions:
- list: list all jobs
- add: create a new job (requires name, schedule, message)
- remove: delete a job by id
- enable/disable: toggle a job
- run: execute a job immediately
- status: summary of all jobs`;

  readonly parameters = {
    action: {
      type: 'string',
      enum: ['list', 'add', 'remove', 'enable', 'disable', 'run', 'status'],
      description: 'Action to perform.',
    },
    id: { type: 'string', description: 'Job ID (for remove/enable/disable/run).' },
    name: { type: 'string', description: 'Human-readable job name (for add).' },
    message: { type: 'string', description: 'Message to inject when the job runs (for add).' },
    schedule: {
      type: 'object',
      description:
        'Schedule definition. One of: {kind:"at",atMs:number}, {kind:"every",everyMs:number}, {kind:"cron",expr:string,tz?:string}',
    },
    deleteAfterRun: { type: 'boolean', description: 'Delete job after first execution (for add).' },
  };
  readonly required = ['action'];

  private _service: CronService;

  constructor(service: CronService) {
    this._service = service;
  }

  async execute(args: Record<string, unknown>): Promise<string> {
    const action = strArg(args, 'action');

    switch (action) {
      case 'list':
        return this._list();
      case 'status':
        return this._service.status();
      case 'add':
        return this._add(args);
      case 'remove': {
        const id = strArg(args, 'id');
        if (!id) return 'Error: id is required for remove.';
        return this._service.removeJob(id) ? `Job ${id} removed.` : `Error: job ${id} not found.`;
      }
      case 'enable': {
        const id = strArg(args, 'id');
        if (!id) return 'Error: id is required for enable.';
        return this._service.enableJob(id, true)
          ? `Job ${id} enabled.`
          : `Error: job ${id} not found.`;
      }
      case 'disable': {
        const id = strArg(args, 'id');
        if (!id) return 'Error: id is required for disable.';
        return this._service.enableJob(id, false)
          ? `Job ${id} disabled.`
          : `Error: job ${id} not found.`;
      }
      case 'run': {
        const id = strArg(args, 'id');
        if (!id) return 'Error: id is required for run.';
        return this._service.runJob(id);
      }
      default:
        return `Error: unknown action "${action}". Valid: list, add, remove, enable, disable, run, status.`;
    }
  }

  private _list(): string {
    const jobs = this._service.listJobs();
    if (jobs.length === 0) return 'No cron jobs.';
    return jobs
      .map((j: CronJob) => {
        const schedule = this._fmtSchedule(j.schedule);
        const next = j.state.nextRunAtMs ? new Date(j.state.nextRunAtMs).toISOString() : 'N/A';
        return `${j.id} [${j.enabled ? 'ON' : 'OFF'}] "${j.name}" ${schedule} next=${next}`;
      })
      .join('\n');
  }

  private _fmtSchedule(s: CronSchedule): string {
    if (s.kind === 'at') return `at ${new Date(s.atMs).toISOString()}`;
    if (s.kind === 'every') return `every ${s.everyMs}ms`;
    return `cron(${s.expr}${s.tz ? ` ${s.tz}` : ''})`;
  }

  private _add(args: Record<string, unknown>): string {
    const name = strArg(args, 'name').trim();
    const message = strArg(args, 'message').trim();
    if (!name) return 'Error: name is required for add.';
    if (!message) return 'Error: message is required for add.';

    const rawSchedule = args['schedule'];
    if (!rawSchedule || typeof rawSchedule !== 'object') {
      return 'Error: schedule is required for add. Format: {kind:"cron",expr:"0 9 * * *"} or {kind:"every",everyMs:60000} or {kind:"at",atMs:1234567890}';
    }

    const id = `job_${Date.now()}_${Math.random().toString(36).slice(2, 7)}`;
    const payload: CronPayload = { kind: 'agent_turn', message, deliver: false };
    const schedule = rawSchedule as CronSchedule;

    try {
      const job = this._service.addJob({
        id,
        name,
        enabled: true,
        schedule,
        payload,
        deleteAfterRun: Boolean(args['deleteAfterRun']),
      });
      return `Job created: ${job.id} "${job.name}"`;
    } catch (err) {
      return `Error creating job: ${String(err)}`;
    }
  }
}
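The `CronSchedule` union accepted by the tool has three shapes (one-shot `at`, interval `every`, and cron-expression `cron`). A standalone sketch of the union and the `_fmtSchedule` formatting logic from the tool above, reduced to a free function for illustration:

```typescript
// Illustrative mirror of CronSchedule and the _fmtSchedule method above.
type CronSchedule =
  | { kind: 'at'; atMs: number }       // run once at an absolute epoch-ms time
  | { kind: 'every'; everyMs: number } // run on a fixed interval
  | { kind: 'cron'; expr: string; tz?: string }; // standard cron expression

function fmtSchedule(s: CronSchedule): string {
  if (s.kind === 'at') return `at ${new Date(s.atMs).toISOString()}`;
  if (s.kind === 'every') return `every ${s.everyMs}ms`;
  return `cron(${s.expr}${s.tz ? ` ${s.tz}` : ''})`;
}

const everyLabel = fmtSchedule({ kind: 'every', everyMs: 60000 });
const cronLabel = fmtSchedule({ kind: 'cron', expr: '0 9 * * *', tz: 'UTC' });
console.log(everyLabel, cronLabel);
```

Because the tool receives `schedule` as an untyped JSON object from the model, the discriminant `kind` is what the service ultimately validates against.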
src/agent/tools/filesystem.ts (new file, 264 lines)
@@ -0,0 +1,264 @@
import { existsSync, mkdirSync, readdirSync, readFileSync, statSync, writeFileSync } from 'node:fs';
import { dirname, relative, resolve } from 'node:path';
import { strArg } from './base.ts';
import type { Tool } from './base.ts';

const MAX_READ_CHARS = 128_000;
const MAX_ENTRIES = 2000;
const IGNORED_DIRS = new Set([
  '.git',
  'node_modules',
  '__pycache__',
  '.venv',
  'venv',
  'dist',
  '.next',
  'build',
]);

// ---------------------------------------------------------------------------
// read_file
// ---------------------------------------------------------------------------

export class ReadFileTool implements Tool {
  readonly name = 'read_file';
  readonly description =
    'Read a file. Returns line-numbered content. Use offset/limit for large files.';
  readonly parameters = {
    path: { type: 'string', description: 'Absolute or workspace-relative path.' },
    offset: { type: 'number', description: 'First line to read (1-indexed).' },
    limit: { type: 'number', description: 'Max lines to return.' },
  };
  readonly required = ['path'];

  private _workspace: string;
  private _allowedDir: string | undefined;

  constructor(opts: { workspace: string; allowedDir?: string }) {
    this._workspace = opts.workspace;
    this._allowedDir = opts.allowedDir;
  }

  async execute(args: Record<string, unknown>): Promise<string> {
    const rawPath = strArg(args, 'path');
    const absPath = rawPath.startsWith('/') ? rawPath : resolve(this._workspace, rawPath);

    if (this._allowedDir && !absPath.startsWith(this._allowedDir)) {
      return `Error: path is outside the allowed directory (${this._allowedDir}).`;
    }
    if (!existsSync(absPath)) return `Error: file not found: ${absPath}`;

    let content: string;
    try {
      content = readFileSync(absPath, 'utf8');
    } catch (err) {
      return `Error reading file: ${String(err)}`;
    }

    const lines = content.split('\n');
    const offset = Math.max(1, Number(args['offset'] ?? 1));
    const limit = args['limit'] ? Number(args['limit']) : undefined;

    const start = offset - 1;
    const end = limit !== undefined ? start + limit : lines.length;
    const slice = lines.slice(start, end);

    const numbered = slice.map((l, i) => `${start + i + 1}: ${l}`).join('\n');
    const truncated =
      numbered.length > MAX_READ_CHARS
        ? numbered.slice(0, MAX_READ_CHARS) + '\n... (truncated)'
        : numbered;

    const totalLines = lines.length;
    const header = `File: ${absPath} (${totalLines} lines total)\n`;
    return header + truncated;
  }
}

// ---------------------------------------------------------------------------
// write_file
// ---------------------------------------------------------------------------

export class WriteFileTool implements Tool {
  readonly name = 'write_file';
  readonly description = 'Write content to a file, creating parent directories as needed.';
  readonly parameters = {
    path: { type: 'string', description: 'Absolute or workspace-relative path.' },
    content: { type: 'string', description: 'Content to write.' },
  };
  readonly required = ['path', 'content'];

  private _workspace: string;
  private _allowedDir: string | undefined;

  constructor(opts: { workspace: string; allowedDir?: string }) {
    this._workspace = opts.workspace;
    this._allowedDir = opts.allowedDir;
  }

  async execute(args: Record<string, unknown>): Promise<string> {
    const rawPath = strArg(args, 'path');
    const absPath = rawPath.startsWith('/') ? rawPath : resolve(this._workspace, rawPath);

    if (this._allowedDir && !absPath.startsWith(this._allowedDir)) {
      return `Error: path is outside the allowed directory (${this._allowedDir}).`;
    }

    const content = strArg(args, 'content');
    try {
      mkdirSync(dirname(absPath), { recursive: true });
      writeFileSync(absPath, content, 'utf8');
      return `Written ${content.length} chars to ${absPath}`;
    } catch (err) {
      return `Error writing file: ${String(err)}`;
    }
  }
}

// ---------------------------------------------------------------------------
// edit_file
// ---------------------------------------------------------------------------

export class EditFileTool implements Tool {
  readonly name = 'edit_file';
  readonly description =
    'Replace an exact string in a file. The oldString must match the file content exactly (including whitespace). Use replaceAll to replace every occurrence.';
  readonly parameters = {
    path: { type: 'string', description: 'Absolute or workspace-relative path.' },
    oldString: { type: 'string', description: 'Exact text to find and replace.' },
    newString: { type: 'string', description: 'Replacement text.' },
    replaceAll: { type: 'boolean', description: 'Replace every occurrence (default false).' },
  };
  readonly required = ['path', 'oldString', 'newString'];

  private _workspace: string;
  private _allowedDir: string | undefined;

  constructor(opts: { workspace: string; allowedDir?: string }) {
    this._workspace = opts.workspace;
    this._allowedDir = opts.allowedDir;
  }

  async execute(args: Record<string, unknown>): Promise<string> {
    const rawPath = strArg(args, 'path');
    const absPath = rawPath.startsWith('/') ? rawPath : resolve(this._workspace, rawPath);

    if (this._allowedDir && !absPath.startsWith(this._allowedDir)) {
      return `Error: path is outside the allowed directory (${this._allowedDir}).`;
    }
    if (!existsSync(absPath)) return `Error: file not found: ${absPath}`;

    const oldString = strArg(args, 'oldString');
    const newString = strArg(args, 'newString');
    const replaceAll = Boolean(args['replaceAll']);

    let content: string;
    try {
      content = readFileSync(absPath, 'utf8');
    } catch (err) {
      return `Error reading file: ${String(err)}`;
    }

    if (!content.includes(oldString)) {
      // Try trimmed-line fallback
      const trimmedOld = oldString.trim();
      const trimmedContent = content.trim();
      if (!trimmedContent.includes(trimmedOld)) {
        return `Error: oldString not found in ${absPath}. Hint: read the file first to verify the exact text.`;
      }
    }

    let count = 0;
    let updated: string;
    if (replaceAll) {
      updated = content.split(oldString).join(newString);
      count = content.split(oldString).length - 1;
    } else {
      const idx = content.indexOf(oldString);
      if (idx === -1) return `Error: oldString not found in ${absPath}.`;
      // Check for multiple occurrences
      const second = content.indexOf(oldString, idx + 1);
      if (second !== -1) {
        return `Error: oldString found multiple times in ${absPath}. Provide more surrounding context or use replaceAll.`;
      }
      updated = content.slice(0, idx) + newString + content.slice(idx + oldString.length);
      count = 1;
    }

    try {
      writeFileSync(absPath, updated, 'utf8');
      return `Replaced ${count} occurrence(s) in ${absPath}`;
    } catch (err) {
      return `Error writing file: ${String(err)}`;
    }
  }
}

// ---------------------------------------------------------------------------
// list_dir
// ---------------------------------------------------------------------------

export class ListDirTool implements Tool {
  readonly name = 'list_dir';
  readonly description = 'List files in a directory. Use recursive=true for a full tree.';
  readonly parameters = {
    path: { type: 'string', description: 'Directory path (absolute or workspace-relative).' },
    recursive: { type: 'boolean', description: 'Include all subdirectories recursively.' },
  };
  readonly required = ['path'];

  private _workspace: string;

  constructor(opts: { workspace: string }) {
    this._workspace = opts.workspace;
  }

  async execute(args: Record<string, unknown>): Promise<string> {
    const rawPath = strArg(args, 'path', '.');
    const absPath = rawPath.startsWith('/') ? rawPath : resolve(this._workspace, rawPath);
    const recursive = Boolean(args['recursive']);

    if (!existsSync(absPath)) return `Error: path not found: ${absPath}`;
    if (!statSync(absPath).isDirectory()) return `Error: not a directory: ${absPath}`;

    const entries: string[] = [];
    this._collect(absPath, absPath, recursive, entries);

    if (entries.length >= MAX_ENTRIES) {
      entries.push(`... (truncated at ${MAX_ENTRIES} entries)`);
    }

    return entries.join('\n') || '(empty directory)';
  }

  private _collect(base: string, dir: string, recursive: boolean, out: string[]): void {
    if (out.length >= MAX_ENTRIES) return;

    let names: string[];
    try {
      names = readdirSync(dir);
    } catch {
      return;
    }

    for (const name of names.sort()) {
      if (out.length >= MAX_ENTRIES) return;
      const full = resolve(dir, name);
      let st: ReturnType<typeof statSync>;
      try {
        st = statSync(full);
      } catch {
        continue;
      }

      const rel = relative(base, full);
      if (st.isDirectory()) {
        if (IGNORED_DIRS.has(name)) continue;
        out.push(`${rel}/`);
        if (recursive) this._collect(base, full, true, out);
      } else {
        out.push(rel);
      }
    }
  }
}
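The core safety rule in `edit_file` is that a non-`replaceAll` edit is rejected when `oldString` matches more than once, forcing the model to supply enough surrounding context to make the edit unambiguous. A standalone sketch of just that replacement rule (the `editOnce` helper is hypothetical, extracted here only for illustration):

```typescript
// Single-occurrence replacement: mirrors the non-replaceAll branch of
// EditFileTool.execute above, without the file I/O.
function editOnce(content: string, oldString: string, newString: string): string | Error {
  const idx = content.indexOf(oldString);
  if (idx === -1) return new Error('oldString not found');
  // Reject ambiguous edits: a second match means the caller must add context.
  if (content.indexOf(oldString, idx + 1) !== -1) {
    return new Error('oldString found multiple times; provide more surrounding context');
  }
  return content.slice(0, idx) + newString + content.slice(idx + oldString.length);
}

const ok = editOnce('const a = 1;\nconst b = 2;', 'b = 2', 'b = 3');
const ambiguous = editOnce('x x', 'x', 'y');
console.log(ok, ambiguous);
```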
src/agent/tools/message.ts (new file, 54 lines)
@@ -0,0 +1,54 @@
import type { OutboundMessage } from '../../bus/types.ts';
import { strArg } from './base.ts';
import type { Tool } from './base.ts';

type SendCallback = (msg: OutboundMessage) => void;

export class MessageTool implements Tool {
  readonly name = 'message';
  readonly description =
    'Send a message to the current chat channel. Use this to send intermediate updates or when you want to reply to a specific channel/chat rather than just returning your final response.';
  readonly parameters = {
    content: { type: 'string', description: 'Message content to send.' },
  };
  readonly required = ['content'];

  private _send: SendCallback;
  private _channel = 'cli';
  private _chatId = 'direct';
  private _messageId: string | undefined;
  _sentInTurn = false;

  constructor(sendCallback: SendCallback) {
    this._send = sendCallback;
  }

  setContext(channel: string, chatId: string, messageId?: string): void {
    this._channel = channel;
    this._chatId = chatId;
    this._messageId = messageId;
    this._sentInTurn = false;
  }

  startTurn(): void {
    this._sentInTurn = false;
  }

  async execute(args: Record<string, unknown>): Promise<string> {
    const content = strArg(args, 'content').trim();
    if (!content) return 'Error: content is required.';

    const meta: Record<string, unknown> = {};
    if (this._messageId) meta['message_id'] = this._messageId;

    this._send({
      channel: this._channel,
      chatId: this._chatId,
      content,
      metadata: meta,
    });

    this._sentInTurn = true;
    return 'Message sent.';
  }
}
src/agent/tools/shell.ts (new file, 107 lines)
@@ -0,0 +1,107 @@
import { spawn } from 'node:child_process';
import { resolve } from 'node:path';
import { strArg } from './base.ts';
import type { Tool } from './base.ts';

const DEFAULT_TIMEOUT_S = 120;
const MAX_TIMEOUT_S = 600;
const OUTPUT_MAX_CHARS = 32_000;

const DEFAULT_DENY_PATTERNS = [/rm\s+-rf\s+\/(?!\S)/, /mkfs/, /dd\s+if=/, /:\(\)\s*\{.*\}/];

export class ExecTool implements Tool {
  readonly name = 'exec';
  readonly description =
    'Execute a shell command in the workspace. Returns combined stdout+stderr. Use for file ops, build tools, running scripts, etc.';

  readonly parameters = {
    command: { type: 'string', description: 'Shell command to run.' },
    timeout: {
      type: 'number',
      description: `Timeout in seconds (max ${MAX_TIMEOUT_S}, default ${DEFAULT_TIMEOUT_S}).`,
    },
    workdir: { type: 'string', description: 'Working directory (defaults to workspace root).' },
  };
  readonly required = ['command'];

  private _workspacePath: string;
  private _timeoutS: number;
  private _restrictToWorkspace: boolean;
  private _pathAppend: string | undefined;

  constructor(opts: {
    workspacePath: string;
    timeoutS?: number;
    restrictToWorkspace?: boolean;
    pathAppend?: string;
  }) {
    this._workspacePath = opts.workspacePath;
    this._timeoutS = opts.timeoutS ?? DEFAULT_TIMEOUT_S;
    this._restrictToWorkspace = opts.restrictToWorkspace ?? false;
    this._pathAppend = opts.pathAppend;
  }

  async execute(args: Record<string, unknown>): Promise<string> {
    const command = strArg(args, 'command').trim();
    if (!command) return 'Error: command is required.';

    const timeoutS = Math.min(Number(args['timeout'] ?? this._timeoutS), MAX_TIMEOUT_S);

    for (const pattern of DEFAULT_DENY_PATTERNS) {
      if (pattern.test(command)) {
        return `Error: command matches a blocked pattern (${pattern.source}).`;
      }
    }

    let cwd = this._workspacePath;
    if (args['workdir']) {
      const requested = resolve(strArg(args, 'workdir'));
      if (this._restrictToWorkspace && !requested.startsWith(this._workspacePath)) {
        return `Error: workdir must be inside workspace (${this._workspacePath}).`;
      }
      cwd = requested;
    }

    const env: Record<string, string> = { ...process.env } as Record<string, string>;
    if (this._pathAppend) {
      env['PATH'] = `${env['PATH'] ?? '/usr/bin'}:${this._pathAppend}`;
    }

    return new Promise<string>((resolveP) => {
      let output = '';
      let timedOut = false;

      const proc = spawn('sh', ['-c', command], { cwd, env });

      const onData = (chunk: Buffer | string) => {
        output += chunk.toString();
        if (output.length > OUTPUT_MAX_CHARS) {
          output = output.slice(-OUTPUT_MAX_CHARS);
        }
      };

      proc.stdout.on('data', onData);
      proc.stderr.on('data', onData);

      const timer = setTimeout(() => {
        timedOut = true;
        proc.kill('SIGKILL');
      }, timeoutS * 1000);

      proc.on('close', (code) => {
        clearTimeout(timer);
        if (timedOut) {
          resolveP(`[Timed out after ${timeoutS}s]\n${output}`);
        } else {
          const suffix = code !== 0 ? `\n[Exit code: ${code}]` : '';
          resolveP((output || '(no output)') + suffix);
        }
      });

      proc.on('error', (err) => {
        clearTimeout(timer);
        resolveP(`Error spawning process: ${err}`);
      });
    });
  }
}
src/agent/tools/spawn.ts (new file, 35 lines)
@@ -0,0 +1,35 @@
import type { SubagentManager } from '../subagent.ts';
import { strArg } from './base.ts';
import type { Tool } from './base.ts';

export class SpawnTool implements Tool {
  readonly name = 'spawn';
  readonly description =
    'Spawn a background subagent to handle a long-running task autonomously. The subagent has access to filesystem, shell, and web tools. It will report its result back when done.';
  readonly parameters = {
    task: {
      type: 'string',
      description: 'Full description of the task for the subagent to complete.',
    },
  };
  readonly required = ['task'];

  private _manager: SubagentManager;
  private _sessionKey = 'cli:direct';

  constructor(manager: SubagentManager) {
    this._manager = manager;
  }

  setContext(sessionKey: string): void {
    this._sessionKey = sessionKey;
  }

  async execute(args: Record<string, unknown>): Promise<string> {
    const task = strArg(args, 'task').trim();
    if (!task) return 'Error: task is required.';

    const taskId = this._manager.spawn(this._sessionKey, task);
    return `Subagent spawned (id: ${taskId}). It will report back when done.`;
  }
}
src/agent/tools/web.ts (new file, 194 lines)
@@ -0,0 +1,194 @@
import { Readability } from '@mozilla/readability';
import { parse as parseHtml } from 'node-html-parser';
import { strArg } from './base.ts';
import type { Tool } from './base.ts';

const FETCH_TIMEOUT_MS = 30_000;
const MAX_CONTENT_CHARS = 50_000;

// ---------------------------------------------------------------------------
// web_search (Brave Search API)
// ---------------------------------------------------------------------------

export class WebSearchTool implements Tool {
  readonly name = 'web_search';
  readonly description =
    'Search the web using Brave Search. Returns a list of results with titles, URLs, and snippets.';
  readonly parameters = {
    query: { type: 'string', description: 'Search query.' },
    count: { type: 'number', description: 'Number of results (default 10, max 20).' },
  };
  readonly required = ['query'];

  private _apiKey: string | undefined;
  private _proxy: string | undefined;

  constructor(opts: { apiKey?: string; proxy?: string } = {}) {
    this._apiKey = opts.apiKey;
    this._proxy = opts.proxy;
  }

  async execute(args: Record<string, unknown>): Promise<string> {
    const query = strArg(args, 'query').trim();
    if (!query) return 'Error: query is required.';
    if (!this._apiKey)
      return 'Error: BRAVE_API_KEY not configured (set tools.web.braveApiKey in config).';

    const count = Math.min(Number(args['count'] ?? 10), 20);
    const url = `https://api.search.brave.com/res/v1/web/search?q=${encodeURIComponent(query)}&count=${count}`;

    try {
      const res = await fetchWithTimeout(url, {
        headers: {
          Accept: 'application/json',
          'Accept-Encoding': 'gzip',
          'X-Subscription-Token': this._apiKey,
        },
      });

      if (!res.ok) return `Error: Brave Search API returned ${res.status}: ${await res.text()}`;

      const data = (await res.json()) as {
        web?: { results?: Array<{ title: string; url: string; description: string }> };
      };
      const results = data.web?.results ?? [];

      if (results.length === 0) return 'No results found.';

      return results
        .map((r, i) => `${i + 1}. ${r.title}\n   ${r.url}\n   ${r.description ?? ''}`)
        .join('\n\n');
    } catch (err) {
      return `Error: ${String(err)}`;
    }
  }
}

// ---------------------------------------------------------------------------
// web_fetch
// ---------------------------------------------------------------------------

export class WebFetchTool implements Tool {
  readonly name = 'web_fetch';
  readonly description =
    'Fetch a URL and return its content. HTML pages are extracted to readable text. Use mode="raw" for JSON/XML/plain text.';
  readonly parameters = {
    url: { type: 'string', description: 'URL to fetch.' },
    mode: {
      type: 'string',
      enum: ['markdown', 'text', 'raw'],
      description: 'Output mode (default: text).',
    },
  };
  readonly required = ['url'];

  private _proxy: string | undefined;

  constructor(opts: { proxy?: string } = {}) {
    this._proxy = opts.proxy;
  }

  async execute(args: Record<string, unknown>): Promise<string> {
    const url = strArg(args, 'url').trim();
    if (!url) return 'Error: url is required.';

    const mode = strArg(args, 'mode', 'text');

    try {
      const res = await fetchWithTimeout(url, {
        headers: { 'User-Agent': 'Mozilla/5.0 (compatible; nanobot/1.0)' },
      });

      if (!res.ok) return `Error: HTTP ${res.status} from ${url}`;

      const contentType = res.headers.get('content-type') ?? '';
      const body = await res.text();

      if (
        mode === 'raw' ||
        (!contentType.includes('text/html') && !body.trimStart().startsWith('<'))
      ) {
        const truncated =
          body.length > MAX_CONTENT_CHARS
            ? body.slice(0, MAX_CONTENT_CHARS) + '\n... (truncated)'
            : body;
        return truncated;
      }

      // Parse HTML with Readability
      // Readability needs a DOM — build one from node-html-parser
      const root = parseHtml(body);

      // Minimal JSDOM-compatible interface for Readability
      // biome-ignore lint/suspicious/noExplicitAny: Readability duck-typing requires any
      const doc = makePseudoDocument(url, body, root) as any;
      const reader = new Readability(doc);
      const article = reader.parse();

      const title = article?.title ?? '';
      const textContent = article?.textContent ?? stripTags(body);
      const trimmed = textContent.replace(/\n{3,}/g, '\n\n').trim();
      const truncated =
        trimmed.length > MAX_CONTENT_CHARS
          ? trimmed.slice(0, MAX_CONTENT_CHARS) + '\n... (truncated)'
          : trimmed;

      return title ? `# ${title}\n\n${truncated}` : truncated;
    } catch (err) {
      return `Error fetching ${url}: ${String(err)}`;
    }
  }
}

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

function fetchWithTimeout(url: string, init: RequestInit = {}): Promise<Response> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), FETCH_TIMEOUT_MS);
  return fetch(url, { ...init, signal: controller.signal }).finally(() => clearTimeout(timer));
|
||||
}
|
||||
|
||||
function stripTags(html: string): string {
|
||||
return html
|
||||
.replace(/<[^>]*>/g, ' ')
|
||||
.replace(/\s+/g, ' ')
|
||||
.trim();
|
||||
}
|
||||
|
||||
/** Build a minimal pseudo-document that satisfies Readability's interface. */
|
||||
function makePseudoDocument(
|
||||
url: string,
|
||||
html: string,
|
||||
root: ReturnType<typeof parseHtml>,
|
||||
): Record<string, unknown> {
|
||||
// node-html-parser's API is close enough for Readability's needs when
|
||||
// accessed via a proxy. We create a real DOMParser-like wrapper.
|
||||
// Bun/Node don't have DOMParser built-in, so we duck-type what Readability
|
||||
// needs: baseURI, documentURI, querySelector, querySelectorAll, innerHTML.
|
||||
const pseudoDoc = {
|
||||
baseURI: url,
|
||||
documentURI: url,
|
||||
URL: url,
|
||||
title: root.querySelector('title')?.text ?? '',
|
||||
documentElement: root,
|
||||
body: root.querySelector('body') ?? root,
|
||||
head: root.querySelector('head') ?? root,
|
||||
// biome-ignore lint/suspicious/noExplicitAny: Readability duck-typing
|
||||
querySelector: (sel: string) => root.querySelector(sel) as any,
|
||||
// biome-ignore lint/suspicious/noExplicitAny: Readability duck-typing
|
||||
querySelectorAll: (sel: string) => root.querySelectorAll(sel) as any,
|
||||
getElementsByTagName: (tag: string) => root.querySelectorAll(tag),
|
||||
createElement: (_tag: string) => ({ innerHTML: '', textContent: '', style: {} }),
|
||||
createTreeWalker: () => ({ nextNode: () => null }),
|
||||
createRange: () => ({ selectNodeContents: () => {}, cloneContents: () => null }),
|
||||
// biome-ignore lint/suspicious/noExplicitAny: Readability duck-typing
|
||||
get innerHTML() {
|
||||
return html;
|
||||
},
|
||||
location: { href: url },
|
||||
};
|
||||
|
||||
return pseudoDoc;
|
||||
}
|
||||
src/bus/queue.ts (51 lines, new file)
@@ -0,0 +1,51 @@
import type { InboundMessage, OutboundMessage } from './types.ts';

/** A simple async FIFO queue that mirrors Python's asyncio.Queue behaviour. */
class AsyncQueue<T> {
  private _items: T[] = [];
  private _waiters: Array<(value: T) => void> = [];

  enqueue(item: T): void {
    const waiter = this._waiters.shift();
    if (waiter) {
      waiter(item);
    } else {
      this._items.push(item);
    }
  }

  dequeue(): Promise<T> {
    const item = this._items.shift();
    if (item !== undefined) {
      return Promise.resolve(item);
    }
    return new Promise<T>((resolve) => {
      this._waiters.push(resolve);
    });
  }

  get size(): number {
    return this._items.length;
  }
}

export class MessageBus {
  private _inbound = new AsyncQueue<InboundMessage>();
  private _outbound = new AsyncQueue<OutboundMessage>();

  publishInbound(msg: InboundMessage): void {
    this._inbound.enqueue(msg);
  }

  consumeInbound(): Promise<InboundMessage> {
    return this._inbound.dequeue();
  }

  publishOutbound(msg: OutboundMessage): void {
    this._outbound.enqueue(msg);
  }

  consumeOutbound(): Promise<OutboundMessage> {
    return this._outbound.dequeue();
  }
}
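The core of this file is the waiter hand-off: a `dequeue()` issued while the queue is empty parks a resolver, and the next `enqueue()` resolves that parked promise directly instead of buffering the item. A minimal standalone sketch (the class is re-declared here for illustration, since the diff's module isn't importable on its own):

```typescript
// Standalone copy of the AsyncQueue above, to demonstrate the hand-off order.
class AsyncQueue<T> {
  private items: T[] = [];
  private waiters: Array<(value: T) => void> = [];

  enqueue(item: T): void {
    const waiter = this.waiters.shift();
    if (waiter) waiter(item);       // a consumer is already waiting — hand off directly
    else this.items.push(item);     // nobody waiting — buffer the item
  }

  dequeue(): Promise<T> {
    const item = this.items.shift();
    if (item !== undefined) return Promise.resolve(item);
    return new Promise<T>((resolve) => this.waiters.push(resolve));
  }

  get size(): number {
    return this.items.length;
  }
}

async function demo(): Promise<string[]> {
  const q = new AsyncQueue<string>();
  const pending = q.dequeue(); // parks a waiter — queue is empty
  q.enqueue('first');          // resolves the parked waiter
  q.enqueue('second');         // no waiter left — buffered
  const a = await pending;
  const b = await q.dequeue(); // served from the buffer
  return [a, b];               // → ['first', 'second']
}
```

Note the `item !== undefined` check means the queue should not carry `undefined` as a value; the bus message types above never do.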
src/bus/types.ts (25 lines, new file)
@@ -0,0 +1,25 @@
import { z } from 'zod';

export const InboundMessageSchema = z.object({
  channel: z.string(),
  senderId: z.string(),
  chatId: z.string(),
  content: z.string(),
  media: z.array(z.string()).optional(),
  metadata: z.record(z.string(), z.unknown()).default(() => ({})),
});
export type InboundMessage = z.infer<typeof InboundMessageSchema>;

export const OutboundMessageSchema = z.object({
  channel: z.string(),
  chatId: z.string(),
  content: z.string().nullable(),
  media: z.array(z.string()).optional(),
  metadata: z.record(z.string(), z.unknown()).default(() => ({})),
});
export type OutboundMessage = z.infer<typeof OutboundMessageSchema>;

/** Derive a stable session key from an inbound message. */
export function sessionKeyOf(msg: InboundMessage): string {
  return `${msg.channel}:${msg.chatId}`;
}
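Because the Mattermost channel builds thread-aware `chatId`s of the form `<channel_id>:<root_id>`, `sessionKeyOf` produces keys that nest cleanly. A quick sketch (values are hypothetical, and the function is re-declared with a narrowed type so the snippet stands alone without zod):

```typescript
// Minimal re-declaration of sessionKeyOf above; field values are made up.
type InboundMessage = { channel: string; senderId: string; chatId: string; content: string };

function sessionKeyOf(msg: InboundMessage): string {
  return `${msg.channel}:${msg.chatId}`;
}

// A threaded Mattermost message: chatId already encodes channel + thread root.
const key = sessionKeyOf({
  channel: 'mattermost',
  senderId: 'u1',
  chatId: 'c42:root9',
  content: 'hi',
});
// key → 'mattermost:c42:root9'
```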
src/channels/base.ts (20 lines, new file)
@@ -0,0 +1,20 @@
import type { MessageBus } from '../bus/queue.ts';

export abstract class BaseChannel {
  protected _bus: MessageBus;

  constructor(bus: MessageBus) {
    this._bus = bus;
  }

  abstract get name(): string;
  abstract start(): Promise<void>;
  abstract stop(): void;
  abstract send(chatId: string, content: string, metadata?: Record<string, unknown>): Promise<void>;

  protected isAllowed(senderId: string, allowFrom: string[]): boolean {
    if (allowFrom.length === 0) return true;
    if (allowFrom.includes('*')) return true;
    return allowFrom.includes(senderId);
  }
}
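The `isAllowed` allowlist semantics are worth spelling out: an empty list and a `'*'` entry both mean "allow everyone"; otherwise only exact sender-id matches pass. A standalone copy of the predicate with its truth table:

```typescript
// Copy of BaseChannel.isAllowed above, pulled out as a free function.
function isAllowed(senderId: string, allowFrom: string[]): boolean {
  if (allowFrom.length === 0) return true;  // no list configured → open
  if (allowFrom.includes('*')) return true; // wildcard entry → open
  return allowFrom.includes(senderId);      // otherwise exact match only
}

const checks = [
  isAllowed('alice', []),      // true  — no list configured
  isAllowed('alice', ['*']),   // true  — wildcard
  isAllowed('alice', ['bob']), // false — not on the list
  isAllowed('bob', ['bob']),   // true  — exact match
];
```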
src/channels/manager.ts (72 lines, new file)
@@ -0,0 +1,72 @@
import type { MessageBus } from '../bus/queue.ts';
import type { OutboundMessage } from '../bus/types.ts';
import type { BaseChannel } from './base.ts';

export class ChannelManager {
  private _channels: BaseChannel[] = [];
  private _bus: MessageBus;
  private _running = false;

  constructor(bus: MessageBus) {
    this._bus = bus;
  }

  register(channel: BaseChannel): void {
    this._channels.push(channel);
  }

  async startAll(): Promise<void> {
    this._running = true;

    // Start all channels in parallel + the outbound dispatcher
    await Promise.all([
      ...this._channels.map((ch) =>
        ch.start().catch((err) => console.error(`[channel:${ch.name}] Failed to start: ${err}`)),
      ),
      this._dispatchOutbound(),
    ]);
  }

  stopAll(): void {
    this._running = false;
    for (const ch of this._channels) ch.stop();
  }

  private async _dispatchOutbound(): Promise<void> {
    while (this._running) {
      const msg: OutboundMessage | null = await Promise.race([
        this._bus.consumeOutbound(),
        new Promise<null>((r) => setTimeout(() => r(null), 1000)),
      ]);

      if (!msg) continue;
      if (!msg.content) continue; // empty progress marker etc.

      await this._route(msg);
    }
  }

  private async _route(msg: OutboundMessage): Promise<void> {
    // Progress/tool-hint messages — only forward if sendProgress/sendToolHints enabled
    const isProgress = msg.metadata?.['_progress'] === true;
    const isToolHint = msg.metadata?.['_toolHint'] === true;
    if (isProgress && isToolHint) return; // suppress raw tool hints from channel delivery
    if (isProgress) return; // intermediate thoughts — suppress for now (channels can opt in)

    const channel = this._channels.find((ch) => ch.name === msg.channel);
    if (!channel) {
      // CLI channel — handled by the CLI directly
      return;
    }

    const content = msg.content ?? '';
    const chatId = (msg.metadata?.['channel_id'] as string | undefined) ?? msg.chatId;
    const rootId = msg.metadata?.['root_id'] as string | undefined;

    try {
      await channel.send(chatId, content, rootId ? { rootId } : undefined);
    } catch (err) {
      console.error(`[channel:${channel.name}] Failed to send: ${String(err)}`);
    }
  }
}
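The one-second `Promise.race` timeout in `_dispatchOutbound` is what lets the `while (this._running)` loop notice a stop request even when the outbound queue is idle: the race yields `null` every second so the flag gets re-checked. A minimal sketch of that pattern (names here are illustrative, not from the module):

```typescript
// Race a possibly-slow producer against a timeout so a polling loop can
// wake up periodically and re-check its running flag.
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T | null> {
  return Promise.race([p, new Promise<null>((r) => setTimeout(() => r(null), ms))]);
}

async function demo(): Promise<string> {
  const never = new Promise<string>(() => {}); // producer that never yields
  const result = await withTimeout(never, 10);
  return result === null ? 'timed out, loop re-checks running flag' : result;
}
```

One consequence of this design: each timed-out race leaves the `consumeOutbound()` promise pending, so its eventual value is picked up by a later race iteration rather than lost.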
src/channels/mattermost.ts (246 lines, new file)
@@ -0,0 +1,246 @@
import type { MessageBus } from '../bus/queue.ts';
import type { MattermostConfig } from '../config/types.ts';
import { BaseChannel } from './base.ts';

const RECONNECT_DELAY_MS = 5000;
const MAX_RECONNECT_DELAY_MS = 60_000;

interface MMPost {
  id: string;
  channel_id: string;
  root_id: string;
  user_id: string;
  message: string;
  type: string;
}

interface MMChannel {
  id: string;
  type: string; // 'D' = direct, 'O' = open, 'P' = private
  name: string;
}

interface MMUser {
  id: string;
  username: string;
}

export class MattermostChannel extends BaseChannel {
  readonly name = 'mattermost';

  private _cfg: MattermostConfig;
  private _baseUrl: string;
  private _ws: WebSocket | null = null;
  private _botUserId: string | null = null;
  private _botUsername: string | null = null;
  private _running = false;
  private _reconnectDelay = RECONNECT_DELAY_MS;
  private _channelCache = new Map<string, MMChannel>();

  constructor(bus: MessageBus, cfg: MattermostConfig) {
    super(bus);
    this._cfg = cfg;
    const port = cfg.port === 443 ? '' : `:${cfg.port}`;
    this._baseUrl = `${cfg.scheme}://${cfg.serverUrl}${port}${cfg.basePath}`;
  }

  async start(): Promise<void> {
    this._running = true;

    // Fetch bot identity
    const me = await this._api<MMUser>('GET', '/api/v4/users/me');
    this._botUserId = me.id;
    this._botUsername = me.username;
    console.info(`[mattermost] Connected as @${me.username} (${me.id})`);

    this._connectWs();
  }

  stop(): void {
    this._running = false;
    this._ws?.close();
    this._ws = null;
  }

  async send(chatId: string, content: string, metadata?: Record<string, unknown>): Promise<void> {
    const rootId = typeof metadata?.['rootId'] === 'string' ? metadata['rootId'] : undefined;
    const replyInThread = this._cfg.replyInThread;

    const body: Record<string, unknown> = {
      channel_id: chatId,
      message: content,
    };

    if (replyInThread && rootId) {
      body['root_id'] = rootId;
    }

    try {
      await this._api<MMPost>('POST', '/api/v4/posts', body);
    } catch (err) {
      console.error(`[mattermost] Failed to send post: ${String(err)}`);
    }
  }

  // ---------------------------------------------------------------------------
  // WebSocket
  // ---------------------------------------------------------------------------

  private _connectWs(): void {
    if (!this._running) return;

    const wsScheme = this._cfg.scheme === 'https' ? 'wss' : 'ws';
    const port = this._cfg.port === 443 ? '' : `:${this._cfg.port}`;
    const wsUrl = `${wsScheme}://${this._cfg.serverUrl}${port}${this._cfg.basePath}/api/v4/websocket`;

    console.info(`[mattermost] Connecting to WebSocket: ${wsUrl}`);
    const ws = new WebSocket(wsUrl);
    this._ws = ws;

    ws.onopen = () => {
      this._reconnectDelay = RECONNECT_DELAY_MS;
      // Authenticate via hello message
      ws.send(
        JSON.stringify({
          seq: 1,
          action: 'authentication_challenge',
          data: { token: this._cfg.token },
        }),
      );
      console.info('[mattermost] WebSocket connected');
    };

    ws.onmessage = (event) => {
      let payload: Record<string, unknown>;
      try {
        payload = JSON.parse(String(event.data)) as Record<string, unknown>;
      } catch {
        return;
      }
      void this._onEvent(payload);
    };

    ws.onerror = (err) => {
      console.error('[mattermost] WebSocket error', err.type);
    };

    ws.onclose = () => {
      console.warn(`[mattermost] WebSocket closed, reconnecting in ${this._reconnectDelay}ms`);
      if (!this._running) return;
      setTimeout(() => {
        this._reconnectDelay = Math.min(this._reconnectDelay * 2, MAX_RECONNECT_DELAY_MS);
        this._connectWs();
      }, this._reconnectDelay);
    };
  }

  private async _onEvent(payload: Record<string, unknown>): Promise<void> {
    if (payload['event'] !== 'posted') return;

    const data = payload['data'] as Record<string, unknown> | undefined;
    if (!data) return;

    let post: MMPost;
    try {
      post = JSON.parse(String(data['post'])) as MMPost;
    } catch {
      return;
    }

    // Ignore own messages
    if (post.user_id === this._botUserId) return;

    // Ignore system messages
    if (post.type && post.type !== '') return;

    const channelType = typeof data['channel_type'] === 'string' ? data['channel_type'] : '';
    const channel = await this._getChannel(post.channel_id);
    if (!channel) return;

    const isDm = channelType === 'D' || channel.type === 'D';

    if (isDm) {
      if (!this._cfg.dm.enabled) return;
      if (!this.isAllowed(post.user_id, this._cfg.dm.allowFrom)) return;
    } else {
      // Group channel
      if (!this._shouldRespondInGroup(post.message, this._cfg.groupPolicy)) return;
      if (
        this._cfg.groupPolicy === 'allowlist' &&
        !this.isAllowed(post.user_id, this._cfg.groupAllowFrom)
      )
        return;
      if (!this.isAllowed(post.user_id, this._cfg.allowFrom)) return;
    }

    const text = this._stripBotMention(post.message);
    if (!text.trim()) return;

    // Build session key — thread-aware if configured
    let chatId = post.channel_id;
    if (this._cfg.replyInThread && post.root_id) {
      chatId = `${post.channel_id}:${post.root_id}`;
    } else if (this._cfg.replyInThread && !post.root_id) {
      chatId = `${post.channel_id}:${post.id}`;
    }

    this._bus.publishInbound({
      channel: 'mattermost',
      senderId: post.user_id,
      chatId,
      content: text,
      metadata: {
        message_id: post.id,
        channel_id: post.channel_id,
        root_id: post.root_id || post.id, // use post.id as root for new threads
      },
    });
  }

  private _shouldRespondInGroup(message: string, policy: string): boolean {
    if (policy === 'open') return true;
    if (policy === 'allowlist') return true;
    // 'mention' — only respond when bot is mentioned
    if (!this._botUsername) return false;
    return message.includes(`@${this._botUsername}`);
  }

  private _stripBotMention(text: string): string {
    if (!this._botUsername) return text;
    return text.replace(new RegExp(`^@${this._botUsername}\\s*`, 'i'), '').trim();
  }

  private async _getChannel(channelId: string): Promise<MMChannel | null> {
    const cached = this._channelCache.get(channelId);
    if (cached) return cached;
    try {
      const ch = await this._api<MMChannel>('GET', `/api/v4/channels/${channelId}`);
      this._channelCache.set(channelId, ch);
      return ch;
    } catch {
      return null;
    }
  }

  // ---------------------------------------------------------------------------
  // REST helper
  // ---------------------------------------------------------------------------

  private async _api<T>(method: string, path: string, body?: unknown): Promise<T> {
    const res = await fetch(`${this._baseUrl}${path}`, {
      method,
      headers: {
        Authorization: `Bearer ${this._cfg.token}`,
        'Content-Type': 'application/json',
      },
      body: body !== undefined ? JSON.stringify(body) : undefined,
    });

    if (!res.ok) {
      const text = await res.text();
      throw new Error(`Mattermost API ${method} ${path} → ${res.status}: ${text}`);
    }

    return res.json() as Promise<T>;
  }
}
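The reconnect handler waits the current `_reconnectDelay`, doubles it for the next attempt (capped at `MAX_RECONNECT_DELAY_MS`), and resets it to the base on a successful open. A standalone sketch of the resulting schedule (constants copied from the file; `backoffSchedule` is an illustrative helper, not part of the module):

```typescript
const RECONNECT_DELAY_MS = 5000;
const MAX_RECONNECT_DELAY_MS = 60_000;

// Delays waited before each of the first `attempts` reconnects, assuming
// every connection fails (i.e. the delay is never reset by onopen).
function backoffSchedule(attempts: number): number[] {
  const delays: number[] = [];
  let delay = RECONNECT_DELAY_MS;
  for (let i = 0; i < attempts; i++) {
    delays.push(delay);
    delay = Math.min(delay * 2, MAX_RECONNECT_DELAY_MS);
  }
  return delays;
}

// backoffSchedule(6) → [5000, 10000, 20000, 40000, 60000, 60000]
```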
src/cli/agent.ts (94 lines, new file)
@@ -0,0 +1,94 @@
import { createInterface } from 'node:readline';
import { Command } from 'commander';
import pc from 'picocolors';
import { AgentLoop } from '../agent/loop.ts';
import { MessageBus } from '../bus/queue.ts';
import { makeProvider } from '../provider/index.ts';
import { loadConfig } from '../config/loader.ts';
import { ensureWorkspace } from './utils.ts';

export function agentCommand(program: Command): void {
  program
    .command('agent')
    .description('Run the agent interactively or send a single message.')
    .option('-c, --config <path>', 'Path to config.json')
    .option('-m, --message <text>', 'Single message to process (non-interactive)')
    .option('-M, --model <model>', 'Model override')
    .action(async (opts: { config?: string; message?: string; model?: string }) => {
      const config = loadConfig(opts.config);
      const workspace = config.agent.workspacePath;
      ensureWorkspace(workspace);

      console.info(pc.magenta(`workspace path: ${workspace}`));

      const model = opts.model ?? config.agent.model;
      const provider = makeProvider(
        config.providers,
        config.agent.provider,
        model,
        config.agent.maxTokens,
        config.agent.temperature,
      );
      const bus = new MessageBus();

      const agentLoop = new AgentLoop({
        bus,
        provider,
        workspace,
        model,
        maxIterations: config.agent.maxToolIterations,
        contextWindowTokens: config.agent.contextWindowTokens,
        braveApiKey: config.tools.web.braveApiKey,
        webProxy: config.tools.web.proxy,
        execConfig: config.tools.exec,
        restrictToWorkspace: config.tools.restrictToWorkspace,
      });

      // Single-shot mode
      if (opts.message) {
        const result = await agentLoop.processDirect(opts.message);
        console.log(result);
        return;
      }

      // Interactive mode
      console.info(pc.green('nanobot interactive mode. Type your message, Ctrl+C to exit.'));

      const rl = createInterface({ input: process.stdin, output: process.stdout });

      const promptUser = () => {
        rl.question(pc.cyan('You: '), async (input) => {
          const text = input.trim();
          if (!text) {
            promptUser();
            return;
          }

          const onProgress = async (content: string, opts?: { toolHint?: boolean }) => {
            if (opts?.toolHint) {
              process.stdout.write(pc.dim(`  [${content}]\n`));
            } else {
              process.stdout.write(pc.dim(`  ${content}\n`));
            }
          };

          const result = await agentLoop.processDirect(
            text,
            'cli:interactive',
            'cli',
            'interactive',
            onProgress,
          );
          console.log(pc.bold('Bot:'), result);
          promptUser();
        });
      };

      rl.on('close', () => {
        agentLoop.stop();
        process.exit(0);
      });

      promptUser();
    });
}
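The interactive mode above drives a prompt loop by re-calling `rl.question` from its own callback, skipping blank input and exiting on close. The same control flow can be sketched without a TTY by injecting the input source, which makes the skip/stop behaviour testable (`promptLoop`, `Ask`, and `demo` are illustrative names, not from the module):

```typescript
// Prompt loop with an injected asker: empty inputs are skipped (re-prompt),
// a null input plays the role of the readline 'close' event and ends the loop.
type Ask = () => Promise<string | null>;

async function promptLoop(
  ask: Ask,
  handle: (text: string) => Promise<string>,
): Promise<string[]> {
  const replies: string[] = [];
  for (;;) {
    const input = await ask();
    if (input === null) break; // EOF / Ctrl+C
    const text = input.trim();
    if (!text) continue;       // blank line — just prompt again
    replies.push(await handle(text));
  }
  return replies;
}

async function demo(): Promise<string[]> {
  const inputs: Array<string | null> = ['hi', '  ', 'bye', null];
  let i = 0;
  return promptLoop(
    async () => inputs[i++] ?? null,
    async (t) => `echo:${t}`,
  );
}
```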
src/cli/commands.ts (16 lines, new file)
@@ -0,0 +1,16 @@
import { Command } from 'commander';
import { agentCommand } from './agent.ts';
import { gatewayCommand } from './gateway.ts';
import { onboardCommand } from './onboard.ts';

export function createCli(): Command {
  const program = new Command('nanobot')
    .description('nanobot — personal AI assistant')
    .version('1.0.0');

  onboardCommand(program);
  gatewayCommand(program);
  agentCommand(program);

  return program;
}
src/cli/gateway.ts (109 lines, new file)
@@ -0,0 +1,109 @@
import { Command } from 'commander';
import pc from 'picocolors';
import { AgentLoop } from '../agent/loop.ts';
import { MessageBus } from '../bus/queue.ts';
import { ChannelManager } from '../channels/manager.ts';
import { MattermostChannel } from '../channels/mattermost.ts';
import { CronService } from '../cron/service.ts';
import { HeartbeatService } from '../heartbeat/service.ts';
import { makeProvider } from '../provider/index.ts';
import { loadConfig } from '../config/loader.ts';
import { ensureWorkspace } from './utils.ts';

export function gatewayCommand(program: Command): void {
  program
    .command('gateway')
    .option('-c, --config <path>', 'Path to config.json')
    .description('Start the full gateway: Mattermost channel, agent loop, cron, and heartbeat.')
    .action(async (opts: { config?: string }) => {
      const config = loadConfig(opts.config);
      const workspace = config.agent.workspacePath;
      ensureWorkspace(workspace);

      console.info(pc.magenta(`workspace path: ${workspace}`));

      const provider = makeProvider(
        config.providers,
        config.agent.provider,
        config.agent.model,
        config.agent.maxTokens,
        config.agent.temperature,
      );
      const bus = new MessageBus();
      const channelManager = new ChannelManager(bus);

      // Cron service
      const cronService = new CronService(workspace, async (job) => {
        bus.publishInbound({
          channel: 'system',
          senderId: 'cron',
          chatId: `cli:cron_${job.id}`,
          content: job.payload.message || `Cron job "${job.name}" triggered.`,
          metadata: { cronJobId: job.id },
        });
      });

      const agentLoop = new AgentLoop({
        bus,
        provider,
        workspace,
        model: config.agent.model,
        maxIterations: config.agent.maxToolIterations,
        contextWindowTokens: config.agent.contextWindowTokens,
        braveApiKey: config.tools.web.braveApiKey,
        webProxy: config.tools.web.proxy,
        execConfig: config.tools.exec,
        cronService,
        restrictToWorkspace: config.tools.restrictToWorkspace,
        sendProgress: config.channels.sendProgress,
        sendToolHints: config.channels.sendToolHints,
      });

      // Mattermost
      if (config.channels.mattermost) {
        const mm = new MattermostChannel(bus, config.channels.mattermost);
        channelManager.register(mm);
      } else {
        console.warn(pc.yellow('[gateway] No Mattermost config found. Running without channels.'));
      }

      // Heartbeat
      let heartbeat: HeartbeatService | null = null;
      if (config.heartbeat.enabled) {
        heartbeat = new HeartbeatService({
          workspace,
          provider,
          model: config.agent.model,
          intervalMinutes: config.heartbeat.intervalMinutes,
          onExecute: async (tasks) => {
            const content =
              tasks.length > 0
                ? `Heartbeat tasks:\n${tasks.map((t, i) => `${i + 1}. ${t}`).join('\n')}`
                : 'Heartbeat tick — check for anything to do.';
            return agentLoop.processDirect(content, 'system:heartbeat', 'system', 'heartbeat');
          },
          onNotify: async (_result) => {
            // Result already delivered via processDirect / message tool
          },
        });
      }

      // Graceful shutdown
      const shutdown = () => {
        console.info('\n[gateway] Shutting down...');
        agentLoop.stop();
        channelManager.stopAll();
        heartbeat?.stop();
        cronService.stop();
        process.exit(0);
      };
      process.on('SIGINT', shutdown);
      process.on('SIGTERM', shutdown);

      console.info(pc.green('[gateway] Starting...'));
      cronService.start();
      heartbeat?.start();

      await Promise.all([agentLoop.run(), channelManager.startAll()]);
    });
}
src/cli/onboard.ts (69 lines, new file)
@@ -0,0 +1,69 @@
import { writeFileSync } from 'node:fs';
import { join } from 'node:path';
import { Command } from 'commander';
import pc from 'picocolors';
import { WORKSPACE_PATH } from '../config/constants.ts';
import { ensureWorkspace, resolvePath, checkWorkspaceEmpty, syncTemplates } from './utils.ts';

function logCreated(item: string) {
  console.info(pc.green(`  ✓ Created ${item}`));
}

export function onboardCommand(program: Command): void {
  program
    .command('onboard [path]')
    .description('Initialize a new nanobot workspace with config and templates')
    .action(async (rawPath?: string) => {
      try {
        // Create a minimal config template - users must fill in provider and model
        const defaultConfig = {
          providers: {},
          agent: {
            provider: '',
            model: '',
          },
        };

        const targetPath = resolvePath(rawPath ?? WORKSPACE_PATH);
        const configPath = join(targetPath, 'config.json');

        console.info(pc.blue('Initializing nanobot workspace...'));
        console.info(pc.dim(`Target path: ${targetPath}`));

        // Check if directory exists and is not empty
        checkWorkspaceEmpty(targetPath);

        // Create workspace directory
        ensureWorkspace(targetPath, true);
        logCreated('workspace directory');

        // Write default config
        writeFileSync(configPath, JSON.stringify(defaultConfig, null, 2), 'utf8');
        logCreated('config.json');

        // Sync templates
        const createdFiles = syncTemplates(targetPath);
        for (const file of createdFiles) {
          logCreated(file);
        }

        console.info();
        console.info(pc.green('nanobot workspace initialized successfully!'));
        console.info();
        console.info(pc.bold('Next steps:'));
        console.info(`  1. Edit ${pc.cyan(configPath)} to add your API keys`);
        console.info(
          `  2. Customize ${pc.cyan(join(targetPath, 'USER.md'))} and ${pc.cyan(join(targetPath, 'SOUL.md'))} with your preferences`,
        );
        console.info(`  3. Start chatting: ${pc.cyan('bun run nanobot agent')}`);
        console.info();
        console.info(`  -- For gateway mode:`);
        console.info(`  1. Edit ${pc.cyan(configPath)} to add your channel config (Mattermost)`);
        console.info(`  2. Connect your agent: ${pc.cyan('bun run nanobot gateway')}`);
        console.info();
      } catch (err) {
        console.error(pc.red(String(err)));
        process.exit(1);
      }
    });
}
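For reference, the `config.json` that `onboard` writes is just the `defaultConfig` object above serialized with two-space indentation; the provider and model fields are intentionally left blank for the user to fill in:

```json
{
  "providers": {},
  "agent": {
    "provider": "",
    "model": ""
  }
}
```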
src/cli/types.ts (4 lines, new file)
@@ -0,0 +1,4 @@
import type { Command } from 'commander';
import type { Config } from '../config/types.ts';

export type CommandHandler = (program: Command, config: Config, workspace: string) => void;
src/cli/utils.ts (89 lines, new file)
@@ -0,0 +1,89 @@
|
||||
import { existsSync, mkdirSync, readdirSync, readFileSync, writeFileSync } from 'node:fs';
|
||||
import { dirname, join, resolve } from 'node:path';
|
||||
import { fileURLToPath } from 'node:url';
|
||||
import { homedir } from 'node:os';
|
||||
import pc from 'picocolors';
|
||||
|
||||
export function resolvePath(raw: string): string {
|
||||
if (raw.startsWith('~/') || raw === '~') {
|
||||
return resolve(homedir(), raw.slice(2));
|
||||
}
|
||||
return resolve(raw);
|
||||
}
|
||||
|
||||
export function ensureWorkspace(rawPath: string, createIfMissing = false): string {
|
||||
const path = resolvePath(rawPath);
|
||||
if (!existsSync(path)) {
|
||||
if (createIfMissing) {
|
||||
mkdirSync(path, { recursive: true });
|
||||
} else {
|
||||
console.error(
|
||||
pc.red(`Workspace does not exist: ${path}\nRun 'nanobot onboard' to initialize.`),
|
||||
);
|
||||
process.exit(1);
|
||||
}
|
||||
}
|
||||
return path;
|
||||
}
|
||||
|
||||
export function syncTemplates(workspacePath: string): string[] {
  // Get project root relative to this file (src/cli/utils.ts)
  const currentFile = fileURLToPath(import.meta.url);
  const srcDir = dirname(currentFile);
  const projectRoot = resolve(srcDir, '..', '..');
  const templatesDir = resolve(projectRoot, 'templates');

  if (!existsSync(templatesDir)) {
    throw new Error(`Templates directory not found at ${templatesDir}`);
  }

  const created: string[] = [];

  function copyTemplate(src: string, dest: string) {
    if (existsSync(dest)) return;
    mkdirSync(dirname(dest), { recursive: true });
    const content = readFileSync(src, 'utf8');
    writeFileSync(dest, content, 'utf8');
    created.push(dest.slice(workspacePath.length + 1));
  }

  function copyDir(srcDir: string, destDir: string) {
    if (!existsSync(srcDir)) return;
    const entries = readdirSync(srcDir, { withFileTypes: true });
    for (const entry of entries) {
      const srcPath = join(srcDir, entry.name);
      const destPath = join(destDir, entry.name);
      if (entry.isDirectory()) {
        copyDir(srcPath, destPath);
      } else if (entry.name.endsWith('.md')) {
        copyTemplate(srcPath, destPath);
      }
    }
  }

  copyDir(templatesDir, workspacePath);

  // Create empty HISTORY.md
  const historyPath = join(workspacePath, 'memory', 'HISTORY.md');
  if (!existsSync(historyPath)) {
    mkdirSync(dirname(historyPath), { recursive: true });
    writeFileSync(historyPath, '# Conversation History\n\n', 'utf8');
    created.push('memory/HISTORY.md');
  }

  // Create skills directory
  const skillsPath = join(workspacePath, 'skills');
  if (!existsSync(skillsPath)) {
    mkdirSync(skillsPath, { recursive: true });
  }

  return created;
}

export function checkWorkspaceEmpty(path: string): void {
  if (!existsSync(path)) return;
  const entries = readdirSync(path);
  if (entries.length > 0) {
    throw new Error(pc.red(`Directory not empty: ${path}`));
  }
}
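`checkWorkspaceEmpty` treats a missing directory the same as an empty one; only an existing, non-empty directory is an error. A minimal standalone sketch of that rule (names here are hypothetical, and the picocolors styling is dropped):

```typescript
import { existsSync, mkdtempSync, readdirSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Same rule as checkWorkspaceEmpty: missing or empty passes, anything else throws.
function assertWorkspaceEmpty(path: string): void {
  if (!existsSync(path)) return;
  if (readdirSync(path).length > 0) {
    throw new Error(`Directory not empty: ${path}`);
  }
}

const dir = mkdtempSync(join(tmpdir(), 'nanobot-'));
assertWorkspaceEmpty(dir); // empty dir: ok
assertWorkspaceEmpty(join(dir, 'missing')); // nonexistent dir: ok

writeFileSync(join(dir, 'file.txt'), 'x', 'utf8');
let nonEmptyThrew = false;
try {
  assertWorkspaceEmpty(dir); // now non-empty
} catch {
  nonEmptyThrew = true;
}
```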
1 src/config/constants.ts Normal file
@@ -0,0 +1 @@
export const WORKSPACE_PATH = '~/.config/nanobot';
67 src/config/loader.ts Normal file
@@ -0,0 +1,67 @@
import { existsSync, mkdirSync, readFileSync, writeFileSync } from 'node:fs';
import { homedir } from 'node:os';
import { dirname, resolve } from 'node:path';
import pc from 'picocolors';
import { type Config, ConfigSchema } from './types.ts';

const DEFAULT_CONFIG_PATH = resolve(homedir(), '.config', 'nanobot', 'config.json');

export function getConfigPath(override?: string): string {
  return override ?? process.env['NANOBOT_CONFIG'] ?? DEFAULT_CONFIG_PATH;
}

export function loadConfig(configPath?: string): Config {
  const path = getConfigPath(configPath);

  if (!existsSync(path)) {
    console.error(pc.red(`Failed to load config from ${path}`));
    process.exit(1);
  }

  const raw = readFileSync(path, 'utf8');
  let json: unknown;
  try {
    json = JSON.parse(raw);
  } catch (error) {
    console.error(`Failed to parse config at ${path}`);
    throw error;
  }

  // Apply NANOBOT_ env var overrides before validation
  const merged = applyEnvOverrides(json as Record<string, unknown>);
  return ConfigSchema.parse(merged);
}

export function saveConfig(config: Config, configPath?: string): void {
  const path = getConfigPath(configPath);
  mkdirSync(dirname(path), { recursive: true });
  writeFileSync(path, JSON.stringify(config, null, 2), 'utf8');
}

/** Resolve `~` in workspace path to the real home directory. */
export function resolveWorkspacePath(raw: string): string {
  if (raw.startsWith('~/') || raw === '~') {
    return resolve(homedir(), raw.slice(2));
  }
  return resolve(raw);
}

function applyEnvOverrides(json: Record<string, unknown>): Record<string, unknown> {
  const out = structuredClone(json);

  const model = process.env['NANOBOT_MODEL'];
  if (model) {
    const agent = (out['agent'] as Record<string, unknown> | undefined) ?? {};
    agent['model'] = model;
    out['agent'] = agent;
  }

  const workspace = process.env['NANOBOT_WORKSPACE'];
  if (workspace) {
    const agent = (out['agent'] as Record<string, unknown> | undefined) ?? {};
    agent['workspacePath'] = workspace;
    out['agent'] = agent;
  }

  return out;
}
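`resolveWorkspacePath` is the single place a leading `~` gets expanded, so config values like the `'~/.config/nanobot'` default stay portable on disk. A self-contained sketch of the same expansion rule (re-declared here so it runs standalone):

```typescript
import { homedir } from 'node:os';
import { resolve } from 'node:path';

// Mirrors resolveWorkspacePath: expand a leading `~` to the home directory,
// otherwise resolve relative to the current working directory.
function expandTilde(raw: string): string {
  if (raw.startsWith('~/') || raw === '~') {
    return resolve(homedir(), raw.slice(2));
  }
  return resolve(raw);
}

const home = expandTilde('~/.config/nanobot');
const plain = expandTilde('/tmp/ws');
```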
136 src/config/types.ts Normal file
@@ -0,0 +1,136 @@
import { z } from 'zod';
import { WORKSPACE_PATH } from './constants.ts';

// ---------------------------------------------------------------------------
// Mattermost
// ---------------------------------------------------------------------------

export const MattermostDmConfigSchema = z.object({
  enabled: z.boolean().default(true),
  allowFrom: z.array(z.string()).default([]),
});
export type MattermostDmConfig = z.infer<typeof MattermostDmConfigSchema>;

export const MattermostConfigSchema = z.object({
  serverUrl: z.string(),
  token: z.string(),
  scheme: z.enum(['https', 'http']).default('https'),
  port: z.number().int().default(443),
  basePath: z.string().default(''),
  allowFrom: z.array(z.string()).default([]),
  groupPolicy: z.enum(['open', 'mention', 'allowlist']).default('mention'),
  groupAllowFrom: z.array(z.string()).default([]),
  dm: MattermostDmConfigSchema.default(() => ({ enabled: true, allowFrom: [] })),
  replyInThread: z.boolean().default(true),
});
export type MattermostConfig = z.infer<typeof MattermostConfigSchema>;

// ---------------------------------------------------------------------------
// Channels
// ---------------------------------------------------------------------------

export const ChannelsConfigSchema = z.object({
  mattermost: MattermostConfigSchema.optional(),
  sendProgress: z.boolean().default(true),
  sendToolHints: z.boolean().default(true),
});
export type ChannelsConfig = z.infer<typeof ChannelsConfigSchema>;

// ---------------------------------------------------------------------------
// Agent
// ---------------------------------------------------------------------------

export const AgentProviderSchema = z.enum([
  'anthropic',
  'openai',
  'google',
  'openrouter',
  'ollama',
]);
export type AgentProvider = z.infer<typeof AgentProviderSchema>;

export const AgentConfigSchema = z.object({
  provider: AgentProviderSchema,
  model: z.string(),
  workspacePath: z.string().default(WORKSPACE_PATH),
  maxTokens: z.number().int().default(4096),
  contextWindowTokens: z.number().int().default(65536),
  temperature: z.number().default(0.7),
  maxToolIterations: z.number().int().default(40),
});
export type AgentConfig = z.infer<typeof AgentConfigSchema>;

// ---------------------------------------------------------------------------
// Providers
// ---------------------------------------------------------------------------

export const ProviderConfigSchema = z.object({
  apiKey: z.string().optional(),
  apiBase: z.string().optional(),
});
export type ProviderConfig = z.infer<typeof ProviderConfigSchema>;

export const ProvidersConfigSchema = z.object({
  anthropic: ProviderConfigSchema.optional(),
  openai: ProviderConfigSchema.optional(),
  google: ProviderConfigSchema.optional(),
  openrouter: ProviderConfigSchema.optional(),
  ollama: ProviderConfigSchema.optional(),
});
export type ProvidersConfig = z.infer<typeof ProvidersConfigSchema>;

// ---------------------------------------------------------------------------
// Tools
// ---------------------------------------------------------------------------

export const ExecToolConfigSchema = z.object({
  timeout: z.number().int().default(120),
  pathAppend: z.string().optional(),
  denyPatterns: z.array(z.string()).default([]),
  restrictToWorkspace: z.boolean().default(false),
});
export type ExecToolConfig = z.infer<typeof ExecToolConfigSchema>;

export const WebToolConfigSchema = z.object({
  braveApiKey: z.string().optional(),
  proxy: z.string().optional(),
});
export type WebToolConfig = z.infer<typeof WebToolConfigSchema>;

export const ToolsConfigSchema = z.object({
  exec: ExecToolConfigSchema.default(() => ({
    timeout: 120,
    denyPatterns: [],
    restrictToWorkspace: false,
  })),
  web: WebToolConfigSchema.default(() => ({})),
  restrictToWorkspace: z.boolean().default(false),
});
export type ToolsConfig = z.infer<typeof ToolsConfigSchema>;

// ---------------------------------------------------------------------------
// Heartbeat
// ---------------------------------------------------------------------------

export const HeartbeatConfigSchema = z.object({
  enabled: z.boolean().default(false),
  intervalMinutes: z.number().int().default(30),
});
export type HeartbeatConfig = z.infer<typeof HeartbeatConfigSchema>;

// ---------------------------------------------------------------------------
// Root config
// ---------------------------------------------------------------------------

export const ConfigSchema = z.object({
  providers: ProvidersConfigSchema.default(() => ({})),
  agent: AgentConfigSchema,
  heartbeat: HeartbeatConfigSchema.default(() => ({ enabled: false, intervalMinutes: 30 })),
  channels: ChannelsConfigSchema.default(() => ({ sendProgress: true, sendToolHints: true })),
  tools: ToolsConfigSchema.default(() => ({
    exec: { timeout: 120, denyPatterns: [], restrictToWorkspace: false },
    web: {},
    restrictToWorkspace: false,
  })),
});
export type Config = z.infer<typeof ConfigSchema>;
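Because every section except `agent` carries a default, the smallest `config.json` that `ConfigSchema` accepts only needs a provider and a model. A sketch (the API key and model id are placeholders, not real values):

```json
{
  "providers": {
    "anthropic": { "apiKey": "YOUR_API_KEY" }
  },
  "agent": {
    "provider": "anthropic",
    "model": "your-model-id"
  }
}
```

`NANOBOT_MODEL` and `NANOBOT_WORKSPACE` would then override `agent.model` and `agent.workspacePath` at load time.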
216 src/cron/service.ts Normal file
@@ -0,0 +1,216 @@
import { existsSync, mkdirSync, readFileSync, statSync, writeFileSync } from 'node:fs';
import { dirname, join } from 'node:path';
import { CronExpressionParser } from 'cron-parser';
import { type CronJob, CronJobSchema, CronStoreSchema } from './types.ts';

export type OnJobCallback = (job: CronJob) => Promise<void>;

export class CronService {
  private _filePath: string;
  private _jobs: Map<string, CronJob> = new Map();
  private _timers: Map<string, ReturnType<typeof setTimeout>> = new Map();
  private _onJob: OnJobCallback;
  private _lastMtime = 0;

  constructor(workspacePath: string, onJob: OnJobCallback) {
    this._filePath = join(workspacePath, 'cron', 'jobs.json');
    this._onJob = onJob;
    this._load();
  }

  // ---------------------------------------------------------------------------
  // Persistence
  // ---------------------------------------------------------------------------

  private _load(): void {
    if (!existsSync(this._filePath)) return;
    try {
      const raw = readFileSync(this._filePath, 'utf8');
      const store = CronStoreSchema.parse(JSON.parse(raw));
      this._jobs = new Map(store.jobs.map((j) => [j.id, j]));
      this._lastMtime = statSync(this._filePath).mtimeMs;
    } catch {
      // start fresh on corrupt file
    }
  }

  private _save(): void {
    const store = CronStoreSchema.parse({ jobs: [...this._jobs.values()] });
    mkdirSync(dirname(this._filePath), { recursive: true });
    writeFileSync(this._filePath, JSON.stringify(store, null, 2), 'utf8');
    this._lastMtime = statSync(this._filePath).mtimeMs;
  }

  private _reloadIfChanged(): void {
    if (!existsSync(this._filePath)) return;
    const mtime = statSync(this._filePath).mtimeMs;
    if (mtime !== this._lastMtime) this._load();
  }

  // ---------------------------------------------------------------------------
  // Public API
  // ---------------------------------------------------------------------------

  listJobs(): CronJob[] {
    this._reloadIfChanged();
    return [...this._jobs.values()];
  }

  addJob(job: Omit<CronJob, 'state' | 'createdAtMs' | 'updatedAtMs'>): CronJob {
    const now = Date.now();
    const full = CronJobSchema.parse({ ...job, state: {}, createdAtMs: now, updatedAtMs: now });
    this._jobs.set(full.id, full);
    this._save();
    this._arm(full);
    return full;
  }

  removeJob(id: string): boolean {
    if (!this._jobs.has(id)) return false;
    this._clearTimer(id);
    this._jobs.delete(id);
    this._save();
    return true;
  }

  enableJob(id: string, enabled: boolean): boolean {
    const job = this._jobs.get(id);
    if (!job) return false;
    this._jobs.set(id, { ...job, enabled, updatedAtMs: Date.now() });
    this._save();
    if (enabled) this._arm(this._jobs.get(id)!);
    else this._clearTimer(id);
    return true;
  }

  async runJob(id: string): Promise<string> {
    const job = this._jobs.get(id);
    if (!job) return `Error: job ${id} not found`;
    await this._execute(job);
    return `Job ${id} executed.`;
  }

  status(): string {
    const jobs = this.listJobs();
    if (jobs.length === 0) return 'No cron jobs configured.';
    return jobs
      .map((j) => {
        const next = j.state.nextRunAtMs ? new Date(j.state.nextRunAtMs).toISOString() : 'N/A';
        return `[${j.enabled ? 'ON' : 'OFF'}] ${j.id} "${j.name}" next=${next}`;
      })
      .join('\n');
  }

  /** Arm all loaded jobs. Call once after construction. */
  start(): void {
    for (const job of this._jobs.values()) {
      if (job.enabled) this._arm(job);
    }
  }

  stop(): void {
    for (const id of this._timers.keys()) this._clearTimer(id);
  }

  // ---------------------------------------------------------------------------
  // Scheduling internals
  // ---------------------------------------------------------------------------

  private _arm(job: CronJob): void {
    this._clearTimer(job.id);
    if (!job.enabled) return;

    const delayMs = this._nextDelayMs(job);
    if (delayMs === null) return;

    const nextRunAtMs = Date.now() + delayMs;
    const updated: CronJob = {
      ...job,
      state: { ...job.state, nextRunAtMs },
      updatedAtMs: Date.now(),
    };
    this._jobs.set(job.id, updated);
    this._save();

    const timer = setTimeout(() => void this._tick(job.id), delayMs);
    this._timers.set(job.id, timer);
  }

  private async _tick(id: string): Promise<void> {
    this._timers.delete(id);
    const job = this._jobs.get(id);
    if (!job || !job.enabled) return;
    await this._execute(job);
    // Re-arm unless it was deleted or is a one-shot
    const current = this._jobs.get(id);
    if (current && !current.deleteAfterRun) this._arm(current);
  }

  private async _execute(job: CronJob): Promise<void> {
    try {
      await this._onJob(job);
      const updated: CronJob = {
        ...job,
        state: { ...job.state, lastRunAtMs: Date.now(), lastStatus: 'ok', lastError: null },
        updatedAtMs: Date.now(),
      };
      this._jobs.set(job.id, updated);
      if (job.deleteAfterRun) {
        this._jobs.delete(job.id);
      }
      this._save();
    } catch (err) {
      const updated: CronJob = {
        ...job,
        state: {
          ...job.state,
          lastRunAtMs: Date.now(),
          lastStatus: 'error',
          lastError: String(err),
        },
        updatedAtMs: Date.now(),
      };
      this._jobs.set(job.id, updated);
      this._save();
    }
  }

  private _nextDelayMs(job: CronJob): number | null {
    const { schedule } = job;
    const now = Date.now();

    if (schedule.kind === 'at') {
      const delay = schedule.atMs - now;
      return delay > 0 ? delay : null;
    }

    if (schedule.kind === 'every') {
      const lastRun = job.state.lastRunAtMs ?? 0;
      const elapsed = now - lastRun;
      const delay = Math.max(0, schedule.everyMs - elapsed);
      return delay;
    }

    if (schedule.kind === 'cron') {
      try {
        const interval = CronExpressionParser.parse(schedule.expr, { tz: schedule.tz });
        const next = interval.next();
        return next.getTime() - now;
      } catch {
        console.error(`[cron] Failed to parse expression for job ${job.id}: ${schedule.expr}`);
        return null;
      }
    }

    return null;
  }

  private _clearTimer(id: string): void {
    const timer = this._timers.get(id);
    if (timer !== undefined) {
      clearTimeout(timer);
      this._timers.delete(id);
    }
  }
}
53 src/cron/types.ts Normal file
@@ -0,0 +1,53 @@
import { z } from 'zod';

export const CronScheduleSchema = z.discriminatedUnion('kind', [
  z.object({ kind: z.literal('at'), atMs: z.number().int() }),
  z.object({ kind: z.literal('every'), everyMs: z.number().int() }),
  z.object({ kind: z.literal('cron'), expr: z.string(), tz: z.string().optional() }),
]);
export type CronSchedule = z.infer<typeof CronScheduleSchema>;

export const CronPayloadSchema = z.object({
  kind: z.enum(['system_event', 'agent_turn']).default('agent_turn'),
  message: z.string().default(''),
  deliver: z.boolean().default(false),
  channel: z.string().optional(),
  to: z.string().optional(),
});
export type CronPayload = z.infer<typeof CronPayloadSchema>;

export const CronJobStateSchema = z.object({
  nextRunAtMs: z.number().int().nullable().default(null),
  lastRunAtMs: z.number().int().nullable().default(null),
  lastStatus: z.enum(['ok', 'error', 'skipped']).nullable().default(null),
  lastError: z.string().nullable().default(null),
});
export type CronJobState = z.infer<typeof CronJobStateSchema>;

export const CronJobSchema = z.object({
  id: z.string(),
  name: z.string(),
  enabled: z.boolean().default(true),
  schedule: CronScheduleSchema,
  payload: CronPayloadSchema.default(() => ({
    kind: 'agent_turn' as const,
    message: '',
    deliver: false,
  })),
  state: CronJobStateSchema.default(() => ({
    nextRunAtMs: null,
    lastRunAtMs: null,
    lastStatus: null,
    lastError: null,
  })),
  createdAtMs: z.number().int().default(0),
  updatedAtMs: z.number().int().default(0),
  deleteAfterRun: z.boolean().default(false),
});
export type CronJob = z.infer<typeof CronJobSchema>;

export const CronStoreSchema = z.object({
  version: z.number().int().default(1),
  jobs: z.array(CronJobSchema).default([]),
});
export type CronStore = z.infer<typeof CronStoreSchema>;
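The scheduler's delay logic is per schedule kind: an `at` job only arms while its timestamp is still in the future, and an `every` job subtracts the time already elapsed since its last run so restarts don't reset the interval. A standalone sketch of that arithmetic (the `cron` kind is left out here since it needs cron-parser):

```typescript
type SimpleSchedule =
  | { kind: 'at'; atMs: number }
  | { kind: 'every'; everyMs: number };

// Mirrors CronService._nextDelayMs for the two simple schedule kinds.
function nextDelayMs(
  schedule: SimpleSchedule,
  lastRunAtMs: number | null,
  now: number,
): number | null {
  if (schedule.kind === 'at') {
    const delay = schedule.atMs - now;
    return delay > 0 ? delay : null; // past one-shots never arm
  }
  // 'every': fire immediately if the interval has already elapsed
  const elapsed = now - (lastRunAtMs ?? 0);
  return Math.max(0, schedule.everyMs - elapsed);
}

const now = 1_000_000;
const oneShot = nextDelayMs({ kind: 'at', atMs: now + 5_000 }, null, now);
const expired = nextDelayMs({ kind: 'at', atMs: now - 1 }, null, now);
const periodic = nextDelayMs({ kind: 'every', everyMs: 60_000 }, now - 45_000, now);
```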
148 src/heartbeat/service.ts Normal file
@@ -0,0 +1,148 @@
import { existsSync, readFileSync } from 'node:fs';
import { join } from 'node:path';
import type { LLMProvider, ModelMessage } from '../provider/index.ts';

const HEARTBEAT_TOOL = [
  {
    type: 'function' as const,
    function: {
      name: 'heartbeat_decision',
      description: 'Decide whether to act on this heartbeat tick.',
      parameters: {
        type: 'object',
        properties: {
          action: {
            type: 'string',
            enum: ['skip', 'run'],
            description: '"skip" to do nothing, "run" to execute the tasks.',
          },
          tasks: {
            type: 'array',
            items: { type: 'string' },
            description: 'List of tasks to perform (only when action is "run").',
          },
          reason: { type: 'string', description: 'Brief reason for the decision.' },
        },
        required: ['action'],
      },
    },
  },
];

export type HeartbeatExecuteCallback = (tasks: string[]) => Promise<string>;
export type HeartbeatNotifyCallback = (content: string) => Promise<void>;

export class HeartbeatService {
  private _workspace: string;
  private _provider: LLMProvider;
  private _model: string;
  private _intervalMs: number;
  private _onExecute: HeartbeatExecuteCallback;
  private _onNotify: HeartbeatNotifyCallback;
  private _timer: ReturnType<typeof setTimeout> | null = null;
  private _running = false;

  constructor(opts: {
    workspace: string;
    provider: LLMProvider;
    model: string;
    intervalMinutes: number;
    onExecute: HeartbeatExecuteCallback;
    onNotify: HeartbeatNotifyCallback;
  }) {
    this._workspace = opts.workspace;
    this._provider = opts.provider;
    this._model = opts.model;
    this._intervalMs = opts.intervalMinutes * 60 * 1000;
    this._onExecute = opts.onExecute;
    this._onNotify = opts.onNotify;
  }

  start(): void {
    if (this._running) return;
    this._running = true;
    this._schedule();
    console.info(`[heartbeat] Started (interval: ${this._intervalMs / 60000}min)`);
  }

  stop(): void {
    this._running = false;
    if (this._timer) {
      clearTimeout(this._timer);
      this._timer = null;
    }
  }

  async triggerNow(): Promise<void> {
    await this._tick();
  }

  private _schedule(): void {
    if (!this._running) return;
    this._timer = setTimeout(() => {
      void this._tick().finally(() => this._schedule());
    }, this._intervalMs);
  }

  private async _tick(): Promise<void> {
    const heartbeatContent = this._loadHeartbeatMd();
    if (!heartbeatContent) {
      console.debug('[heartbeat] No HEARTBEAT.md found, skipping tick.');
      return;
    }

    const now = new Date().toISOString();
    const messages: ModelMessage[] = [
      {
        role: 'system',
        content:
          'You are a heartbeat agent. Read the HEARTBEAT.md instructions and decide whether to act on this tick. Call heartbeat_decision with action="skip" or action="run".',
      },
      {
        role: 'user',
        content: `Current time: ${now}\n\n## HEARTBEAT.md\n\n${heartbeatContent}`,
      },
    ];

    const { response } = await this._provider.chatWithRetry({
      messages,
      tools: HEARTBEAT_TOOL,
      model: this._model,
      toolChoice: 'required',
    });

    const decision = response.toolCalls[0];
    if (!decision) {
      console.debug('[heartbeat] No decision tool call returned, skipping.');
      return;
    }

    const action =
      typeof decision.arguments['action'] === 'string' ? decision.arguments['action'] : 'skip';
    if (action !== 'run') {
      const reason =
        typeof decision.arguments['reason'] === 'string' ? decision.arguments['reason'] : '';
      console.debug(`[heartbeat] Decision: skip (${reason})`);
      return;
    }

    const tasks = Array.isArray(decision.arguments['tasks'])
      ? (decision.arguments['tasks'] as string[])
      : [];

    console.info(`[heartbeat] Decision: run (${tasks.length} tasks)`);

    try {
      const result = await this._onExecute(tasks);
      await this._onNotify(result);
    } catch (err) {
      console.error(`[heartbeat] Execution failed: ${String(err)}`);
    }
  }

  private _loadHeartbeatMd(): string | null {
    const path = join(this._workspace, 'HEARTBEAT.md');
    if (!existsSync(path)) return null;
    return readFileSync(path, 'utf8');
  }
}
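The tick prompt forwards `HEARTBEAT.md` verbatim, so its format is free-form; the model only needs to be able to decide skip vs run from it. A hypothetical example of what a user might put in that file:

```markdown
# Heartbeat tasks

- Before 09:00: summarise anything new in memory/HISTORY.md.
- On Fridays: remind me to file the weekly report.
- Otherwise: skip this tick.
```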
232 src/provider/index.ts Normal file
@@ -0,0 +1,232 @@
import { createAnthropic } from '@ai-sdk/anthropic';
import { createGoogleGenerativeAI } from '@ai-sdk/google';
import { createOpenAI } from '@ai-sdk/openai';
import { createOpenRouter } from '@openrouter/ai-sdk-provider';
import { type LanguageModel, type ModelMessage, generateText, stepCountIs } from 'ai';
import { createOllama } from 'ai-sdk-ollama';
import { jsonrepair } from 'jsonrepair';
import type { AgentProvider, ProvidersConfig } from '../config/types.ts';
import type { ChatOptions, LLMResponse, ToolDefinition } from './types.ts';

export type { LLMResponse, ToolDefinition };

// Re-export ModelMessage so callers don't need to import from 'ai' directly
export type { ModelMessage };

const TRANSIENT_MARKERS = [
  '429',
  'rate limit',
  '500',
  '502',
  '503',
  '504',
  'overloaded',
  'timeout',
  'timed out',
  'connection',
  'server error',
  'temporarily unavailable',
];

const RETRY_DELAYS_MS = [1000, 2000, 4000];

function isTransient(err: unknown): boolean {
  const msg = String(err).toLowerCase();
  return TRANSIENT_MARKERS.some((m) => msg.includes(m));
}

function sleep(ms: number): Promise<void> {
  return new Promise((r) => setTimeout(r, ms));
}

function repairArgs(raw: unknown): Record<string, unknown> {
  if (typeof raw === 'object' && raw !== null && !Array.isArray(raw)) {
    return raw as Record<string, unknown>;
  }
  if (typeof raw === 'string') {
    try {
      return JSON.parse(jsonrepair(raw)) as Record<string, unknown>;
    } catch {
      return {};
    }
  }
  return {};
}

/** Normalise a tool-call ID to a 9-char alphanumeric string for cross-provider compat. */
function shortId(id: string): string {
  if (/^[a-zA-Z0-9]{9}$/.test(id)) return id;
  let h = 0;
  for (let i = 0; i < id.length; i++) h = (Math.imul(31, h) + id.charCodeAt(i)) | 0;
  return Math.abs(h).toString(36).padStart(9, '0').slice(0, 9);
}

export class LLMProvider {
  private _providers: ProvidersConfig;
  private _provider: AgentProvider;
  private _defaultModel: string;
  private _maxTokens: number;
  private _temperature: number;

  constructor(
    providers: ProvidersConfig,
    provider: AgentProvider,
    defaultModel: string,
    maxTokens = 4096,
    temperature = 0.7,
  ) {
    this._providers = providers;
    this._provider = provider;
    this._defaultModel = defaultModel;
    this._maxTokens = maxTokens;
    this._temperature = temperature;
  }

  get defaultModel(): string {
    return this._defaultModel;
  }

  private _resolveModel(model: string): LanguageModel {
    switch (this._provider) {
      case 'anthropic': {
        const cfg = this._providers.anthropic;
        return createAnthropic({ apiKey: cfg?.apiKey, baseURL: cfg?.apiBase })(model);
      }
      case 'openai': {
        const cfg = this._providers.openai;
        return createOpenAI({ apiKey: cfg?.apiKey, baseURL: cfg?.apiBase })(model);
      }
      case 'google': {
        const cfg = this._providers.google;
        return createGoogleGenerativeAI({ apiKey: cfg?.apiKey, baseURL: cfg?.apiBase })(model);
      }
      case 'openrouter': {
        const cfg = this._providers.openrouter;
        return createOpenRouter({ apiKey: cfg?.apiKey, baseURL: cfg?.apiBase })(model);
      }
      case 'ollama': {
        const cfg = this._providers.ollama;
        return createOllama({ apiKey: cfg?.apiKey, baseURL: cfg?.apiBase })(model);
      }
    }
  }

  async chat(
    opts: ChatOptions,
  ): Promise<{ response: LLMResponse; responseMessages: ModelMessage[] }> {
    const model = this._resolveModel(opts.model ?? this._defaultModel);
    const maxTokens = opts.maxTokens ?? this._maxTokens;
    const temperature = opts.temperature ?? this._temperature;

    // Convert ToolDefinition[] → AI SDK tools record (without execute — we run tools ourselves)
    const aiTools =
      opts.tools && opts.tools.length > 0
        ? Object.fromEntries(
            opts.tools.map((t) => [
              t.function.name,
              {
                type: 'function' as const,
                description: t.function.description,
                parameters: t.function.parameters,
              },
            ]),
          )
        : undefined;

    try {
      let toolChoice: 'required' | 'none' | 'auto' = 'auto';
      if (opts.toolChoice === 'required' || opts.toolChoice === 'none')
        toolChoice = opts.toolChoice;
      const result = await generateText({
        model,
        messages: opts.messages as ModelMessage[],
        // biome-ignore lint/suspicious/noExplicitAny: AI SDK tools type is complex
        tools: aiTools as any,
        toolChoice,
        maxOutputTokens: maxTokens,
        temperature,
        stopWhen: stepCountIs(1),
      });

      const toolCalls = (result.toolCalls ?? []).map((tc) => ({
        id: shortId(tc.toolCallId),
        name: tc.toolName,
        arguments: repairArgs(tc.input),
      }));

      const llmResponse: LLMResponse = {
        content: result.text || null,
        toolCalls,
        finishReason: result.finishReason ?? 'stop',
        usage: {
          promptTokens: result.usage?.inputTokens,
          completionTokens: result.usage?.outputTokens,
          totalTokens: (result.usage?.inputTokens ?? 0) + (result.usage?.outputTokens ?? 0),
        },
      };

      // response.messages contains the assistant + tool-result messages to append to history
      const responseMessages = result.response.messages;

      return { response: llmResponse, responseMessages };
    } catch (err) {
      return {
        response: {
          content: `Error calling LLM: ${String(err)}`,
          toolCalls: [],
          finishReason: 'error',
          usage: {},
        },
        responseMessages: [],
      };
    }
  }

  async chatWithRetry(
    opts: ChatOptions,
  ): Promise<{ response: LLMResponse; responseMessages: ModelMessage[] }> {
    for (const delay of RETRY_DELAYS_MS) {
      const result = await this.chat(opts);
      if (result.response.finishReason !== 'error') return result;
      if (!isTransient(result.response.content)) return result;

      console.warn(
        `[provider] Transient error, retrying in ${delay}ms: ${String(result.response.content).slice(0, 120)}`,
      );
      await sleep(delay);
    }
    return this.chat(opts);
  }
}

export function makeProvider(
  providers: ProvidersConfig,
  provider: AgentProvider,
  model: string,
  maxTokens: number,
  temperature: number,
): LLMProvider {
  return new LLMProvider(providers, provider, model, maxTokens, temperature);
}

/** Build a tool-result message to append after executing a tool call. */
export function toolResultMessage(
  toolCallId: string,
  toolName: string,
  result: string,
): ModelMessage {
  return {
    role: 'tool',
    content: [
      {
        type: 'tool-result',
        toolCallId,
        toolName,
        output: { type: 'text', value: result },
      },
    ],
  };
}
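`chatWithRetry` retries only errors whose message matches a transient marker, backing off 1s/2s/4s between attempts and then making one final call. A self-contained sketch of that policy with a smaller marker list and an injectable sleep so it runs instantly (all names here are illustrative, not nanobot's API):

```typescript
const TRANSIENT = ['429', 'rate limit', '503', 'timeout'];

const isTransient = (msg: string): boolean => {
  const m = msg.toLowerCase();
  return TRANSIENT.some((t) => m.includes(t));
};

type Attempt<T> = { ok: boolean; error?: string; value?: T };

// Mirrors chatWithRetry: up to delays.length retried attempts plus one final call;
// permanent errors are returned immediately without sleeping.
async function withRetry<T>(
  attempt: () => Promise<Attempt<T>>,
  delays: number[],
  sleep: (ms: number) => Promise<void>,
): Promise<Attempt<T>> {
  for (const delay of delays) {
    const result = await attempt();
    if (result.ok) return result;
    if (!isTransient(result.error ?? '')) return result; // permanent: give up now
    await sleep(delay);
  }
  return attempt();
}

let calls = 0;
const flaky = async (): Promise<Attempt<string>> => {
  calls += 1;
  return calls < 3 ? { ok: false, error: '429 rate limit' } : { ok: true, value: 'done' };
};
const result = await withRetry(flaky, [1000, 2000, 4000], async () => {});
```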
41
src/provider/types.ts
Normal file
@@ -0,0 +1,41 @@
import { z } from 'zod';

export const ToolCallSchema = z.object({
  id: z.string(),
  name: z.string(),
  arguments: z.record(z.string(), z.unknown()),
});
export type ToolCall = z.infer<typeof ToolCallSchema>;

export const LLMResponseSchema = z.object({
  content: z.string().nullable(),
  toolCalls: z.array(ToolCallSchema).default([]),
  finishReason: z.string().default('stop'),
  usage: z
    .object({
      promptTokens: z.number().optional(),
      completionTokens: z.number().optional(),
      totalTokens: z.number().optional(),
    })
    .default(() => ({})),
});
export type LLMResponse = z.infer<typeof LLMResponseSchema>;

/** OpenAI function-calling tool definition shape passed to the LLM. */
export interface ToolDefinition {
  type: 'function';
  function: {
    name: string;
    description: string;
    parameters: Record<string, unknown>;
  };
}

export interface ChatOptions {
  messages: Array<Record<string, unknown>>;
  tools?: ToolDefinition[];
  model?: string;
  maxTokens?: number;
  temperature?: number;
  toolChoice?: 'auto' | 'required' | 'none';
}
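As a usage sketch, a tool's JSON Schema gets wrapped into the `ToolDefinition` shape above before being handed to the LLM. The `defineTool` helper and the `read_file` schema below are illustrative, not part of the source:

```typescript
// Mirrors the ToolDefinition interface above (inlined here so the
// sketch is self-contained).
interface ToolDefinition {
  type: 'function';
  function: { name: string; description: string; parameters: Record<string, unknown> };
}

// Hypothetical helper: wraps a name, description, and JSON Schema into
// the OpenAI function-calling shape.
function defineTool(
  name: string,
  description: string,
  parameters: Record<string, unknown>,
): ToolDefinition {
  return { type: 'function', function: { name, description, parameters } };
}

const readFileTool = defineTool('read_file', 'Read a file from the workspace', {
  type: 'object',
  properties: { path: { type: 'string' } },
  required: ['path'],
});
```

An array of such definitions is what `ChatOptions.tools` carries into each `chat()` call.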
151
src/session/manager.ts
Normal file
@@ -0,0 +1,151 @@
import { existsSync, mkdirSync, readdirSync, readFileSync, writeFileSync } from 'node:fs';
import { join } from 'node:path';
import type { SessionMessage, SessionMeta } from './types.ts';

const MAX_HISTORY_MESSAGES = 200;

export class Session {
  key: string;
  messages: SessionMessage[];
  createdAt: string;
  updatedAt: string;
  lastConsolidated: number;

  constructor(key: string) {
    const now = new Date().toISOString();
    this.key = key;
    this.messages = [];
    this.createdAt = now;
    this.updatedAt = now;
    this.lastConsolidated = 0;
  }

  /**
   * Return the slice of messages that haven't been consolidated yet,
   * aligned to start on a user turn.
   */
  getHistory(maxMessages = 0): SessionMessage[] {
    let slice = this.messages.slice(this.lastConsolidated);

    // Align to the first user message so we never start mid-turn
    const firstUser = slice.findIndex((m) => m.role === 'user');
    if (firstUser > 0) slice = slice.slice(firstUser);

    if (maxMessages > 0 && slice.length > maxMessages) {
      // Take the last N messages, aligned to a user turn
      let trimmed = slice.slice(-maxMessages);
      const firstU = trimmed.findIndex((m) => m.role === 'user');
      if (firstU > 0) trimmed = trimmed.slice(firstU);
      return trimmed;
    }

    return slice;
  }

  clear(): void {
    this.messages = [];
    this.lastConsolidated = 0;
    this.updatedAt = new Date().toISOString();
  }

  get meta(): SessionMeta {
    return {
      key: this.key,
      createdAt: this.createdAt,
      updatedAt: this.updatedAt,
      lastConsolidated: this.lastConsolidated,
    };
  }
}

export class SessionManager {
  private _dir: string;
  private _cache = new Map<string, Session>();

  constructor(workspacePath: string) {
    this._dir = join(workspacePath, 'sessions');
    mkdirSync(this._dir, { recursive: true });
  }

  private _keyToFilename(key: string): string {
    // Replace characters unsafe for filenames
    return key.replace(/[:/\\]/g, '_') + '.jsonl';
  }

  private _filePath(key: string): string {
    return join(this._dir, this._keyToFilename(key));
  }

  getOrCreate(key: string): Session {
    const cached = this._cache.get(key);
    if (cached) return cached;

    const session = this._load(key);
    this._cache.set(key, session);
    return session;
  }

  private _load(key: string): Session {
    const path = this._filePath(key);
    const session = new Session(key);

    if (!existsSync(path)) return session;

    const lines = readFileSync(path, 'utf8').split('\n').filter(Boolean);
    if (lines.length === 0) return session;

    // First line is the metadata record
    try {
      const meta = JSON.parse(lines[0]!) as SessionMeta;
      session.createdAt = meta.createdAt;
      session.updatedAt = meta.updatedAt;
      session.lastConsolidated = meta.lastConsolidated ?? 0;
    } catch {
      // corrupt first line — treat as new session
      return session;
    }

    // Remaining lines are messages
    for (const line of lines.slice(1)) {
      try {
        session.messages.push(JSON.parse(line) as SessionMessage);
      } catch {
        // skip corrupt message lines
      }
    }

    // Cap total messages to avoid unbounded growth
    if (session.messages.length > MAX_HISTORY_MESSAGES) {
      const excess = session.messages.length - MAX_HISTORY_MESSAGES;
      session.messages = session.messages.slice(excess);
      session.lastConsolidated = Math.max(0, session.lastConsolidated - excess);
    }

    return session;
  }

  save(session: Session): void {
    session.updatedAt = new Date().toISOString();
    const lines = [JSON.stringify(session.meta), ...session.messages.map((m) => JSON.stringify(m))];
    writeFileSync(this._filePath(session.key), lines.join('\n') + '\n', 'utf8');
  }

  invalidate(key: string): void {
    this._cache.delete(key);
  }

  listSessions(): SessionMeta[] {
    const files = readdirSync(this._dir).filter((f) => f.endsWith('.jsonl'));
    const metas: SessionMeta[] = [];
    for (const file of files) {
      try {
        const first = readFileSync(join(this._dir, file), 'utf8').split('\n')[0];
        if (first) metas.push(JSON.parse(first) as SessionMeta);
      } catch {
        // skip unreadable
      }
    }
    return metas;
  }
}
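The user-turn alignment rule in `getHistory` can be isolated as a small standalone function. This is a sketch restating the `findIndex`/`slice` logic above, not the class itself:

```typescript
type Msg = { role: string; content: unknown };

// Drop leading non-user messages so history never starts mid-turn,
// matching the alignment step in Session.getHistory. If there is no
// user message at all (firstUser === -1), the slice is returned as-is,
// just as in the original.
function alignToUserTurn(messages: Msg[]): Msg[] {
  const firstUser = messages.findIndex((m) => m.role === 'user');
  return firstUser > 0 ? messages.slice(firstUser) : messages;
}

const aligned = alignToUserTurn([
  { role: 'assistant', content: 'dangling reply from a consolidated turn' },
  { role: 'user', content: 'hi' },
  { role: 'assistant', content: 'hello' },
]);
// aligned starts at the user message
```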
19
src/session/types.ts
Normal file
@@ -0,0 +1,19 @@
import { z } from 'zod';

export const SessionMessageSchema = z.object({
  role: z.string(),
  content: z.unknown(),
  tool_calls: z.unknown().optional(),
  tool_call_id: z.string().optional(),
  name: z.string().optional(),
  timestamp: z.string().optional(),
});
export type SessionMessage = z.infer<typeof SessionMessageSchema>;

export const SessionMetaSchema = z.object({
  key: z.string(),
  createdAt: z.string(),
  updatedAt: z.string(),
  lastConsolidated: z.number().int().default(0),
});
export type SessionMeta = z.infer<typeof SessionMetaSchema>;
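Putting the two schemas together: the on-disk session file is JSONL, where the first line is a `SessionMeta` record and each subsequent line is one `SessionMessage`. A round-trip sketch (the key and timestamps below are illustrative values):

```typescript
const meta = {
  key: 'telegram:8281248569',
  createdAt: '2025-01-01T00:00:00.000Z',
  updatedAt: '2025-01-01T00:05:00.000Z',
  lastConsolidated: 0,
};
const messages = [
  { role: 'user', content: 'hello' },
  { role: 'assistant', content: 'hi there' },
];

// Serialize the way SessionManager.save does: meta first, then messages,
// one JSON object per line.
const fileText = [meta, ...messages].map((r) => JSON.stringify(r)).join('\n') + '\n';

// Reading back mirrors SessionManager._load: split, drop blanks, parse.
const lines = fileText.split('\n').filter(Boolean);
const parsedMeta = JSON.parse(lines[0]!);
```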
21
templates/AGENTS.md
Normal file
@@ -0,0 +1,21 @@
# Agent Instructions

You are a helpful AI assistant. Be concise, accurate, and friendly.

## Scheduled Reminders

Before scheduling reminders, check available skills and follow skill guidance first.
Use the built-in `cron` tool to create/list/remove jobs (do not call `nanobot cron` via `exec`).
Get USER_ID and CHANNEL from the current session (e.g., `8281248569` and `telegram` from `telegram:8281248569`).

**Do NOT just write reminders to MEMORY.md** — that won't trigger actual notifications.

## Heartbeat Tasks

`HEARTBEAT.md` is checked on the configured heartbeat interval. Use file tools to manage periodic tasks:

- **Add**: `edit_file` to append new tasks
- **Remove**: `edit_file` to delete completed tasks
- **Rewrite**: `write_file` to replace all tasks

When the user asks for a recurring/periodic task, update `HEARTBEAT.md` instead of creating a one-time cron reminder.
16
templates/HEARTBEAT.md
Normal file
@@ -0,0 +1,16 @@
# Heartbeat Tasks

This file is checked every 30 minutes by your nanobot agent.
Add tasks below that you want the agent to work on periodically.

If this file has no tasks (only headers and comments), the agent will skip the heartbeat.

## Active Tasks

<!-- Add your periodic tasks below this line -->


## Completed

<!-- Move completed tasks here or delete them -->
21
templates/SOUL.md
Normal file
@@ -0,0 +1,21 @@
# Soul

I am nanobot 🐈, a personal AI assistant.

## Personality

- Helpful and friendly
- Concise and to the point
- Curious and eager to learn

## Values

- Accuracy over speed
- User privacy and safety
- Transparency in actions

## Communication Style

- Be clear and direct
- Explain reasoning when helpful
- Ask clarifying questions when needed
15
templates/TOOLS.md
Normal file
@@ -0,0 +1,15 @@
# Tool Usage Notes

Tool signatures are provided automatically via function calling.
This file documents non-obvious constraints and usage patterns.

## exec — Safety Limits

- Commands have a configurable timeout (default 60s)
- Dangerous commands are blocked (rm -rf, format, dd, shutdown, etc.)
- Output is truncated at 10,000 characters
- `restrictToWorkspace` config can limit file access to the workspace

## cron — Scheduled Reminders

- Refer to the `cron` skill for usage.
49
templates/USER.md
Normal file
@@ -0,0 +1,49 @@
# User Profile

Information about the user to help personalize interactions.

## Basic Information

- **Name**: (your name)
- **Timezone**: (your timezone, e.g., UTC+8)
- **Language**: (preferred language)

## Preferences

### Communication Style

- [ ] Casual
- [ ] Professional
- [ ] Technical

### Response Length

- [ ] Brief and concise
- [ ] Detailed explanations
- [ ] Adaptive based on question

### Technical Level

- [ ] Beginner
- [ ] Intermediate
- [ ] Expert

## Work Context

- **Primary Role**: (your role, e.g., developer, researcher)
- **Main Projects**: (what you're working on)
- **Tools You Use**: (IDEs, languages, frameworks)

## Topics of Interest

-
-
-

## Special Instructions

(Any specific instructions for how the assistant should behave)

---

*Edit this file to customize nanobot's behavior for your needs.*
23
templates/memory/MEMORY.md
Normal file
@@ -0,0 +1,23 @@
# Long-term Memory

This file stores important information that should persist across sessions.

## User Information

(Important facts about the user)

## Preferences

(User preferences learned over time)

## Project Context

(Information about ongoing projects)

## Important Notes

(Things to remember)

---

*This file is automatically updated by nanobot when important information should be remembered.*