Created: March 25, 2026
Last commit: April 27, 2026

swarm-mcp

MCP server that lets multiple coding-agent sessions on the same machine discover each other and collaborate through a shared SQLite database.

Each session spawns its own swarm-mcp server process via stdio. They all share one SQLite file at ~/.swarm-mcp/swarm.db by default. No daemon needed.



Quick start

If you want a first-run walkthrough, start with docs/getting-started.md.

Install dependencies:

cd /path/to/swarm-mcp
bun install

Add the server to your coding agent using that host's MCP config format. Bun is the simplest dev/runtime path because the examples use bun run, but the built dist/*.js entrypoints also run under Node 20+ with better-sqlite3.

Codex (~/.codex/config.toml)

[mcp_servers.swarm]
command = "bun"
args = ["run", "/path/to/swarm-mcp/src/index.ts"]
cwd = "/path/to/swarm-mcp"

opencode (~/.config/opencode/opencode.json)

{
  "mcp": {
    "swarm": {
      "type": "local",
      "command": ["bun", "run", "/path/to/swarm-mcp/src/index.ts"],
      "enabled": true
    }
  }
}

Claude Code (~/.claude.json)

{
  "mcpServers": {
    "swarm": {
      "command": "bun",
      "args": ["run", "/path/to/swarm-mcp/src/index.ts"]
    }
  }
}

Tool names are usually namespaced by the client using the server name. Depending on the host you may see swarm_register, mcp__swarm__register, or other variants. Use whichever form your host exposes.

Call the swarm register tool first to join the swarm.

Install the bundled skills

Mounting the MCP server makes the swarm tools available, but agents still benefit from the bundled SKILL.md workflows. If your host supports installable skills (Claude Code, OpenCode, Codex with skills, etc.), install skills/swarm-mcp for coordination and skills/swarm-deepdive for forensic inspection. Symlinking is recommended over copying so that updates from git pull propagate automatically:

# In your consumer project root
mkdir -p .agents/skills .claude/skills
ln -s /absolute/path/to/swarm-mcp/skills/swarm-mcp .agents/skills/swarm-mcp
ln -s /absolute/path/to/swarm-mcp/skills/swarm-deepdive .agents/skills/swarm-deepdive
ln -s ../../.agents/skills/swarm-mcp .claude/skills/swarm-mcp
ln -s ../../.agents/skills/swarm-deepdive .claude/skills/swarm-deepdive

Or install globally for all projects:

mkdir -p ~/.claude/skills
ln -s /absolute/path/to/swarm-mcp/skills/swarm-mcp ~/.claude/skills/swarm-mcp
ln -s /absolute/path/to/swarm-mcp/skills/swarm-deepdive ~/.claude/skills/swarm-deepdive

Then invoke /swarm-mcp planner, /swarm-mcp implementer, etc., when starting role-specialized sessions, or /swarm-deepdive for investigations. Full per-host install paths and copy-based alternatives live in docs/install-skill.md.

Further reading


MCP server vs swarm-server

The TypeScript swarm-mcp process is the stdio MCP server used by coding-agent hosts. It is enough for local multi-agent coordination through tools, resources, prompts, and the shared SQLite database.

The Rust apps/swarm-server daemon is a separate desktop/mobile control plane. It serves swarm-ui over a local Unix socket, exposes HTTPS/WSS on port 5444 for paired iOS/iPadOS clients, manages PTYs, and reads the same swarm.db. It is not required for the basic MCP setup above. See docs/swarm-server.md.


How it works

All sessions read and write to ~/.swarm-mcp/swarm.db by default using WAL mode, auto-vacuum, and a 3s busy timeout. Bun uses bun:sqlite; Node uses better-sqlite3.

Set SWARM_DB_PATH before launching the server if you want a different database location.
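That resolution order can be sketched in a few lines (an illustrative sketch of the documented behavior; resolveDbPath is not an exported helper):

```typescript
import { homedir } from "node:os";
import { join } from "node:path";

// SWARM_DB_PATH wins when set; otherwise fall back to the documented default.
function resolveDbPath(env: Record<string, string | undefined> = process.env): string {
  return env.SWARM_DB_PATH ?? join(homedir(), ".swarm-mcp", "swarm.db");
}
```

Set the variable in the environment of whatever launches the server process, e.g. the env block of your host's MCP config.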

When you call register, the server starts a 10s heartbeat and a 5s notification poller.

Registration fields

The register tool accepts these parameters. Only directory is required.

| Field | Required | Description |
| --- | --- | --- |
| directory | Yes | The live working directory for the current session. |
| scope | No | Shared swarm boundary. Sessions in the same scope can see each other; different scopes are different swarms. Defaults to the detected git root, or to directory when no git root exists. Use a new scope only for a separate swarm; do not split frontend/backend inside one repo with scope. Use team: label tokens for that. |
| file_root | No | Canonical base path for resolving relative file paths in annotate, lock_file, and task files. Useful when disposable worktrees should share one logical file tree. |
| label | No | Free-form identity text. Recommended convention: machine-readable space-separated tokens like provider:codex-cli role:planner. The role: token is optional; if missing, the session is treated as a generalist. |
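The label token convention lends itself to mechanical parsing. A hypothetical sketch (parseLabel is not a shipped helper) of how a session might read a peer's label:

```typescript
// Parse a space-separated token label like "provider:codex-cli role:planner"
// into a map. A missing role: token means the session is a generalist.
function parseLabel(label: string): Record<string, string> {
  const fields: Record<string, string> = {};
  for (const token of label.trim().split(/\s+/)) {
    const i = token.indexOf(":");
    if (i > 0) fields[token.slice(0, i)] = token.slice(i + 1);
  }
  return fields;
}

const peer = parseLabel("provider:codex-cli role:planner team:frontend");
const role = peer.role ?? "generalist";
```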

Task features

Tasks support several features for building autonomous DAG-based workflows:

| Feature | Description |
| --- | --- |
| priority | Integer (default 0). Higher = more urgent. list_tasks returns tasks sorted by priority descending. Implementers claim the highest-priority open task first. |
| depends_on | Array of task IDs. A task with unmet dependencies starts as blocked and auto-transitions to open when all deps reach done. If any dependency fails, downstream tasks are auto-cancelled. |
| idempotency_key | Unique string. If a task with this key already exists, request_task returns the existing task instead of creating a duplicate. Essential for crash-safe plan retries. |
| parent_task_id | Optional parent task ID for tree-structured work tracking. |
| approval_required | If true, the task starts in approval_required status and must be approved via approve_task before work begins. Use this for true approval gates, not routine code review. |

Task statuses: open, claimed, in_progress, done, failed, cancelled, blocked, approval_required.
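The dependency rules above can be modeled as a small settling loop (an illustrative model of the documented transitions, using only a subset of the real statuses):

```typescript
type Status = "open" | "blocked" | "done" | "failed" | "cancelled";
interface Task { id: string; status: Status; depends_on: string[] }

// Apply the documented rules until nothing changes:
// blocked -> open when every dependency is done;
// blocked -> cancelled when any dependency failed or was cancelled.
function settle(tasks: Map<string, Task>): void {
  let changed = true;
  while (changed) {
    changed = false;
    for (const t of tasks.values()) {
      if (t.status !== "blocked") continue;
      const deps = t.depends_on.map((id) => tasks.get(id)!);
      if (deps.every((d) => d.status === "done")) {
        t.status = "open";
        changed = true;
      } else if (deps.some((d) => d.status === "failed" || d.status === "cancelled")) {
        t.status = "cancelled"; // cancellation cascades to downstream tasks
        changed = true;
      }
    }
  }
}
```

Note how a failure cascades: a task blocked on a cancelled task is itself cancelled on a later pass of the loop.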

Session resets and prompt compaction

If a host compacts context, starts a fresh window, or loses the previous bootstrap, rejoin the swarm the same way:

  1. Call register again.
  2. Rehydrate from poll_messages, list_tasks, and list_instances.
  3. For planners, also check kv_get("owner/planner") and kv_get("plan/latest").

The durable coordination state lives in the shared database, not in repeated per-tool prompt text.


Auto-cleanup

| Data | TTL |
| --- | --- |
| Stale instances (no heartbeat) | 30 seconds |
| Messages | 1 hour |
| Completed/failed/cancelled tasks | 24 hours |
| Non-lock context annotations | 24 hours |
| Events | 24 hours |

When a session expires, stale claimed or in-progress tasks are released back to open and that session's file locks are removed.

Non-lock annotations are cleaned up by TTL, while locks stay exclusive and are cleared when the owning instance goes stale or deregisters.
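The TTL sweep reduces to a simple age check per row kind. A sketch of that predicate, using the TTLs from the table above (names are illustrative, not the server's schema):

```typescript
// TTLs from the auto-cleanup table, in milliseconds.
const TTL_MS = {
  instance: 30_000,       // stale instances (no heartbeat)
  message: 3_600_000,     // messages: 1 hour
  task: 86_400_000,       // completed/failed/cancelled tasks: 24 hours
  annotation: 86_400_000, // non-lock context annotations: 24 hours
  event: 86_400_000,      // events: 24 hours
} as const;

function isExpired(kind: keyof typeof TTL_MS, lastTouchedMs: number, nowMs: number): boolean {
  return nowMs - lastTouchedMs > TTL_MS[kind];
}
```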


Tools

Instance registry

| Tool | Description |
| --- | --- |
| register | Join the swarm. Starts the heartbeat and notification poller. See Registration fields. |
| deregister | Leave the swarm gracefully. Releases tasks and locks. |
| list_instances | List all live instances. |
| remove_instance | Forcefully remove another instance. Releases its tasks and locks. |
| whoami | Get this instance's swarm ID. |

Messaging

| Tool | Description |
| --- | --- |
| send_message | Send a direct message to a specific instance by ID. |
| broadcast | Message all other instances in the swarm. |
| poll_messages | Read unread messages and mark them as read. |
| wait_for_activity | Block until new messages, task changes, KV changes, or instance changes arrive. Use as an idle loop for autonomous agents. |
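wait_for_activity supports an event-driven idle loop instead of tight polling. A hypothetical sketch with the two tool calls injected as plain functions (the real calls go through your MCP client, and the payload shapes here are assumptions):

```typescript
interface SwarmTools {
  waitForActivity(): Promise<{ kind: "message" | "task" | "kv" | "instance" }>;
  pollMessages(): Promise<string[]>;
}

// Idle loop: block until something happens, drain unread messages, repeat.
async function idleLoop(tools: SwarmTools, rounds: number): Promise<string[]> {
  const seen: string[] = [];
  for (let i = 0; i < rounds; i++) {
    const activity = await tools.waitForActivity();
    if (activity.kind === "message") seen.push(...(await tools.pollMessages()));
  }
  return seen;
}
```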

Task delegation

| Tool | Description |
| --- | --- |
| request_task | Post a task (types: review, implement, fix, test, research, other). Use review for routine code-review handoff. Supports priority, depends_on, idempotency_key, parent_task_id, and approval_required. |
| request_task_batch | Create multiple tasks atomically in a single transaction. Supports $N references (1-indexed) for intra-batch dependencies. |
| claim_task | Start work on a task: assigns it and transitions it to in_progress in one call. Prevents double-claiming, and refuses to proceed while you have unread messages until you call poll_messages (or explicitly override). Also accepts tasks pre-assigned to you (status=claimed). |
| update_task | Move a task to a terminal status (done, failed, cancelled). Auto-releases the actor's locks on the task's files. Attach a result when useful. |
| approve_task | Approve a task in approval_required status. Transitions it to open/claimed (or blocked if deps are unmet). |
| get_task | Get full details of a task. |
| list_tasks | Filter tasks by status, assignee, or requester. Sorted by priority (highest first). |

Shared context and file locking

| Tool | Description |
| --- | --- |
| annotate | Share findings, warnings, bugs, notes, or todos about a file. |
| lock_file | Acquire an exclusive file lock and read peer annotations on the file in one call. Locks auto-release on terminal update_task. |
| unlock_file | Release a file lock early (before the task as a whole completes). |
| search_context | Search annotations by file path or content. |

Key-value store

| Tool | Description |
| --- | --- |
| kv_get | Get a value by key. |
| kv_set | Set a key-value pair visible to all instances. |
| kv_append | Atomically append a JSON value to a KV array. |
| kv_delete | Delete a key. |
| kv_list | List keys, optionally filtered by prefix. |

CLI

The same swarm-mcp binary exposes a non-MCP CLI that talks directly to ~/.swarm-mcp/swarm.db. Use it from contexts that cannot speak MCP: shell scripts, helper scripts an agent invokes (e.g. a test harness or CLI referee), cron jobs, CI, an ad-hoc terminal for inspection/debugging, or to control a running swarm-ui app.

Inside an MCP-enabled agent session, prefer the MCP tools for swarm coordination primitives (register, messages, tasks, locks, KV). The CLI is primarily for scripts, operator terminals, and the swarm-ui control surface.

Setup helper:

swarm-mcp init --dir /path/to/project   # write .mcp.json and copy the bundled skills
swarm-mcp init --no-skills              # write only the MCP config

init writes a project .mcp.json entry that runs npx -y swarm-mcp and, unless --no-skills is passed, copies skills/swarm-mcp and skills/swarm-deepdive into .claude/skills/. Manual host-specific MCP configs are still useful when your host does not read .mcp.json or you want to run from a local clone.
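Configuring the same entry by hand looks roughly like this (.mcp.json in the project root; the exact entry init generates may differ):

```json
{
  "mcpServers": {
    "swarm": {
      "command": "npx",
      "args": ["-y", "swarm-mcp"]
    }
  }
}
```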

Inspection:

swarm-mcp inspect                    # unified dump of instances, tasks, kv, locks, recent messages
swarm-mcp inspect --scope /path      # pin to an explicit scope
swarm-mcp messages --from <who>      # peek (does not mark read)
swarm-mcp kv list --prefix pixel:
swarm-mcp kv get pixel:turn

Writes (require identity — pass --as <uuid | prefix | unique-label-substring> or set SWARM_MCP_INSTANCE_ID; falls back to the sole live instance in scope):

swarm-mcp send --to <who> "message text"
swarm-mcp broadcast "status update"
swarm-mcp kv set  <key> <value>
swarm-mcp kv append <key> <json-value>
swarm-mcp kv del  <key>
swarm-mcp lock    <file> --note "why"
swarm-mcp unlock  <file>

Swarm UI control:

swarm-mcp ui spawn /path/to/repo --harness codex --role planner
swarm-mcp ui prompt --target role:planner "check the failing tests"
swarm-mcp ui move --target bound:<instance-id> --x 120 --y 80
swarm-mcp ui organize --kind grid
swarm-mcp ui list

These commands enqueue work for a running swarm-ui app to claim and execute. If no desktop app is running, commands remain pending until one starts.

Notes:

  • swarm-mcp ui spawn, ui prompt, ui move, and ui organize wait up to 5 seconds by default for the desktop app to claim + complete the command. Pass --wait 0 to return immediately after enqueue.
  • ui spawn accepts --harness claude, --harness codex, or --harness opencode; omit --harness for a plain shell.
  • Use swarm-mcp ui list and swarm-mcp ui get <id> to inspect queued, running, completed, or failed UI commands.
  • --target accepts bound:<instance-id>, instance:<instance-id>, pty:<pty-id>, or a bare instance / PTY reference. Bare instance refs resolve by full UUID, unique UUID prefix, or unique label substring in scope. Bare PTY refs resolve by full PTY id, unique PTY id prefix, or a unique substring of the PTY command.
  • ui move persists layout into the shared ui/layout KV entry for the target scope, so changes survive refreshes and can be driven from either the desktop UI or the CLI.
  • ui organize currently supports only --kind grid.
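The bare-reference resolution order for --target can be sketched as follows (an illustrative model; resolveInstanceRef is not part of the CLI):

```typescript
interface Instance { id: string; label: string }

// Resolve a bare instance ref: full UUID first, then unique UUID prefix,
// then unique label substring. Ambiguous refs resolve to nothing.
function resolveInstanceRef(ref: string, instances: Instance[]): Instance | undefined {
  const exact = instances.find((i) => i.id === ref);
  if (exact) return exact;
  const byPrefix = instances.filter((i) => i.id.startsWith(ref));
  if (byPrefix.length === 1) return byPrefix[0];
  const byLabel = instances.filter((i) => i.label.includes(ref));
  return byLabel.length === 1 ? byLabel[0] : undefined;
}
```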

State, write, and UI subcommands accept --json for machine-readable output where shown by swarm-mcp help.

Canonical helper-script pattern — a harness the agent invokes to do validation + state update + handoff in one shot:

// harness.mjs — run as `node harness.mjs <partner-id>` by an agent
import { execFileSync } from "node:child_process";

const partner = process.argv[2]; // swarm instance ID of the peer
const me = process.env.SWARM_MCP_INSTANCE_ID;
const scope = process.env.SWARM_MCP_SCOPE;
// ... validate and write artifacts, then compute the next turn value ...
const next = { holder: partner }; // hypothetical payload shape
execFileSync("swarm-mcp", ["kv", "set", "turn", JSON.stringify(next), "--scope", scope, "--as", me]);
execFileSync("swarm-mcp", ["send", "--to", partner, "your turn", "--scope", scope, "--as", me]);

Security note: --as trusts the caller. The CLI will write as any live instance. Do not expose this binary to untrusted callers — the security model is the same as the underlying shared SQLite file.


Resources

The server exposes 4 MCP resources. swarm://inbox, swarm://tasks, and swarm://instances are refreshed by the background poller when the host supports resource update notifications.

| URI | Description |
| --- | --- |
| swarm://inbox | Unread messages for this instance. |
| swarm://tasks | Tasks grouped by status, including open, claimed, in-progress, blocked, approval-required, done, failed, and cancelled. |
| swarm://instances | All active instances. |
| swarm://context?file=... | Annotations and locks for a specific file. |

Prompts

The server exposes MCP prompts. Some hosts surface them directly, while others only expose tools and resources.

| Prompt | Purpose |
| --- | --- |
| setup (often shown as swarm:setup) | Guides the agent through registration: call register, poll_messages, list_tasks, then summarize swarm ID, active sessions, role labels, open tasks, and coordination risks. |
| protocol (often shown as swarm:protocol) | Applies the recommended coordination workflow for the session: check before editing, lock while editing, use annotate for findings, broadcast for updates, inspect role: labels when choosing collaborators. |

Set up AGENTS.md

For autonomous collaboration, add directives to your global or project AGENTS.md, or to the equivalent host instruction file.

Pick the version that matches your workflow:

| Workflow | File | Use when |
| --- | --- | --- |
| Generalist | docs/generic-AGENTS.md | Every session does the same thing, no role specialization. |
| Planner | docs/agents-planner.md | This session plans work, delegates to implementers, and reviews results. |
| Implementer | docs/agents-implementer.md | This session claims tasks, edits code, and sends work back for review. |

For role/team conventions and multi-team workflows, see docs/roles-and-teams.md.

If your host exposes MCP prompts, you can also use the built-in protocol prompt, often shown as swarm:protocol, to pull the workflow into a session on demand.

Installable Skills

This repo ships reusable skills at skills/swarm-mcp and skills/swarm-deepdive.

Use swarm-mcp when your host supports installable SKILL.md workflows and you want agents to learn the swarm protocol more reliably. Invoke role-specific workflows with /swarm-mcp planner, /swarm-mcp implementer, /swarm-mcp reviewer, or /swarm-mcp researcher. Use swarm-deepdive for postmortems and direct swarm.db/server-log inspection. For install locations, see docs/install-skill.md.

Use skills in addition to minimal always-on instructions, not instead of them. A skill is a playbook; AGENTS.md is still the best place for ambient rules like "register early" and "check locks before editing."

The skills do not mount the MCP server for you. They assume the swarm MCP tools are already available in the session.


Troubleshooting

Sessions can't see each other. Check that both sessions registered with the same scope (or both defaulted to the same git root). Verify they are using the same database path (~/.swarm-mcp/swarm.db by default). Run list_instances in both sessions.

Tools aren't available after config change. Most hosts only load MCP server changes at startup. Restart the application or start a fresh session after editing the MCP config.

File locks are stuck. Stale locks are cleared automatically when the owning instance's heartbeat expires (30s). If you need to clear them manually, delete the row from the context table in the SQLite database, or restart the stuck session.

Inspecting the database directly. The database is a standard SQLite file at ~/.swarm-mcp/swarm.db. You can open it with any SQLite client (bun itself, sqlite3, DB Browser for SQLite, etc.) to inspect instances, tasks, messages, and context.

Wrong absolute path in server command. The bun run command needs an absolute path to src/index.ts. Relative paths may resolve differently depending on how the host launches the process.


Security

All sessions on the same machine share one SQLite file. Any process running as the same OS user can read and write to it. There is no authentication or authorization between sessions.

This is intentional for a local development tool. Do not use swarm-mcp across trust boundaries or expose the database to untrusted users.


License

MIT