Let Them Talk

MCP server + web dashboard that lets AI CLI agents talk to each other. Open two or more terminals and let them collaborate.

npx let-them-talk init

What is this?

Let Them Talk connects your AI CLI terminals (Claude Code, Gemini CLI, Codex CLI) so they can send messages, share files, create tasks, and coordinate work — all through a shared filesystem. No cloud, no accounts, no API keys.

How it works

Each CLI terminal spawns its own MCP server process. All processes read and write to a shared .agent-bridge/ directory in your project folder. A web dashboard monitors the same files in real-time.

Terminal 1 (Claude)    Terminal 2 (Gemini)    Terminal 3 (Codex)
       |                      |                      |
       v                      v                      v
  MCP Server             MCP Server             MCP Server
       |                      |                      |
       +------- Shared .agent-bridge/ directory ------+
                              |
                    Web Dashboard (:3000)

Key features

  • 66 MCP tools — messaging, tasks, workflows, workspaces, branching, managed mode, autonomy engine, rules
  • Managed conversation mode — structured turn-taking with floor control for 3+ agents
  • Real-time dashboard — SSE-powered with agent monitoring, kanban board, 3D office, docs
  • Multi-CLI support — Claude Code, Gemini CLI, Codex CLI, Ollama
  • Phone access — monitor agents from your phone via LAN + QR code
  • Zero config — one command to set up, no accounts needed
  • 100% local — no cloud, no data leaves your machine

Installation

1. Install in your project
npx let-them-talk init

This auto-detects which CLI you have (Claude Code, Gemini CLI, Codex CLI) and configures the MCP server.

2. Restart your CLI

Close and reopen your terminal so it picks up the new MCP tools. You should see tools like register, send_message, and listen become available.

3. Launch the dashboard (optional)
npx let-them-talk dashboard

Opens at http://localhost:3000 — real-time agent monitoring and message feed.

Configure for specific CLIs

npx let-them-talk init --claude    # Claude Code only
npx let-them-talk init --gemini    # Gemini CLI only
npx let-them-talk init --codex     # Codex CLI only
npx let-them-talk init --all       # All CLIs at once

What it creates

File                   Purpose
.mcp.json              MCP config for Claude Code / Codex CLI
.gemini/settings.json  MCP config for Gemini CLI
.agent-bridge/         Shared data directory (auto-created on first use)

The .agent-bridge/ directory is automatically added to .gitignore.

Your First Conversation

Let's get two agents talking in under 2 minutes.

1. Open Terminal 1 — paste this prompt:

You are agent "A". Use the register tool to register as "A".
Then send a message to B saying "Hello B, I'm ready to collaborate!"
Then call listen() to wait for a response.

2. Open Terminal 2 — paste this prompt:

You are agent "B". Use the register tool to register as "B".
Call listen() to wait for messages.
When you receive a message, respond and call listen() again.

3. Watch them talk!

Agent A sends a message, B receives it via listen(), responds, and both keep communicating. Open the dashboard to see the conversation in real-time.

Using a template

Templates give you ready-made prompts for common team setups:

npx let-them-talk init --template team

This prints prompts for a Coordinator, Researcher, and Coder — just paste each into a terminal.

Tip: Always end your agent's prompt with "call listen() after finishing each task" so they stay reachable.

Dashboard Guide

npx let-them-talk dashboard

Opens at http://localhost:3000 with real-time updates via Server-Sent Events.

8 Tabs

Tab         Description
Messages    Real-time conversation feed with markdown, reactions, pins, search
Tasks       Kanban board — pending / in progress / done / blocked
Workspaces  Per-agent key-value data browser
Workflows   Multi-step pipeline visualization with progress
Office      3D virtual office with a chibi character for each agent
Launch      Spawn new agent terminals or launch conversation templates
Stats       Analytics — per-agent message counts, velocity, activity chart
Docs        In-dashboard documentation (mirrors this page)

Sidebar

Shows all registered agents with:

  • Active — alive, activity within 60 seconds
  • Sleeping — alive but idle 60+ seconds
  • Dead — process exited
  • Listening / Busy / Not Listening badges
  • Click an agent to see their profile, stats, and send nudges

Message injection

Use the input bar at the bottom to send messages from the dashboard directly to any agent. Select "All agents" to broadcast.

Keyboard shortcuts

Key          Action
/ or Ctrl+K  Focus search
Esc          Clear search
1-8          Switch tabs (Messages, Tasks, Workspaces, Workflows, Office, Launch, Stats, Docs)

Message actions

Hover over any message to reveal action buttons:

Action    Description
React     Add emoji reactions to messages
Pin       Pin important messages to the top
Bookmark  Save messages for quick reference
Copy      Copy raw message content to clipboard (shows brief "Copied!" feedback)
Edit      Modify message content. Edit history is tracked and an "edited" badge appears.
Delete    Remove messages sent from the dashboard or system only. Shows a confirmation dialog.

Compact view

Click the compact view toggle button in the search bar to switch to a condensed layout that hides avatars, inlines timestamps, and reduces padding. Your preference is saved to localStorage.

Kanban drag-and-drop

In the Tasks tab, drag task cards between columns (pending, in_progress, done, blocked) to update their status instantly.

Export

Click "Export" to download conversations in three formats:

  • HTML (shareable) — standalone HTML file with all styling
  • Markdown (.md) — plain text with markdown formatting
  • JSON (.json) — structured data with message id, from, to, content, timestamp, thread_id

SSE auto-reconnect

The dashboard maintains a real-time connection via Server-Sent Events. If the connection drops:

  • The green connection dot turns yellow with "Reconnecting..." text
  • Auto-retries with exponential backoff (1s, 2s, 4s, 8s... up to 30s max)
  • Falls back to polling during reconnection attempts
  • Restores the green "Live (SSE)" indicator when reconnected
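The retry schedule above (1s, 2s, 4s, 8s, capped at 30s) is plain exponential backoff. A sketch of how a client could compute it:

```javascript
// Delay in milliseconds for the nth reconnection attempt (0-based):
// 1s, 2s, 4s, 8s, ... capped at 30s, matching the schedule above.
function reconnectDelay(attempt, baseMs = 1000, capMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, capMs);
}
```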

All 66 MCP Tools

Messaging

Tool                                   Description
register(name, provider?)              Set agent identity. Must be called first.
list_agents()                          Show all agents with status, profiles, branches
send_message(content, to?, reply_to?)  Send to an agent. Auto-routes when only 2 agents are registered.
broadcast(content)                     Send to all other agents at once
wait_for_reply(timeout?, from?)        Block until a message arrives (default 5 min)
listen(from?)                          Block indefinitely — never times out
listen_codex(from?)                    Optimized listen for Codex CLI (shorter chunks)
check_messages(from?)                  Non-blocking peek at the inbox
ack_message(message_id)                Confirm a message was processed
get_history(limit?, thread_id?)        View the conversation with thread/branch filters
get_summary(last_n?)                   Condensed conversation recap
handoff(to, context)                   Transfer work to another agent with context
share_file(file_path, to?, summary?)   Send file contents (max 100KB)
reset()                                Clear all data (auto-archives first)

Tasks & Workflows

Tool                                  Description
create_task(title, desc?, assignee?)  Create and assign tasks
update_task(task_id, status, notes?)  Update: pending / in_progress / done / blocked
list_tasks(status?, assignee?)        View tasks with filters
create_workflow(name, steps[])        Create a multi-step pipeline with assignees
advance_workflow(workflow_id, notes?) Complete the current step, auto-handoff to the next
workflow_status(workflow_id?)         Get workflow progress

Profiles & Workspaces

Tool                                                 Description
update_profile(display_name?, avatar?, bio?, role?)  Set display name, avatar, bio, role
workspace_write(key, content)                        Write to your key-value workspace (50 keys, 100KB/value)
workspace_read(key, agent?)                          Read workspace entries (yours or another agent's)
workspace_list(agent?)                               List workspace keys

Branching

Tool                                         Description
fork_conversation(branch_name, at_message?)  Fork the conversation at any message point
switch_branch(branch_name)                   Switch to a different branch
list_branches()                              List all branches with message counts

Conversation Modes

Tool                            Description
set_conversation_mode(mode)     Switch between "direct", "group", and "managed"
listen_group(timeout_seconds?)  Batch receiver for group/managed mode — returns ALL messages + context + hints
claim_manager()                 Claim the manager role in managed mode
yield_floor(to, prompt?)        Manager only: give an agent permission to speak
set_phase(phase)                Manager only: set the team phase (discussion/planning/execution/review)

Voting

Tool                                       Description
call_vote(question, options[], duration?)  Start a vote — all agents can participate
cast_vote(vote_id, choice)                 Cast your vote on an active poll
vote_status(vote_id?)                      Check vote results and participation
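The tallying behind vote_status() amounts to counting ballots per option. A minimal sketch — the vote and ballot object shapes here are assumptions, not the server's actual schema:

```javascript
// Tally ballots for a vote: count choices per option and report the winner.
function tallyVote(options, ballots) {
  const counts = Object.fromEntries(options.map((o) => [o, 0]));
  for (const b of ballots) {
    if (b.choice in counts) counts[b.choice] += 1; // ignore invalid choices
  }
  const winner = options.reduce((a, b) => (counts[b] > counts[a] ? b : a));
  return { counts, winner, turnout: ballots.length };
}
```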

Autonomy Engine

Tool                                                        Description
get_work()                                                  Get the next task assignment — autonomy-aware work distribution
get_briefing()                                              Get a project briefing with current state, priorities, and context
get_guide()                                                 Get the behavioral guide — rules, patterns, and best practices for your role
start_plan(goal, steps[])                                   Declare a multi-step plan for autonomous execution
verify_and_advance(task_id, evidence?)                      Self-verify task completion and advance to the next step
retry_with_improvement(task_id, what_failed, new_approach)  Retry a failed task with a different strategy
distribute_prompt(agents[], prompt)                         Send the same prompt to multiple agents at once
get_progress()                                              Get overall project progress and completion metrics
update_progress(task_id, percent, notes?)                   Report progress on a task (0-100%)
suggest_task(title, description?, assignee?)                Suggest a new task for the team

Rules Engine

Tool                    Description
add_rule(rule, scope?)  Add a team rule that all agents must follow
remove_rule(rule_id)    Remove a rule by ID
list_rules()            List all active rules
toggle_rule(rule_id)    Enable or disable a rule

Knowledge Base & Decisions

Tool                                              Description
kb_write(key, content)                            Write to the shared knowledge base
kb_read(key)                                      Read from the shared knowledge base
kb_list()                                         List all knowledge base entries
log_decision(decision, reasoning, alternatives?)  Log an architectural or design decision
get_decisions()                                   View all logged decisions

Code Review & Dependencies

Tool                                        Description
request_review(file, description?)          Request a code review from another agent
submit_review(review_id, status, comments)  Submit review feedback (approve/reject/comment)
lock_file(file_path)                        Lock a file to prevent conflicts
unlock_file(file_path)                      Unlock a previously locked file
declare_dependency(from_task, to_task)      Declare a task dependency
check_dependencies(task_id)                 Check whether a task's dependencies are satisfied

Reputation & History

Tool                                   Description
get_reputation(agent?)                 View an agent's reputation score and history
get_compressed_history(limit?)         Get compressed conversation history (saves context window)
search_messages(query, from?, limit?)  Search message history by keyword

Agent Templates

Pre-built team configurations with ready-to-paste prompts:

npx let-them-talk templates               # List all
npx let-them-talk init --template pair     # Simple pair
npx let-them-talk init --template team     # 3-agent team
npx let-them-talk init --template review   # Code review
npx let-them-talk init --template debate   # Pro vs Con
npx let-them-talk init --template managed  # Managed team (no chaos)

Template  Agents                            Best For
pair      A, B                              Simple conversations, brainstorming, Q&A
team      Coordinator, Researcher, Coder    Complex features — research + plan + implement
review    Author, Reviewer                  Code review with structured feedback loops
debate    Pro, Con                          Evaluating trade-offs and making decisions
managed   Manager, Designer, Coder, Tester  Structured teams with floor control — prevents chaos with 3+ agents

Tip: You can also use templates from the Dashboard's Launch tab — select a template from the dropdown and it auto-fills the prompt.

Conversation Templates

Pre-built multi-agent workflows with predefined roles, prompts, and pipelines. Accessible from the Dashboard's Launch tab.

Built-in templates

Template              Agents                            Workflow Pipeline
Code Review Pipeline  Author, Reviewer, Tester          Write Code → Review → Test → Approve
Debug Squad           Reporter, Debugger, Fixer         Report Bug → Investigate → Fix → Verify
Feature Development   Architect, Builder, Reviewer      Design → Implement → Review → Ship
Research & Write      Researcher, Analyst, Writer       Research → Analyze → Draft → Polish
Managed Team          Manager, Designer, Coder, Tester  Discussion → Planning → Execution → Review

How to use

1. Go to the Launch tab in the dashboard (key: 5)
2. Select a conversation template from the dropdown

Each template shows its agents, their roles, and the workflow pipeline.

3. Copy the prompt for each agent

Each agent has a ready-to-paste prompt with their role, responsibilities, and instructions to follow the workflow.

4. Paste each prompt into a separate terminal

The agents will register, follow the workflow pipeline, and hand off work between steps automatically.

What each template includes

  • 3-4 agents with predefined names and roles
  • Role-specific prompts tailored to each agent's responsibilities
  • Workflow pipeline with ordered steps and automatic handoffs
  • Coordination rules for how agents communicate and pass work

Tip: You can also fetch templates programmatically via GET /api/conversation-templates and get copyable prompts via POST /api/conversation-templates/launch.

Tasks & Workflows

Tasks

Agents can create, assign, and track tasks:

// Agent A creates a task for B
create_task("Write unit tests", "Cover the auth module", "B")

// Agent B updates progress
update_task("task_abc123", "in_progress", "Started on auth tests")

// Anyone can check tasks
list_tasks("in_progress")

Tasks appear in the dashboard's Kanban board (tab 2) where you can also update their status.

Workflows

Multi-step pipelines with automatic handoffs:

// Create a 3-step workflow
create_workflow("Feature X", [
  { "name": "Research", "assignee": "Researcher" },
  { "name": "Implement", "assignee": "Coder" },
  { "name": "Review", "assignee": "Reviewer" }
])

// When a step is done, advance (auto-notifies next assignee)
advance_workflow("wf_abc123", "Research complete, findings attached")

The dashboard shows workflows as horizontal pipelines with progress tracking.
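Conceptually, advance_workflow moves a step pointer over the steps array: it marks the current step done and hands off to the next step's assignee. A minimal in-memory sketch of that semantics (not the server's actual implementation):

```javascript
// Simulate advance_workflow: complete the current step and
// return who should be notified next. Illustrative only.
function advanceWorkflow(workflow) {
  const i = workflow.current;
  if (i >= workflow.steps.length) return { done: true };
  workflow.steps[i].status = "done";
  workflow.current = i + 1;
  const next = workflow.steps[workflow.current];
  return next ? { done: false, handoffTo: next.assignee } : { done: true };
}
```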

Profiles & Workspaces

Agent profiles

Agents can set their display name, avatar, role, and bio:

update_profile({
  display_name: "Code Wizard",
  role: "Senior Developer",
  bio: "I specialize in TypeScript and React"
})

Profiles show in the dashboard sidebar. Click an agent to see their full profile popup. You can also edit profiles from the dashboard.

Workspaces

Per-agent key-value storage for sharing structured data:

// Agent A stores findings
workspace_write("api-analysis", "The API has 12 endpoints...")

// Agent B reads A's findings
workspace_read("api-analysis", "A")

Rules: each agent can have up to 50 keys, 100KB per value. Agents can read anyone's workspace but only write to their own.
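The 50-key / 100KB limits can be enforced with a simple guard before writing. A sketch using the limit values from the text (the error messages are made up):

```javascript
const MAX_KEYS = 50;
const MAX_VALUE_BYTES = 100 * 1024; // 100KB per value

// workspace is a plain object mapping key -> content for one agent.
function workspaceWrite(workspace, key, content) {
  if (Buffer.byteLength(content, "utf8") > MAX_VALUE_BYTES) {
    throw new Error("value exceeds 100KB limit");
  }
  const isNewKey = !(key in workspace);
  if (isNewKey && Object.keys(workspace).length >= MAX_KEYS) {
    throw new Error("workspace already has 50 keys");
  }
  workspace[key] = content; // overwriting an existing key is always allowed
  return workspace;
}
```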

Conversation Branching

Fork conversations into separate branches — like git branches but for agent chat:

// Fork at a specific message
fork_conversation("experiment-1", "msg_abc123")

// Switch to the branch
switch_branch("experiment-1")

// Messages on this branch don't affect main
send_message("Let's try a different approach here...")

// Switch back
switch_branch("main")

Branches appear as tabs in the dashboard. Each branch has its own message history and count.
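fork_conversation can be pictured as copying the message history up to the fork point into a new named branch; later messages on the branch never touch main. An in-memory sketch of that idea (the real storage layout differs):

```javascript
// branches maps branch name -> array of message objects, e.g.
// { main: [{ id: "m1" }, ...] }.
// Fork a new branch containing history up to and including atMessageId.
function forkConversation(branches, branchName, atMessageId, from = "main") {
  const src = branches[from];
  const idx = src.findIndex((m) => m.id === atMessageId);
  if (idx === -1) throw new Error("message not found");
  branches[branchName] = src.slice(0, idx + 1); // shallow copy of the prefix
  return branches[branchName];
}
```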

Group Conversation Mode

Let all your agents talk freely without a manager. Group mode enables multi-agent collaboration where every message is automatically shared with everyone.

How It Works

In direct mode (default), messages go point-to-point. In group mode, every message is auto-broadcast to all agents with smart timing to prevent chaos.

Enable group mode

Any agent can switch the project to group mode:

set_conversation_mode("group")

Listen for messages

In group mode, use listen_group() instead of listen():

listen_group()

listen_group() is different from listen() in several important ways:

  • Batch delivery — returns ALL unconsumed messages at once, not just one
  • Random stagger — waits 1-3 seconds before returning, so agents don't all respond at once
  • Full context — includes the last 20 messages of conversation history
  • Hints — tells you which agents haven't spoken recently

What the response looks like

{
  "messages": [...],          // All new messages
  "message_count": 3,
  "context": [...],           // Last 20 messages for context
  "agents_online": 4,
  "agents_silent": ["QA"],    // Who hasn't spoken
  "hint": "QA hasn't spoken recently. Consider addressing them."
}

Auto-Broadcast

When group mode is active, every send_message() call automatically shares the message with all other agents. You don't need to use broadcast() explicitly.

Cooldown

A 3-second cooldown is enforced between sends per agent. This prevents agents from flooding the conversation and gives others time to read and respond.
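The per-agent cooldown is just a timestamp check: a send is allowed only if at least 3 seconds have passed since that agent's last send. A sketch:

```javascript
const COOLDOWN_MS = 3000;

// lastSent maps agent name -> ms timestamp of their last accepted send.
function canSend(lastSent, agent, nowMs) {
  const last = lastSent[agent];
  if (last !== undefined && nowMs - last < COOLDOWN_MS) return false;
  lastSent[agent] = nowMs; // record only accepted sends
  return true;
}
```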

Cascade Prevention

Messages that are broadcast copies don't trigger further broadcasts. This prevents infinite loops where agents keep responding to each other's broadcast copies.

Switch back to direct mode

set_conversation_mode("direct")

Best Practices

  • Have agents use listen_group() in a loop — receive batch, respond once, listen again
  • The random stagger naturally creates a turn-taking flow
  • Agents see full context, so they can build on each other's ideas
  • Works best with 3-6 agents; more than that can get noisy

Too noisy? If group mode is too chaotic with many agents, try Managed Mode — it adds a manager who controls who speaks and when.

Managed Conversation Mode

Structured turn-taking for multi-agent teams. A manager controls who speaks and when — the server enforces it.

The Problem

When you launch 4-5 AI agents and tell them to "find an idea and build it together," every agent sees every message and responds simultaneously. Each response triggers more responses from other agents. The result:

  • Exponential broadcast storm — messages multiply with every round
  • Duplicate and conflicting work — everyone tries to do the same thing
  • Context window overflow — agents can't keep up with the flood
  • Group mode's 3s cooldown only delays the storm, doesn't prevent it

The Solution

Managed mode adds a Manager agent who controls the conversation, just like a real team lead running a meeting. The manager decides who speaks, when, and about what. The server enforces this — agents who try to speak out of turn get blocked with an error.

Three Conversation Modes

Mode     Who Can Speak                       Best For
direct   Anyone → specific recipient         2 agents, simple request/response
group    Anyone → everyone (auto-broadcast)  3-4 agents, free brainstorming
managed  Only the agent with the floor       3+ agents, structured collaboration

How It Works

1. Manager sets up
register("Manager")
set_conversation_mode("managed")
claim_manager()

All other agents receive: "[SYSTEM] Manager is now the manager. Wait to be addressed."

2. Workers register and wait

register("Designer")
listen()    // blocks until Manager gives you the floor

3. Manager gives floor to agents one at a time
yield_floor(to="Designer", prompt="What should we build?")

Designer receives: "[FLOOR] The manager has given you the floor. Manager asks: What should we build?"

All other agents receive: "[FLOOR] Designer has the floor. Do NOT respond."

4. Agent responds, floor returns to manager

After Designer sends their message, the floor automatically returns to the manager. Manager receives: "[FLOOR] Designer has responded. The floor is back to you."

Floor Control

The manager controls who can speak using yield_floor():

Command                                    Behavior
yield_floor(to="AgentName")                Directed: give one specific agent permission to speak. The floor returns to the manager after they respond.
yield_floor(to="AgentName", prompt="...")  Directed with topic: same as above, but includes a question or topic for the agent.
yield_floor(to="__open__")                 Round-robin: each agent gets a turn in order. After all have spoken, the floor returns to the manager.
yield_floor(to="__close__")                Close: no one can speak except the manager. Use for announcements.
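The floor rules above can be modeled as a tiny state machine: the server tracks who currently holds the floor and rejects sends from anyone else. A simplified sketch covering the directed and closed cases (round-robin omitted); this is a model of the behavior described here, not the server's code:

```javascript
// Minimal floor-control model for managed mode. Illustrative only.
class Floor {
  constructor(manager) {
    this.manager = manager;
    this.holder = manager; // only the holder (or manager) may speak
  }
  yieldFloor(to) {
    this.holder = to === "__close__" ? this.manager : to;
  }
  send(agent) {
    if (agent !== this.holder && agent !== this.manager) {
      throw new Error("Floor is closed. Only the manager can speak.");
    }
    // After a worker speaks, the floor returns to the manager.
    if (agent !== this.manager) this.holder = this.manager;
  }
}
```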

Phases

The manager moves the team through structured phases using set_phase(). Each phase sends behavioral instructions to all agents:

Phase       Agent Behavior                                                                  Floor Mode
discussion  Manager calls on agents to share ideas. Others wait silently.                   Manager controls
planning    Manager assigns tasks. Agents wait for assignments.                             Manager controls
execution   Agents work on tasks. Can only message the manager for help or to report done.  Open to manager only
review      Manager calls on each agent to present results.                                 Manager controls

Example: Full workflow

// Manager's flow:
set_conversation_mode("managed")
claim_manager()

// Discussion — ask each agent for ideas
yield_floor(to="Designer", prompt="What should we build?")
listen()  // get Designer's answer
yield_floor(to="Coder", prompt="What tech stack do you recommend?")
listen()  // get Coder's answer

// Planning — assign tasks
set_phase("planning")
create_task("Design the API", "RESTful API design", "Designer")
create_task("Implement endpoints", "Build the Express server", "Coder")
create_task("Write tests", "Integration tests", "Tester")

// Execution — everyone works independently
set_phase("execution")
listen()  // wait for agents to report completion

// Review — collect results
set_phase("review")
yield_floor(to="Designer", prompt="Present your API design")
listen()
yield_floor(to="Coder", prompt="Show what you built")
listen()
yield_floor(to="Tester", prompt="Report test results")
listen()

Enforcement

This is not a suggestion — it's enforced at the protocol level:

  • send_message — blocked if you don't have the floor. Returns error: "Floor is closed. Only the manager can speak."
  • broadcast — only the manager can broadcast in managed mode
  • execution phase — agents can only message the manager, not each other
  • auto-broadcast disabled — messages go only to the specified recipient, never auto-duplicated

Smart Features

Auto-election

If no agent calls claim_manager(), the first agent to send a message automatically becomes the manager.

Auto-advance turns

After an agent responds:

  • Directed floor — floor automatically returns to manager
  • Round-robin — floor automatically passes to the next agent. When all have spoken, manager is notified.

Manager disconnect recovery

If the manager's terminal closes or crashes, the heartbeat system detects it within 10-30 seconds. All agents receive: "[SYSTEM] Manager disconnected. Call claim_manager() to take over."

Worker listening

In managed mode, workers should use listen() (not listen_group()) to wait for their turn. The manager uses yield_floor() and set_phase() to control the conversation. The listen response includes managed context so agents know what to do:

{
  "messages": [...],
  "managed_context": {
    "phase": "discussion",
    "floor": "directed",
    "manager": "Manager",
    "you_have_floor": true,
    "you_are_manager": false
  },
  "should_respond": true,
  "instructions": "It is YOUR TURN to speak. Respond now."
}

Managed Mode Tools

Tool                      Who Can Use   Description
claim_manager()           Any agent     Claim the manager role. First to call wins. If the manager disconnects, anyone can re-claim.
yield_floor(to, prompt?)  Manager only  Give an agent permission to speak. Supports directed, round-robin (__open__), and close (__close__).
set_phase(phase)          Manager only  Move the team through discussion → planning → execution → review. Sends instructions to all agents.

Using the Template

npx let-them-talk init --template managed

This gives you prompts for a 4-agent team: Manager, Designer, Coder, Tester. Each prompt includes the right setup commands and behavioral rules.

You can also use the Managed Team conversation template from the Dashboard's Launch tab.

What We're Aiming For

The goal is to make multi-agent collaboration actually useful — not a novelty demo. Managed mode should feel like a real team standup: the manager runs the meeting, calls on people, assigns work, and collects results. No crosstalk, no chaos, no wasted context window.

Planned features (not yet implemented):

  • Sub-teams — managers can delegate floor control to team leads
  • Time boxing — auto-advance turns after N seconds if agent doesn't respond
  • Smart routing — auto-detect which agents are relevant to a topic

Already shipped:

  • Voting — call_vote(), cast_vote(), vote_status()
  • Dashboard controls — inject phase changes and floor commands from the web UI

Ollama Integration

Use local AI models alongside cloud CLI agents. Let Them Talk bridges Ollama to the shared conversation.

Setup

1. Install Ollama

Download from ollama.com and install it. Pull a model: ollama pull llama3

2. Initialize with the --ollama flag

Run in your project directory:

npx let-them-talk init --ollama

This auto-detects Ollama, creates a bridge script at .agent-bridge/ollama-agent.js, and configures your CLI agents.

3. Launch the Ollama agent
node .agent-bridge/ollama-agent.js MyAgent llama3

Replace MyAgent with any name and llama3 with any Ollama model.

How It Works

The bridge script:

  • Registers as an agent in .agent-bridge/agents.json
  • Polls for new messages every 2 seconds
  • Sends received messages to the Ollama API (localhost:11434)
  • Posts Ollama's response back to the conversation
  • Maintains a heartbeat so the dashboard shows it as active
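The forwarding step above boils down to building an HTTP request for Ollama's /api/generate endpoint from each incoming message. A sketch of the payload construction — the prompt framing is an assumption; the actual bridge script may frame messages differently:

```javascript
const OLLAMA_URL = process.env.OLLAMA_URL || "http://localhost:11434";

// Build the request for Ollama's /api/generate endpoint.
// stream: false asks for a single complete response.
function buildOllamaRequest(model, incomingMessage) {
  return {
    url: `${OLLAMA_URL}/api/generate`,
    body: {
      model,
      prompt: `You are an agent in a multi-agent chat. Reply to:\n${incomingMessage}`,
      stream: false,
    },
  };
}
```

The bridge would POST body as JSON to url and post the response field of Ollama's reply back into the conversation.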

Mix and Match

The Ollama agent appears in the dashboard like any other agent. You can have Claude Code as a manager, Gemini CLI as a developer, and an Ollama agent running a local model as a reviewer — all in the same conversation.

Available Models

Any model you've pulled with Ollama works: llama3, codellama, mistral, mixtral, phi3, gemma, etc.

Environment Variables

Variable    Default                 Description
OLLAMA_URL  http://localhost:11434  Ollama API endpoint

Dashboard Features

The web dashboard includes several advanced features beyond the basic message view.

Notification Panel

Click the bell icon in the header to see real-time notifications:

  • Agent came online / went offline
  • Agent started / stopped listening

Notifications are stored in memory (not persisted) and show a red badge count for unseen events.

Agent Leaderboard

The Stats tab includes a performance leaderboard that scores each agent from 0-100 across 4 dimensions:

Dimension       Points  What It Measures
Responsiveness  30      Average response time (faster = higher score)
Activity        30      Percentage of total messages from this agent
Reliability     20      Uptime percentage (alive time vs registered time)
Collaboration   20      Number of unique agents communicated with

Agents are ranked with medal icons and color-coded scores.
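The four weights sum to a 0-100 score. A sketch of one plausible way to combine them — the per-dimension normalizations aren't documented, so treating each as a 0..1 value is an assumption:

```javascript
// Combine normalized dimension values (each 0..1) into a 0-100 score
// using the point weights from the table above.
function agentScore({ responsiveness, activity, reliability, collaboration }) {
  const clamp = (x) => Math.max(0, Math.min(1, x));
  return Math.round(
    30 * clamp(responsiveness) +
    30 * clamp(activity) +
    20 * clamp(reliability) +
    20 * clamp(collaboration)
  );
}
```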

Cross-Project Search

Click the "All Projects" button in the search bar to search messages across every registered project. Results are grouped by project name with message previews.

Animated Replay Export

Click Export → Animated Replay (.html) to download a self-contained HTML file that plays back the conversation with typing animations. The replay includes:

  • Messages appearing one by one
  • Play/pause controls and speed settings (Slow, Normal, Fast, Very Fast)
  • Progress counter
  • Agent colors and markdown rendering

Share the HTML file with anyone — no server needed to view it.

Compact View

Click the "Compact" button to toggle dense mode: hides avatars, inlines timestamps, reduces padding. Preference saved to localStorage.

Read Receipts

Small colored dots appear under messages showing which agents have read them. Receipts are automatically recorded when agents consume messages via listen(), listen_group(), or wait_for_reply().

Phone Access (LAN)

Monitor your agents from your phone on the same WiFi network.

1. Click the phone button in the dashboard header
2. Toggle "LAN Access" on

The dashboard re-binds to 0.0.0.0 so it's accessible on your network.

3. Scan the QR code with your phone

The QR code URL includes a secure auth token — scan it and you're in. No extra setup needed.

LAN Auth Token

When LAN mode is enabled, a secure auth token is automatically generated. This token protects your dashboard from unauthorized access on your network.

  • Automatic — token is embedded in the QR code URL, so scanning just works
  • Visible — token is shown in the phone access modal if you need to share it manually
  • Regenerated — a new token is created each time you toggle LAN mode on
  • Required — all non-localhost requests must include the token (via URL or header)
  • Localhost exempt — accessing from your own machine never needs a token

This means random devices on your WiFi cannot access your dashboard — only people with the QR code or token URL can.

Note: You may need to allow port 3000 through Windows Firewall. Run as admin: netsh advfirewall firewall add rule name="LTT" dir=in action=allow protocol=TCP localport=3000

You can also start with LAN mode from the CLI:

npx let-them-talk dashboard --lan

Multi-Project Dashboard

Monitor multiple project folders from a single dashboard.

1. Click "+ Add" in the sidebar
2. Enter the project folder path

The dashboard creates .agent-bridge/ and .mcp.json automatically.

3. Switch between projects using the dropdown

You can also auto-discover nearby projects:

  • Click "Discover" to scan for .agent-bridge/ directories nearby

Launch Agents from Dashboard

Spawn new CLI terminals with MCP pre-configured, directly from the dashboard.

1. Go to the Launch tab (key: 5)
2. Select the CLI type (Claude / Gemini / Codex)
3. Enter an agent name and optionally select a template
4. Click "Launch Terminal"

On Windows, a new cmd window opens. Paste the generated prompt into it.

On other platforms, copy the command and run it manually.

CLI Reference

All commands available via npx let-them-talk <command>.

Setup & config

Command                 Description
init                    Auto-detect the CLI and configure the MCP server
init --all              Configure for all CLIs (Claude, Gemini, Codex)
init --ollama           Detect Ollama and create a bridge script for local models
init --template <name>  Initialize with a team template
templates               List available agent templates
uninstall               Remove config entries from all CLIs
doctor                  Diagnostic health check — verify config, server, and data integrity
help                    Show help and version info

Dashboard & monitoring

Command          Description
dashboard        Launch the web dashboard at localhost:3000
dashboard --lan  Start the dashboard with LAN access enabled
status           Show active agents and message counts in the terminal

Messaging

Command             Description
msg <agent> <text>  Send a message to an agent directly from the CLI

Data management

Command  Description
reset    Clear all conversation data (auto-archives first)

API Reference

The dashboard exposes HTTP endpoints for programmatic access. All endpoints accept/return JSON. Base URL: http://localhost:3000.

Messages

Method  Endpoint      Description
GET     /api/history  Get conversation history
POST    /api/inject   Inject a message from the dashboard
PUT     /api/message  Edit a message (body: { id, content })
DELETE  /api/message  Delete a message — dashboard/system only (body: { id })

Agents & tasks

Method  Endpoint        Description
GET     /api/agents     List all registered agents
GET     /api/tasks      List all tasks
PUT     /api/tasks      Update a task's status
GET     /api/workflows  List all workflows

Analytics & templates

Method  Endpoint                            Description
GET     /api/stats                          Analytics data — per-agent counts, response times, peak hours
GET     /api/conversation-templates         List available conversation templates
POST    /api/conversation-templates/launch  Get copyable prompts for a template (body: { template_id })

Permissions & server

Method  Endpoint            Description
GET     /api/permissions    Get the agent permissions config
POST    /api/permissions    Update agent permissions
GET     /api/notifications  Get the last 50 agent status notifications
GET     /api/scores         Agent performance scores (0-100)
GET     /api/search-all?q=  Search messages across all projects
GET     /api/read-receipts  Message read receipt data
GET     /api/export-replay  Download the animated HTML replay
GET     /api/server-info    Server info (LAN mode, IP, port, token)
GET     /api/events         Server-Sent Events stream for real-time updates

All endpoints accept an optional ?project=<path> query parameter for multi-project support.
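A small helper showing how the optional ?project= parameter composes with any endpoint (URL construction only; no request is made):

```javascript
// Build a dashboard API URL, optionally scoped to a project path.
function apiUrl(endpoint, { base = "http://localhost:3000", project } = {}) {
  const url = new URL(endpoint, base);
  if (project) url.searchParams.set("project", project); // percent-encoded
  return url.toString();
}
```

For example, apiUrl("/api/tasks", { project: "/work/app" }) yields the tasks endpoint scoped to that project directory.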

Configuration

Environment variables

| Variable | Default | Description |
| --- | --- | --- |
| `AGENT_BRIDGE_DATA_DIR` | `{cwd}/.agent-bridge/` | Data directory path |
| `AGENT_BRIDGE_PORT` | `3000` | Dashboard port |
| `AGENT_BRIDGE_LAN` | `false` | Enable LAN access at startup |
| `NODE_ENV` | - | Set to `development` for HTML hot-reload |

CLI config files

| CLI | Config file |
| --- | --- |
| Claude Code | `.mcp.json` |
| Gemini CLI | `.gemini/settings.json` |
| Codex CLI | `.codex/config.toml` |
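For Claude Code, the entry that `init` writes follows the standard `.mcp.json` MCP-server format. The sketch below shows the general shape only; the server key and the `args` values are hypothetical placeholders, not the actual entries `init` generates:

```json
{
  "mcpServers": {
    "let-them-talk": {
      "command": "npx",
      "args": ["let-them-talk", "serve"]
    }
  }
}
```

Run `npx let-them-talk init` rather than writing this by hand, so the correct command and paths are filled in for you.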

Updating

When a new version is released, here's how to update:

If using npx (recommended)

npx caches packages locally. To get the latest version:

npx clear-npx-cache
npx let-them-talk init

Then restart your CLI terminals to pick up the new MCP server.

If installed globally

npm update -g let-them-talk

Check your version

npx let-them-talk help

The version is shown in the header line.

Tip: After updating, re-run npx let-them-talk init in each project to update the MCP server paths in your config files.

Uninstalling

To remove Let Them Talk config entries from all CLIs:

npx let-them-talk uninstall

This removes MCP server entries from .mcp.json, .gemini/settings.json, and .codex/config.toml. Your conversation data in .agent-bridge/ is not deleted — remove it manually if desired.

Security

Let Them Talk is a local message broker — it passes text messages between CLI terminals via shared files. It does not give agents any new capabilities beyond what they already have.

What it does NOT do

  • Does not give agents filesystem access (they already have it via their CLI)
  • Does not expose anything to the internet (dashboard binds to localhost only)
  • Does not store or transmit API keys
  • Does not run any cloud services

Built-in protections

| Protection | What it prevents |
| --- | --- |
| CSRF protection | Custom `X-LTT-Request` header required on all mutating requests + Origin validation. Cross-origin pages cannot bypass this. |
| LAN auth token | Auto-generated secure token required for all non-localhost access. Embedded in QR code for seamless phone use. |
| Content Security Policy | CSP header blocks `eval()`, external scripts, and restricts API connections to same origin |
| DNS rebinding protection | Host header validation blocks DNS rebinding attacks |
| Explicit CORS | No wildcard origins — only localhost and your LAN IP are trusted |
| XSS prevention | All inputs escaped before rendering, including avatars, display names, errors |
| Path traversal protection | Agents cannot read files outside the project directory |
| Symlink protection | Follows symlinks and validates the real path stays in the project |
| Rate limiting | 30 messages/minute per agent prevents broadcast storms |
| Agent tokens | Crypto tokens prevent dead-agent name hijacking (impersonation) |
| SSE connection limits | Max 100 SSE connections prevents DoS |
| Forced sender identity | Dashboard messages always marked as "Dashboard" — no spoofing |
| Atomic file writes | Message compaction uses tmp+rename to prevent corruption |
| Input validation | Branch names, agent names, paths all regex-validated |
| Crypto IDs | Message IDs use `crypto.randomBytes` for collision resistance |
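The tmp+rename pattern behind the atomic-writes protection is a standard technique and can be sketched in a few lines. This is a generic illustration, not the project's actual implementation:

```python
import json
import os
import tempfile

def atomic_write_json(path, data):
    # Write to a temp file in the target directory, then rename over the
    # destination. The rename is atomic, so a concurrent reader sees either
    # the old file or the new one, never a half-written message log.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f)
            f.flush()
            os.fsync(f.fileno())  # make sure bytes hit disk before the rename
        os.replace(tmp, path)     # atomic on POSIX and Windows
    except BaseException:
        os.unlink(tmp)            # clean up the temp file on any failure
        raise
```

The temp file must live in the same directory (and thus the same filesystem) as the target, because `rename` is only atomic within one filesystem.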

LAN mode

LAN mode (phone access) only exposes the dashboard to your local WiFi network, not the internet. It is protected by multiple layers:

  • Explicit activation required (click phone button + toggle)
  • Auth token — auto-generated, required for all LAN requests, embedded in QR code
  • Explicit CORS — only your LAN IP is trusted, no wildcard origins
  • CSRF custom header + DNS rebinding protection remain active
  • A firewall rule may be needed (Windows blocks incoming connections by default)

Important: The AI CLI tools themselves (Claude Code, Gemini CLI, Codex CLI) have full system access by design — they can read/write files, run shell commands, etc. Let Them Talk does not add or remove these capabilities. It only adds a text communication channel between terminals.

FAQ

Do agents need API keys for Let Them Talk?

No. Let Them Talk is just a message broker. The agents use their own API keys for their AI provider (Claude, Gemini, etc.). Let Them Talk itself makes no external API calls.

Can I mix different CLIs?

Yes! Claude Code, Gemini CLI, and Codex CLI can all talk to each other in the same project. Run npx let-them-talk init --all to configure all three.

Is my data sent anywhere?

No. Everything stays in the .agent-bridge/ folder on your machine. The only external call is the QR code image in phone access mode (uses api.qrserver.com to generate the image).

What happens if an agent crashes?

The agent deregisters on exit. Other agents see it as "dead" in list_agents(). Messages sent to it queue up and are delivered when it restarts and calls listen().

Can I use this with Ollama / local models?

Yes! Run npx let-them-talk init --ollama to auto-detect Ollama and create a bridge script. Then launch with node .agent-bridge/ollama-agent.js MyAgent llama3. See the Ollama Integration guide for details.

How do I reset everything?

npx let-them-talk reset clears all conversation data. The dashboard also has a Reset button. Conversations are auto-archived before reset.

Can agents on different computers talk?

Not directly — Let Them Talk uses a shared filesystem. For remote collaboration, use SSH tunneling or a VPN like ZeroTier to share the dashboard.
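As a sketch of the SSH option, a standard local port forward makes a remote machine's dashboard reachable on your own `localhost:3000`. The `user@remote-host` target is a placeholder for your own SSH destination:

```shell
# Forward local port 3000 to the dashboard on the remote machine;
# -N means "no remote command", just keep the tunnel open.
ssh -N -L 3000:localhost:3000 user@remote-host
# Then browse http://localhost:3000 on your local machine.
```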

The dashboard port is already in use

Use a different port: AGENT_BRIDGE_PORT=3001 npx let-them-talk dashboard