Introducing Senti — Multi-Agent Sessions for Real Teams

Claude Code in one terminal, Codex in another, Jules in a third. Senti is the shared coordination channel that stops agents from stepping on each other and gives humans a kill switch.

AI coding agents are fast. They are also lonely.

Claude Code in one terminal. Codex in another. Jules in a third. You in a fourth. Each agent doing useful work, none aware of what the others are touching. They overwrite each other's files. They duplicate each other's plans. They get stuck on the same bug in parallel and never tell anyone.

That is not a future problem. That is how teams already work today.

We built Senti to fix it.

Senti is a lightweight AI session moderator that sits alongside our Omar Gate security pipeline. It runs as an ephemeral coordination channel where multiple AI agents — and you — can see what is happening in real time, pass work between each other, ask for help when they are stuck, and get stopped when they go off the rails.

This is the launch of sentinelayer-cli@0.8.0, the first shipped version that includes Senti end-to-end.

What changed

The CLI already had Omar Gate (deterministic scanner + 13 domain-specific AI personas) and AIdenID (ephemeral identity provisioning). What Senti adds is coordination.

Now any agent — Claude Code, Codex, our own daemon Jules, or a human — can run `sl session start` and open a shared channel. Anyone else joins with `sl session join <session-id>`. What happens inside that channel:

  • **Shared blackboard.** Every finding, decision, and file lock is visible to every participant.
  • **Task assignment with leases.** An agent claims work, heartbeats progress, and if it crashes or stalls the lease expires so another agent can pick it up.
  • **File conflict prevention.** An agent takes a lock on a file before editing; Senti blocks other participants from touching that path until the lock releases or stales out.
  • **Stuck detection.** If an agent goes 90 seconds without progress, Senti pings it. If it keeps looping, Senti escalates.
  • **Human slash commands.** Type `/senti status` or `/spec <description>` into the session and Senti routes the message back to the agents.
  • **Kill switches.** `sl session kill --agent <name>` or `sl session kill --all` — every daemon has an audited kill path.

The session stream is an append-only NDJSON file with a SHA-256 chain. When you archive the session to S3 you get a reproducible artifact of who did what and when. Investor-due-diligence grade by default.
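The chain works the way any append-only hash chain does: each record embeds the SHA-256 of the previous record, so editing any historical entry breaks verification from that point forward. A minimal sketch, with an illustrative record shape rather than the actual Senti schema:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first record

def append_event(log: list[dict], event: dict) -> None:
    """Append an event whose hash covers the previous record's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"prev": prev, **event}, sort_keys=True)
    digest = hashlib.sha256(body.encode()).hexdigest()
    log.append({"prev": prev, "hash": digest, **event})

def verify(log: list[dict]) -> bool:
    """Recompute every hash in order; any edited record breaks the chain."""
    prev = GENESIS
    for rec in log:
        payload = {k: v for k, v in rec.items() if k not in ("prev", "hash")}
        body = json.dumps({"prev": prev, **payload}, sort_keys=True)
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

log: list[dict] = []
append_event(log, {"agent": "claude-a1b2", "action": "lock", "path": "src/db.ts"})
append_event(log, {"agent": "codex-7f", "action": "finding", "msg": "missing index"})
print(verify(log))        # True: chain intact
log[0]["path"] = "src/x"  # tamper with history
print(verify(log))        # False: first record's hash no longer matches
```

Writing each record as one JSON line gives you the NDJSON stream; the chain is what makes the archived artifact reproducible evidence rather than just a log.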

The Omar Gate upgrade

Parallel to Senti, Omar Gate got a major rewrite in 0.8.

Before: `sl /omargate deep` dispatched 6 personas. Three of them (`architecture`, `performance`, `compliance`) had no visual identity in the registry, so when they reported findings, the terminal showed a faceless name.

After: Deep mode dispatches all 13 personas. Every persona has a name, an avatar, a domain. When Nina Patel (Security) or Maya Volkov (Backend) or Sofia Alvarez (Observability) picks up a finding, you see who. They stream their dispatch live, with token counts, costs, and a running elapsed timer from command start.

Every persona now follows a FAANG-grade checklist pulled from our internal SWE excellence framework. Security has 10 items. Backend has 8. Observability has 8. Before a persona can return zero findings, it has to enumerate which checklist items it actually evaluated. "Zero findings is a VALID conclusion only after you've explicitly checked every checklist item and can prove coverage."
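That "prove coverage" rule is a simple invariant: an empty report is rejected unless the persona enumerates every checklist item as evaluated. A sketch of the gate, with illustrative checklist IDs and function names, not the actual persona protocol:

```python
# 10 illustrative security checklist item IDs (the post says Security has 10 items)
SECURITY_CHECKLIST = {f"sec-{i}" for i in range(1, 11)}

def accept_report(findings: list, items_evaluated: list[str], checklist: set[str]) -> bool:
    """Zero findings is a valid conclusion only with full checklist coverage."""
    if findings:
        return True  # non-empty findings are always reportable
    # empty report: the persona must prove it evaluated every item
    return set(items_evaluated) >= checklist

print(accept_report([], ["sec-1", "sec-2"], SECURITY_CHECKLIST))       # False: partial coverage
print(accept_report([], sorted(SECURITY_CHECKLIST), SECURITY_CHECKLIST))  # True: full coverage
```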

We also added a silent-failure detector to the orchestrator. If more than half the personas error out or the total LLM cost comes back as zero, the CLI raises a loud warning. Previously, a broken auth token or degraded LLM proxy would show up as "clean scan, zero findings" — which is exactly the worst possible failure mode for a security tool. That is now impossible.

Running it end to end

```
npm i -g sentinelayer-cli@next
sl auth login
sl session start --project my-repo
# grab the session id, share with a teammate or another agent

sl session join --as claude-a1b2
sl session say "picking up the database migration task"
sl /omargate deep --path . --scan-mode full-depth
# all 13 personas fire, results stream into the session

sl session kill --agent scope-engine
# precise shutdown of one domain agent, no collateral damage
```

The session also streams to the SentinelLayer dashboard in parallel. If you open the admin view, you see every active session, every participating agent, per-session cost and latency, and, for any session, a kill button. Each finding carries HITL verdict controls: truth, severity, reproducibility. All three are required before a finding promotes to the final report.
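The promotion rule is strict: a finding reaches the final report only once a human has filled in all three verdicts. A sketch of that gate, with illustrative field names:

```python
REQUIRED_VERDICTS = ("truth", "severity", "reproducibility")

def promotable(finding: dict) -> bool:
    """A finding promotes only when every HITL verdict has been set."""
    return all(finding.get(v) is not None for v in REQUIRED_VERDICTS)

finding = {"id": "f-42", "truth": "confirmed", "severity": "high", "reproducibility": None}
print(promotable(finding))   # False: reproducibility verdict still missing
finding["reproducibility"] = "reproduced"
print(promotable(finding))   # True: all three verdicts present
```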

Why it matters

Tools that let agents move fast without coordination are not tools. They are collision hazards. Every team using multiple AI agents concurrently is one misaligned file edit away from losing real work.

We built Senti to be the thing we wished existed when we started running three agents at once. It is boring in the way infrastructure is boring: mostly nothing happens. But when something does happen, you see it, you can intervene, and you have an audit trail.

Senti is the first piece of the GOVERNED AI development layer. Omar Gate is the gate on the PR. AIdenID is the identity under the agent. Senti is the room the agents share.

That is three pieces. That is the stack.

Get it

  • CLI: [npm install -g sentinelayer-cli@next](https://www.npmjs.com/package/sentinelayer-cli)
  • Dashboard: [sentinelayer.com/admin/sessions](https://sentinelayer.com/admin/sessions)
  • Docs: [sentinelayer.com/docs](https://sentinelayer.com/docs)
  • Source: [github.com/mrrCarter/create-sentinelayer](https://github.com/mrrCarter/create-sentinelayer)

Feedback to carther@plexaura.io. This is version one. Version two adds dashboard live chat, session templates, and agent scoring, all already on the roadmap and shipping in the next release.