# How SentinelLayer Fits Together — Omar Gate, AIdenID, Senti
Three products, one CLI. A plain-language map of how Omar Gate (the gate), AIdenID (the identity), and Senti (the room) combine into the governance layer for multi-agent AI development.
This post is for the people who ask: "what is SentinelLayer, exactly?" We have shipped three distinct pieces of software over the last twelve months — Omar Gate, AIdenID, and Senti — and it is not always clear how they fit together.
Here is the whole thing in one picture.
## The stack
```
┌───────────────────────────────────────┐
│            Senti Sessions             │  ← The room agents share
│     (coordination, leases, kill)      │
└───────┬───────────────────────┬───────┘
        │                       │
┌───────┴────────────┐  ┌───────┴────────────┐
│     Omar Gate      │  │      AIdenID       │  ← The gate on the PR
│  (deterministic +  │  │    (ephemeral      │  ← The identity the agent uses
│   13 AI personas)  │  │    identities)     │
└───────┬────────────┘  └───────┬────────────┘
        │                       │
┌───────┴───────────────────────┴───────┐
│         sentinelayer-cli (sl)         │  ← The operator surface
└───────────────────┬───────────────────┘
                    │
┌───────────────────┴───────────────────┐
│         api.sentinelayer.com          │  ← Storage, dashboards, policy
│       + sentinelayer.com (web)        │
└───────────────────────────────────────┘
```
The stack exists because no individual piece is enough.
## Omar Gate — the gate on the PR
Omar Gate runs two things back-to-back:
- **22 deterministic rules** (SL-SEC-001 through SL-SEC-022). Credential scanners, package audit, SBOM checks, regex-based vulnerability patterns. Fast, cheap, boring. Catches the 80%.
- **13 AI personas.** Nina Patel (Security), Maya Volkov (Backend), Sofia Alvarez (Observability), Priya Singh (Testing), Arjun Mehta (Performance), Leila Farouk (Compliance), and seven more — each with a domain-specific FAANG-grade checklist. Runs in parallel, max 4 concurrent, individual budget per persona. Catches the 20% the deterministic rules cannot see.
What comes out the other end is a reconciled report: deduped, verdicted, SARIF-exportable. And it is loud about silent failures: if 0 personas produce findings or the total LLM cost is zero, you get a warning, not a thumbs-up.
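Under the hood, a deterministic rule is little more than a compiled pattern plus a finding shape. A minimal sketch in Python, assuming a simplified SARIF-like finding dict — the patterns and rule names here are illustrative, not the real SL-SEC rule set:

```python
import re

# Illustrative patterns only -- not the actual SL-SEC rule set.
CREDENTIAL_PATTERNS = {
    "aws-access-key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic-api-key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"),
}

def scan_line(path: str, lineno: int, line: str) -> list[dict]:
    """Return one finding per matching pattern, in a SARIF-friendly shape."""
    return [
        {"ruleId": rule_id, "level": "error",
         "location": {"file": path, "line": lineno}}
        for rule_id, pattern in CREDENTIAL_PATTERNS.items()
        if pattern.search(line)
    ]

findings = scan_line("config.py", 3, 'SECRET = "AKIAABCDEFGHIJKLMNOP"')
```

The deterministic pass stays fast and cheap precisely because it is this boring: precompiled patterns, no model calls.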
The naming matters. Omar Gate is the gate. You do not merge if Omar fails. It is a binary contract.
## AIdenID — the identity the agent uses
The problem AIdenID solves: your AI agent needs to test a signup flow. It needs an email address. You cannot give it a real email (privacy risk, inbox pollution). You cannot give it a fake string (validation fails). You need an ephemeral, disposable, OTP-extracting email that has all the properties of a real inbox but exits cleanly.
AIdenID provisions exactly that. TTL-bounded emails, regex-first OTP extraction with LLM fallback, child identity hierarchies for role-based testing, squash with tombstone record for compliance. Built for agents.
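Regex-first OTP extraction with an LLM fallback can be sketched in a few lines. The patterns and the `llm_fallback` hook are assumptions for illustration, not AIdenID's actual implementation:

```python
import re

# Common OTP shapes; illustrative, not AIdenID's real pattern list.
OTP_PATTERNS = [
    re.compile(r"(?:code|OTP|PIN)\D{0,10}(\d{4,8})", re.IGNORECASE),
    re.compile(r"\b(\d{6})\b"),  # bare six-digit code as a last regex resort
]

def extract_otp(body: str, llm_fallback=None):
    """Try cheap regexes first; only call the (expensive) LLM if none match."""
    for pattern in OTP_PATTERNS:
        match = pattern.search(body)
        if match:
            return match.group(1)
    return llm_fallback(body) if llm_fallback else None

otp = extract_otp("Your verification code is 481936. It expires in 10 minutes.")
```

The ordering is the point: most verification emails never touch the model, so the common path costs nothing per extraction.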
Example: Jules Tanaka runs a frontend audit against https://app.example.com. Jules needs to sign up, verify email, log in, and inspect authenticated pages. Without AIdenID, this is impossible to automate. With AIdenID, it is four CLI calls.
The naming matters here too. AIdenID is the identity. You do not test authenticated flows without it. It is a substrate.
## Senti — the room the agents share
The third problem is the one we only saw once we had agents.
You have Claude Code in one terminal, Codex in another, Jules in a third, and your own thinking in a fourth. Each is doing useful work. None is aware of what the others are touching. They overwrite each other's files. They duplicate each other's plans. They get stuck on the same bug in parallel and never tell anyone.
Senti is the coordination channel. An append-only NDJSON stream with a SHA-256 chain that every agent — and you — can read and write to. Inside it:
- **Task leases.** An agent claims work, heartbeats progress, releases on done. If it crashes, the lease expires and another agent picks up.
- **File locks.** An agent locks a path before editing. Others cannot touch it until the lock releases.
- **Blackboard.** Every finding, decision, and intent visible to every participant.
- **Stuck detection.** 90s of no progress triggers a Senti ping. Persistent loops escalate.
- **Kill switches.** `sl session kill --agent <id>` revokes leases and posts `agent_killed`. Every daemon has one.
- **Human slash commands.** Type `/senti status` or `/spec <description>` into the session; Senti routes the message.
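The hash chain behind the stream is a standard technique: each event's digest covers the previous event's digest, so any edit to history invalidates everything after it. A minimal sketch — the event fields are illustrative, not Senti's real schema:

```python
import hashlib
import json

GENESIS = "0" * 64

def append_event(timeline: list, event: dict) -> None:
    """Append an event whose hash covers the previous entry's hash."""
    prev = timeline[-1]["hash"] if timeline else GENESIS
    body = {"prev": prev, **event}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    timeline.append({**body, "hash": digest})

def verify(timeline: list) -> bool:
    """Recompute every digest; any tampering with history breaks the chain."""
    prev = GENESIS
    for entry in timeline:
        body = {k: v for k, v in entry.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if body.get("prev") != prev or digest != entry["hash"]:
            return False
        prev = digest
    return True

timeline: list = []
append_event(timeline, {"type": "lease_claimed", "agent": "claude-a1b2"})
append_event(timeline, {"type": "file_locked", "path": "src/auth/"})
```

Written as NDJSON, one entry per line, this is exactly the kind of append-only stream every participant can read, write, and independently verify.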
When the session closes, it archives to S3. You get a reproducible artifact: `timeline.ndjson` + `analytics.json` + `artifact-chain.json` + SHA-256 checksums. Investor-grade by default.
The naming here: Senti is the room. You do not run more than one agent on the same repo without it. It is infrastructure.
## Why these three and not something else
We have deliberately refused to build:
- **A competitor to Claude Code, Codex, or Jules.** We are not in the agent business. We are in the governance-of-agents business. This matters: our customers can use any agent they want.
- **A better IDE.** The agents already live in terminals, IDEs, and CI. Our job is not to replace the editor but to add the guard rails.
- **A full-fledged PR tool.** GitHub already exists. We hook into the PR lifecycle (Omar Gate is a required check, Senti sessions are artifacts) but we do not replace it.
- **A new LLM provider.** We route through OpenAI, Anthropic, Google, and our own proxy. We are model-agnostic.
What we are left with is three things that are each necessary, and that no agent provider will build for you: their incentive is to keep you inside their product, not to make your governance tractable across many products.
## The CLI is the operator surface
All three products meet in `sentinelayer-cli` (command alias: `sl`). Same auth, same config, same telemetry, same keyring storage.
A typical operator day:
```bash
# Morning: start a session for today's work
sl session start --project my-repo --json
# > session_id: sess_abc123

# Claude joins
sl session join sess_abc123 --name claude-a1b2

# Claude runs a deep scan inside the session
sl /omargate deep --path . --scan-mode full-depth

# Claude needs to test an authenticated page
EMAIL=$(sl ai provision-email --tags "claude,audit-abc" --execute --json | jq -r '.identity.email')
sl audit frontend --url https://app.internal --email "$EMAIL"

# Codex joins and claims the backend refactor work
sl session join sess_abc123 --name codex-c3d4
sl session claim sess_abc123 --work refactor-auth --lease 1800

# Codex locks the files it needs
sl session lock sess_abc123 --path src/auth/

# Session runs all day. Evening: archive.
sl session archive sess_abc123
```
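The lease semantics in the session commands above (claim with a TTL, heartbeat to extend, expire on crash) reduce to plain data plus a clock check. A sketch with hypothetical field names, not Senti's internal representation:

```python
def claim(leases: dict, work: str, agent: str, ttl: float, now: float) -> bool:
    """Claim a unit of work unless another agent holds an unexpired lease."""
    lease = leases.get(work)
    if lease and now < lease["expires_at"] and lease["agent"] != agent:
        return False
    leases[work] = {"agent": agent, "expires_at": now + ttl}
    return True

def heartbeat(leases: dict, work: str, agent: str, ttl: float, now: float) -> None:
    """Extend a held lease; a crashed agent stops heartbeating and loses it."""
    lease = leases.get(work)
    if lease and lease["agent"] == agent:
        lease["expires_at"] = now + ttl

leases: dict = {}
claim(leases, "refactor-auth", "codex-c3d4", ttl=1800, now=0)
# A second agent cannot steal live work...
blocked = claim(leases, "refactor-auth", "claude-a1b2", ttl=1800, now=100)
# ...but picks it up once the holder crashes and the lease expires.
recovered = claim(leases, "refactor-auth", "claude-a1b2", ttl=1800, now=2000)
```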
Single auth. Single session store. Single audit trail. Single kill switch. That is the reason the three products exist in one CLI.
## Dashboard
Every session, scan, audit, and identity shows up in the dashboard. Each session is live: roster, per-agent cost, token usage, latency, finding verdicts (truth / severity / reproducibility). Each has a kill button. Each has HITL verdict controls — a finding does not promote to the final report until a human signs off on truth, severity, and reproducibility.
HITL is a hard gate, not a suggestion. If you want zero-touch automation, you can turn it off per-project. But the default is: humans sign off on findings.
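The promotion rule can be sketched as a pure predicate: a finding reaches the final report only when a human has signed off on every axis, unless the project has opted out. Axis and field names here are illustrative, not the dashboard's real schema:

```python
# Illustrative axis names, matching the three verdict dimensions in the post.
VERDICT_AXES = ("truth", "severity", "reproducibility")

def promote(finding: dict, hitl_enabled: bool = True) -> bool:
    """Promote only when a human has signed off on every verdict axis."""
    if not hitl_enabled:  # per-project zero-touch opt-out
        return True
    verdicts = finding.get("verdicts", {})
    return all(verdicts.get(axis, {}).get("signed_off") for axis in VERDICT_AXES)

finding = {"id": "F-1", "verdicts": {
    "truth": {"signed_off": True},
    "severity": {"signed_off": True},
    "reproducibility": {"signed_off": False},
}}
```

One unsigned axis is enough to hold the finding back, which is what makes the gate hard rather than advisory.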
## Where we are going
The stack is three products today: Omar Gate, AIdenID, Senti. The next directions are:
- **Deeper integration into the PR lifecycle.** Omar Gate as a GitHub App (not just an Action). Session replay embedded in the PR UI.
- **More agents.** So far we have tested against Claude Code, Codex, Jules, Gemini CLI, and our own daemons. More are coming.
- **Better scoring.** Which agent produces better findings on which code? The data is already in the artifact chain; we just need to expose it.
- **More language coverage.** Today Omar Gate is strong on Python, Node, and TypeScript. Go, Rust, and Java need the same persona depth.
## Open questions we are still solving
- **Session templates** — what are the most common session shapes? We shipped five presets (incident-response, pr-review, migration-planning, security-triage, performance-investigation). Are there more?
- **Cost attribution** — when Claude and Codex both contribute to a finding, who is "credited"? Right now we just log both. Is that enough?
- **HITL at scale** — if you ship 500 PRs/week, human verdicts are a bottleneck. Do we auto-promote at some confidence floor? What is the floor?
If you have opinions, email carther@plexaura.io.
## Read further
- [Introducing Senti](/insights/introducing-senti-multi-agent-sessions) — the deep dive on sessions
- [CLI v0.8 Release Notes](/insights/sentinelayer-cli-ships-v08) — what shipped in 0.8
- [Docs home](/docs) — full reference
The stack works because each piece is boring. Omar Gate stops bad code. AIdenID hands agents identities. Senti gives agents a room. The value is not any one of these — the value is that all three are auth-integrated, session-integrated, and audit-integrated in one CLI, and that nobody else has bothered to do this.
That is what makes it a platform instead of three tools.