# Run your first Omar Gate deep scan
End-to-end recipe for running a 13-persona deep scan on a local repo, reviewing findings, and understanding the reconciled report.
- how-to
- omar-gate
- deep-scan
- getting-started
Five-minute recipe for running your first Omar Gate deep scan.
## Prerequisites
- `sentinelayer-cli` installed: `npm install -g sentinelayer-cli@next`
- Authenticated: `sl auth login`
- A repo with at least some JS/TS/Python code to scan
## Step 1 — Run the scan

```shell
cd /path/to/your/repo
sl /omargate deep --path . --json
```
This dispatches all 13 personas against your local codebase, running up to 4 in parallel. Typical runtime: 60-180 seconds.
## Step 2 — Read the live output

The CLI streams each persona as it dispatches:

```text
[nina-patel]    Security     1.2s  2 findings
[maya-volkov]   Backend      1.4s  1 finding
[arjun-mehta]   Performance  1.6s  3 findings
[leila-farouk]  Compliance   1.3s  0 findings (10/10 items inspected)
```
Each persona row reports elapsed time, token count, cost, and finding count. Zero-finding rows include checklist-coverage proof (e.g. `10/10 items inspected`), so a clean result is distinguishable from a skipped check.
## Step 3 — Inspect the reconciled report
Artifacts land under `.sentinelayer/runs/<run-id>/`:
- `REVIEW.json` — deterministic findings (22 rules)
- `REVIEW_AI.json` — per-persona AI findings
- `REVIEW_PERSONAS.json` — persona roster + elapsed + cost
- `REVIEW_RECONCILED.json` — merged, deduped, verdicted
The reconciled report is what you ship to reviewers. Each finding carries a severity, the persona that raised it, an evidence citation, and a recommended fix.
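If you want to triage programmatically, the reconciled report is plain JSON. The sketch below filters findings by severity band; note that the field names (`findings`, `severity`, `persona`) are assumptions about the schema, not documented guarantees — inspect your own `REVIEW_RECONCILED.json` first.

```python
import json

# Inline sample standing in for json.load() of a real
# REVIEW_RECONCILED.json. Field names here are assumptions.
report = {
    "findings": [
        {"severity": "P1", "persona": "nina-patel", "title": "Hardcoded secret"},
        {"severity": "P3", "persona": "arjun-mehta", "title": "N+1 query"},
    ]
}

def high_severity(report, levels=("P0", "P1")):
    """Return only the findings at the given severity levels."""
    return [f for f in report["findings"] if f["severity"] in levels]

urgent = high_severity(report)
print(len(urgent))  # 1 -- only the P1 finding survives the filter
```

In practice you would replace the inline sample with `json.load(open(path))` pointed at the run directory from Step 3.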
## Step 4 — Export SARIF for GitHub

```shell
sl review export --format sarif --run-id <run-id> > omar-review.sarif
```
Upload that to GitHub Code Scanning for PR-annotated findings.
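One way to automate the upload is GitHub's `codeql-action/upload-sarif` action. The workflow below is a sketch only: it assumes the commands above run cleanly in CI, that authentication is already handled, and that you capture the run id into `RUN_ID` yourself (the capture mechanism is not shown here).

```yaml
# Sketch: RUN_ID capture and sl auth in CI are left as assumptions.
name: omar-gate-scan
on: [pull_request]
jobs:
  deep-scan:
    runs-on: ubuntu-latest
    permissions:
      security-events: write   # required for SARIF upload
    steps:
      - uses: actions/checkout@v4
      - run: npm install -g sentinelayer-cli@next
      - run: sl /omargate deep --path . --json
      - run: sl review export --format sarif --run-id "$RUN_ID" > omar-review.sarif
      - uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: omar-review.sarif
```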
## What to expect
- On your first scan, expect roughly 50-200 P3 (low) findings from the deterministic rules. That is signal, not noise.
- Expect 5-20 P2 (medium) findings on a first scan of a non-hardened repo.
- P0/P1 findings should be rare; when they do appear, they are genuinely worth fixing before merge.
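The expectations above translate naturally into a merge gate: tally findings by severity and block only on the rare P0/P1 bands. This is a sketch under the same schema assumption as before (each finding has a `severity` field); it is not a built-in `sl` feature.

```python
from collections import Counter

def severity_counts(findings):
    """Tally findings by severity band (P0..P3)."""
    return Counter(f["severity"] for f in findings)

def gate(findings, blocking=("P0", "P1")):
    """Return False if any blocking-severity finding is present."""
    counts = severity_counts(findings)
    return not any(counts[s] for s in blocking)

# Sample findings; real ones come from REVIEW_RECONCILED.json.
sample = [{"severity": "P3"}, {"severity": "P3"}, {"severity": "P2"}]
print(severity_counts(sample))  # Counter({'P3': 2, 'P2': 1})
print(gate(sample))             # True: nothing at P0/P1, safe to merge
```

Because P3 noise is expected on a first scan, gating only on P0/P1 keeps the check useful without forcing a cleanup sprint up front.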
## Related
- [CLI v0.8 Reference](/docs/cli/v0-8-reference) — all flags
- [Omar Gate + Jules workflow](/docs/cli/omar-baseline-and-jules-deep-audit) — baseline + deep audit pattern
## Structured Answers
### How long does a deep scan take?
Typically 60-180 seconds for a mid-size repo (100-500 files). Cost is usually under $0.10 at default model settings.
### What if my scan shows zero findings and zero cost?
That is the silent-failure signal added in v0.8. Re-run with `--stream` to see per-persona dispatch traces; the most common causes are an expired auth token or an LLM proxy outage.
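The same heuristic can be checked from the run artifacts. The sketch below reads per-persona rows of the shape `REVIEW_PERSONAS.json` might contain; the field names (`findings`, `cost`) are assumptions, not the documented schema.

```python
def looks_like_silent_failure(personas):
    """Heuristic from the FAQ above: zero findings AND zero cost across
    every persona suggests the scan never really ran (expired token,
    proxy outage), not a clean bill of health."""
    total_findings = sum(p.get("findings", 0) for p in personas)
    total_cost = sum(p.get("cost", 0.0) for p in personas)
    return total_findings == 0 and total_cost == 0.0

# Field names are assumptions about REVIEW_PERSONAS.json.
healthy = [{"findings": 2, "cost": 0.01}, {"findings": 0, "cost": 0.008}]
broken = [{"findings": 0, "cost": 0.0}, {"findings": 0, "cost": 0.0}]
print(looks_like_silent_failure(healthy))  # False
print(looks_like_silent_failure(broken))   # True
```

Note that a healthy run can still have zero findings for a persona; it is the combination of zero findings and zero spend across the whole roster that is suspicious.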