# Sentinelayer Documentation (LLM Full Export)

This file is generated from src/docs/content.ts to keep machine retrieval aligned with live docs.

Updated: 2026-03-21

## Exit Codes

Deterministic exit code map for CI decisions.

URL: https://sentinelayer.com/docs/api-reference/exit-codes
Section: API Reference

Omar Gate uses deterministic exit codes so CI pipelines can make decisions without parsing output text.

## Exit Code Reference

| Code | Name | Description | CI Action |
|------|------|-------------|-----------|
| `0` | Pass | No findings at or above severity gate. | Allow merge. |
| `1` | Blocked | One or more findings at or above the configured severity gate. | Block merge. Remediate findings. |
| `2` | Config Error | Configuration or runtime context issue (missing token, invalid input, API unreachable). | Fix configuration. Do not retry without changes. |
| `10` | LLM Failure | LLM provider returned an error or timed out. Behavior depends on `llm_failure_policy`. | If policy is `block`: treat as blocked. If `pass`: allow merge. |
| `12` | Fork Blocked | PR originated from a fork and `fork_policy` is set to `block`. | Reject fork PR or change fork policy. |
| `13` | Approval Required | Scan cost exceeds threshold and `approval_mode` is `manual`. | Manually approve the scan or adjust cost threshold. |
## Using Exit Codes in Workflows

~~~yaml
- name: Run Omar Gate
  id: omar
  uses: mrrCarter/sentinelayer-v1-action@v1
  continue-on-error: true
  with:
    github_token: ${{ secrets.GITHUB_TOKEN }}
    openai_api_key: ${{ secrets.OPENAI_API_KEY }}
    severity_gate: P1

- name: Handle gate result
  run: |
    EXIT_CODE=${{ steps.omar.outputs.exit_code || '0' }}
    case $EXIT_CODE in
      0) echo "Gate passed" ;;
      1) echo "Findings block merge" && exit 1 ;;
      2) echo "Config error — check secrets and inputs" && exit 1 ;;
      10) echo "LLM failure — check provider status" ;;
      12) echo "Fork PR blocked by policy" && exit 1 ;;
      13) echo "Manual approval required" && exit 1 ;;
      *) echo "Unexpected exit code: $EXIT_CODE" && exit 1 ;;
    esac
~~~

### Structured Q&A

**Q: What does exit code 2 indicate?**
Configuration or runtime context failure, not a policy finding block. Check that required secrets are set and inputs are valid.

**Q: What is the difference between exit code 1 and exit code 10?**
Exit code 1 means policy findings were detected above the severity gate. Exit code 10 means the LLM provider failed (error or timeout) and the configured `llm_failure_policy` determines whether this blocks the merge.

**Q: How should CI handle exit code 12?**
Exit code 12 means a fork PR was blocked by the `fork_policy` setting. Either change `fork_policy` to `allow` or `report-only`, or reject the fork PR.

---

## API Reference Introduction

Machine interfaces exposed by action outputs and artifacts.

URL: https://sentinelayer.com/docs/api-reference/introduction
Section: API Reference

Machine surfaces:

- workflow outputs
- artifact files
- check-run status
- optional managed mode interfaces

### Structured Q&A

**Q: What is the canonical finding stream?**
FINDINGS.jsonl is the primary machine-readable findings source.

---

## Outputs and Artifacts

Stable outputs and artifact names for automation.
URL: https://sentinelayer.com/docs/api-reference/outputs-and-artifacts
Section: API Reference

## Workflow Outputs

These are available via `steps.<step_id>.outputs` in subsequent workflow steps.

| Output | Type | Description |
|--------|------|-------------|
| `gate_status` | string | Final gate result: `pass`, `block`, or `error`. |
| `run_id` | string | Unique identifier for this scan run. Use as correlation key. |
| `estimated_cost_usd` | string | Estimated LLM spend for this run (e.g. `"0.0042"`). |
| `findings_count` | integer | Total number of findings produced. |
| `critical_count` | integer | Number of P0/P1 findings that triggered the gate. |

## Artifacts

Uploaded to the GitHub Actions artifact store. Download via the Actions UI or `actions/download-artifact`.

| Artifact | Format | Description |
|----------|--------|-------------|
| `FINDINGS.jsonl` | JSON Lines | One finding per line. Primary machine-readable output. |
| `PACK_SUMMARY.json` | JSON | Scan metadata, severity counts, and gate decision. |
| `AUDIT_REPORT.md` | Markdown | Human-readable narrative report for full-repo scans. |
| `REVIEW_BRIEF.md` | Markdown | Condensed review summary posted as PR comment. |
## FINDINGS.jsonl Schema

Each line is a self-contained JSON object:

~~~json
{
  "finding_id": "f-abc123",
  "run_id": "run-20260223-001",
  "severity": "P1",
  "category": "security",
  "subcategory": "sql-injection",
  "title": "Unsanitized user input in SQL query",
  "description": "User-supplied value is interpolated directly into a SQL string without parameterization.",
  "file": "src/db/queries.py",
  "line_start": 42,
  "line_end": 42,
  "snippet": "cursor.execute(f\"SELECT * FROM users WHERE id = {user_id}\")",
  "remediation": "Use parameterized queries: cursor.execute(\"SELECT * FROM users WHERE id = %s\", (user_id,))",
  "confidence": 0.95,
  "agent": "security-scanner",
  "spec_reference": "security-checklist.input-validation"
}
~~~

### Finding Fields

| Field | Type | Description |
|-------|------|-------------|
| `finding_id` | string | Unique identifier for this finding. |
| `run_id` | string | Matches the workflow output `run_id`. |
| `severity` | string | `P0`, `P1`, `P2`, `P3`, or `info`. |
| `category` | string | Top-level domain: `security`, `quality`, `spec-compliance`, `supply-chain`. |
| `subcategory` | string | Specific issue type (e.g. `sql-injection`, `unused-import`). |
| `title` | string | One-line summary of the finding. |
| `description` | string | Detailed explanation. |
| `file` | string | Relative path to the affected file. |
| `line_start` | integer | Starting line number. |
| `line_end` | integer | Ending line number. |
| `snippet` | string | Relevant code excerpt. |
| `remediation` | string | Suggested fix or action. |
| `confidence` | number | 0.0 to 1.0. Model confidence for AI findings; 1.0 for deterministic. |
| `agent` | string | Which review agent produced this finding. |
| `spec_reference` | string | Dot-path into the project spec this finding relates to. |
## PACK_SUMMARY.json Schema

~~~json
{
  "run_id": "run-20260223-001",
  "repo": "acme/backend",
  "branch": "feature/auth",
  "commit_sha": "a1b2c3d",
  "gate_status": "block",
  "severity_counts": { "P0": 0, "P1": 2, "P2": 3, "P3": 5, "info": 1 },
  "total_findings": 11,
  "scan_mode": "pr-diff",
  "estimated_cost_usd": "0.0042",
  "duration_ms": 8420,
  "agents_used": ["security-scanner", "spec-compliance", "quality-reviewer"],
  "timestamp": "2026-02-23T08:15:30Z"
}
~~~

## Accessing Artifacts in CI

~~~yaml
- name: Download findings
  uses: actions/download-artifact@v4
  with:
    name: omar-gate-findings

- name: Count critical findings
  run: |
    CRITICAL=$(grep -c '"severity": "P1"' FINDINGS.jsonl || true)
    echo "Critical findings: $CRITICAL"
~~~

### Structured Q&A

**Q: What should SIEM ingest first?**
Ingest FINDINGS.jsonl and correlate by run_id. Each line is a self-contained finding object with severity, category, file, line, and remediation fields.

**Q: How do I parse FINDINGS.jsonl?**
Each line is a valid JSON object. Read line-by-line and parse each as JSON. In Python: `[json.loads(line) for line in open('FINDINGS.jsonl')]`. In Node: `fs.readFileSync('FINDINGS.jsonl','utf8').split('\n').filter(Boolean).map(JSON.parse)`.

**Q: What is the difference between AUDIT_REPORT.md and REVIEW_BRIEF.md?**
AUDIT_REPORT.md is a detailed narrative for full-repo scans. REVIEW_BRIEF.md is a condensed summary used as the PR comment body for pull request reviews.

---

## Runtime Runs API

REST and stream interfaces for orchestrated runs, approvals, artifacts, and Omar loop execution.

URL: https://sentinelayer.com/docs/api-reference/runtime-runs-api
Section: API Reference

Runtime APIs expose deterministic run lifecycle and replayable evidence.
## Core endpoints

- `POST /api/v1/runs` create run
- `GET /api/v1/runs/{run_id}/status` fetch status and summaries
- `POST /api/v1/runs/{run_id}/cancel` cancel run
- `GET /api/v1/runs/{run_id}/events` SSE event stream
- `GET /api/v1/runs/{run_id}/events/list` paged poll endpoint for deterministic replay
- `WS /api/v1/runs/{run_id}/terminal` terminal stream with resume cursor
- `POST /api/v1/runs/{run_id}/approvals` approve or deny gated actions
- `GET /api/v1/runs/{run_id}/evidence` export run evidence bundle
- `GET /api/v1/runtime/kpi?compare_window_days=&baseline_window_days=` compare and baseline KPI windows
- `POST /api/v1/runs/git/pr/checkpoints` execute git/PR checkpoints across multiple run IDs
- `GET /api/v1/models/catalog` account-aware model catalog and ranking
- `POST /api/v1/loops/omar` start remediation loop
- `GET /api/v1/loops/omar/{loop_id}` inspect loop state and exit reason

## Event contract

All runtime events include:

- stable identifiers and timestamps
- event kind, actor, and normalized payload
- usage/accounting metadata for audit reporting

## Typical event families

- lifecycle transitions (start, progress, complete, failure)
- tool execution summaries (call + result)
- output streaming updates (terminal and narrative)
- governance checkpoints (approval requested, approved, denied)

## Omar loop stop semantics

The loop terminates only when one of the following is true:

- clean `P0-P2` and green quality gates
- explicit budget or iteration ceiling reached
- policy or approval checkpoint denies continuation

### Structured Q&A

**Q: How do I stream run output in real time?**
Subscribe to SSE events at `/api/v1/runs/{run_id}/events` and open the terminal WebSocket at `/api/v1/runs/{run_id}/terminal` for stdout/stderr chunks.

**Q: What should I use for audit exports?**
Use `/api/v1/runs/{run_id}/evidence` for downloadable run evidence with timeline, integrity chain metadata, redaction status, findings summary, and artifacts.
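The run-scoped endpoints above share a common path shape, so client code can centralize path construction. A minimal sketch; the paths mirror the core endpoint list, while the base URL, auth headers, and any response shapes are deployment-specific and omitted here:

```python
# Sketch: path helpers for the Runtime Runs API.
# Endpoint paths follow the core endpoint list in this section;
# anything beyond the paths themselves is an assumption.

def run_path(run_id: str, resource: str = "") -> str:
    """Build a run-scoped API path.

    resource may be e.g. 'status', 'cancel', 'events',
    'events/list', 'approvals', or 'evidence'.
    """
    base = f"/api/v1/runs/{run_id}"
    return f"{base}/{resource}" if resource else base


def loop_path(loop_id: str = "") -> str:
    """Build an Omar loop path: POST without an id to start a loop,
    GET with a loop_id to inspect its state and exit reason."""
    base = "/api/v1/loops/omar"
    return f"{base}/{loop_id}" if loop_id else base
```

With a helper like this, polling `run_path(run_id, "events/list")` gives the paged endpoint for deterministic replay, while `run_path(run_id, "events")` is the live SSE stream.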
---

## Inputs Reference

Structured list of common action inputs and defaults.

URL: https://sentinelayer.com/docs/configuration/inputs-reference
Section: Configuration

All inputs are set under the `with:` block in your GitHub Actions workflow YAML.

## Core Inputs

| Input | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `github_token` | string | Yes | — | GitHub token with `contents: read`, `pull-requests: write`, `checks: write` permissions. |
| `openai_api_key` | string | No | — | OpenAI API key for AI-powered review. Omit for deterministic-only mode. |
| `sentinelayer_api_key` | string | No | — | API key for Sentinelayer managed mode. Required only if using managed model routing. |
| `severity_gate` | string | No | `P1` | Merge-blocking threshold. One of: `P0`, `P1`, `P2`, `none`. |
| `scan_mode` | string | No | `pr-diff` | Scope of analysis. One of: `pr-diff`, `full-repo`, `changed-files`. |

## Policy Inputs

| Input | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `fork_policy` | string | No | `block` | How to handle PRs from forks. One of: `block`, `allow`, `report-only`. |
| `llm_failure_policy` | string | No | `block` | Behavior when LLM call fails. One of: `block`, `pass`, `report-only`. |
| `approval_mode` | string | No | `auto` | Whether to require manual cost approval. One of: `auto`, `manual`. |
| `report_only` | boolean | No | `false` | When true, findings are posted as comments but never block the merge. |

## Cost and Rate Controls

| Input | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `max_daily_scans` | integer | No | `100` | Maximum scans per repository per day. Prevents runaway reruns. |
| `min_scan_interval_minutes` | integer | No | `0` | Minimum minutes between scans on the same PR. |
| `require_cost_confirmation` | boolean | No | `false` | Require explicit approval before scans exceeding cost threshold. |

## Telemetry Inputs

| Input | Type | Required | Default | Description |
|-------|------|----------|---------|-------------|
| `telemetry_tier` | integer | No | `1` | Telemetry level: `0` off, `1` aggregate counts, `2` metadata, `3` artifact upload. |

## Example

~~~yaml
- uses: mrrCarter/sentinelayer-v1-action@v1
  with:
    github_token: ${{ secrets.GITHUB_TOKEN }}
    openai_api_key: ${{ secrets.OPENAI_API_KEY }}
    severity_gate: P1
    scan_mode: pr-diff
    fork_policy: block
    llm_failure_policy: block
    report_only: false
    telemetry_tier: 1
~~~

### Structured Q&A

**Q: Can Sentinelayer run without a provider key?**
Yes. Deterministic scanning can run with deterministic-only fallback policy. Omit the `openai_api_key` input entirely.

**Q: What is the difference between scan_mode pr-diff and full-repo?**
`pr-diff` analyzes only the changed lines in the pull request for fast CI feedback. `full-repo` scans the entire repository and is typically used for nightly audits.

**Q: What happens when llm_failure_policy is set to block?**
If the LLM provider returns an error or times out, the gate fails closed and the PR is blocked. Use `pass` to fail open or `report-only` to post a warning without blocking.

---

## LLM Modes

BYO provider, managed mode, and deterministic-only options.

URL: https://sentinelayer.com/docs/configuration/llm-modes
Section: Configuration

Execution options:

- BYO provider key mode
- Sentinelayer-managed mode
- deterministic-only mode

Select based on governance, cost, and environment constraints.

### Structured Q&A

**Q: What if the primary model path fails?**
Fallback behavior follows your configured failure policy and fallback model path.

---

## Configuration Overview

Map of auth, scan mode, gate controls, and telemetry settings.
URL: https://sentinelayer.com/docs/configuration/overview
Section: Configuration

Core control areas:

- authentication and model mode
- scan scope and depth
- severity gate and fork policy
- cost/rate governance
- telemetry and consent

### Structured Q&A

**Q: Which controls matter most first?**
Severity gate, scan mode, and `llm_failure_policy` drive most behavior changes.

---

## Rate Limits and Cost Controls

Use scan caps and approval thresholds to control spend.

URL: https://sentinelayer.com/docs/configuration/rate-limits-and-costs
Section: Configuration

Primary controls:

- `max_daily_scans`
- `min_scan_interval_minutes`
- `require_cost_confirmation`
- `approval_mode`

### Structured Q&A

**Q: Why enforce scan caps?**
Scan caps prevent runaway reruns and keep model spend predictable.

---

## Severity Gates

P0/P1/P2/none merge-blocking semantics.

URL: https://sentinelayer.com/docs/configuration/severity-gates
Section: Configuration

Gate options:

- P0: critical only
- P1: critical and high
- P2: critical, high, medium
- none: findings do not block merge

### Structured Q&A

**Q: What should most teams use first?**
P1 is the recommended starting point for practical enforcement.

---

## Telemetry and Consent

Tiered telemetry model and consent controls.

URL: https://sentinelayer.com/docs/configuration/telemetry-and-consent
Section: Configuration

Telemetry tiers:

- 0 off
- 1 aggregate
- 2 metadata
- 3 artifact upload

### Structured Q&A

**Q: Can telemetry be disabled fully?**
Yes. Use telemetry off or tier 0 settings.

---

## Monorepo Path Filter Example

Path-aware scan strategy for large multi-service repos.

URL: https://sentinelayer.com/docs/examples/monorepo-path-filter
Section: Examples

Combine workflow path filters and .sentinelayerignore to control cost and noise in monorepo environments.

### Structured Q&A

**Q: Is ignore-file-only enough?**
No. Combine workflow-level and scanner-level scoping.

---

## Nightly Audit Example

Scheduled deep scan template for broader coverage.
URL: https://sentinelayer.com/docs/examples/nightly-audit
Section: Examples

Use scheduled deep scans for full-repo drift detection while keeping PR scans fast.

### Structured Q&A

**Q: Why use nightly deep scans?**
They expand coverage without slowing normal pull-request feedback loops.

---

## Report-only Mode Example

Temporary non-blocking rollout profile.

URL: https://sentinelayer.com/docs/examples/report-only-mode
Section: Examples

Set `severity_gate` to `none` for onboarding windows, then move to enforcement by a planned date.

### Structured Q&A

**Q: Should report-only be permanent?**
No. Treat it as a bounded adoption phase.

---

## Builder Studio Runtime

Live audit runtime with streaming actions, terminal output, and file-context aware chat.

URL: https://sentinelayer.com/docs/features/builder-studio-runtime
Section: Features

Builder Studio is Sentinelayer's live audit workspace for repo-aware runs.

## Runtime UX

- run timer from `run_started` to completion
- action badges (`READ`, `SEARCH`, `EDIT`, `TEST`, `SCAN`, `SUBAGENT`, `RECONCILE`, `PR`)
- collapsible timeline with nested activity rows
- terminal stream for command output and failure diagnostics
- auto-collapsed completed traces with final run summary pinned

## Chat and context behavior

- typing indicator replaces generic processing labels
- selected files are represented as file-context chips in chat
- context chips track loaded status so you can confirm what the model can reference
- footer reminder: "AI can make mistakes. Verify important changes."

## Runtime safety modes

- `advice` and `audit_readonly` for non-mutating analysis
- `edit_gated` and `autonomous_gated` for guarded changes and PR flow
- approval checkpoints are required before push/PR actions in gated modes

### Structured Q&A

**Q: How do I know what the AI is doing during a run?**
Use the live timeline badges and terminal stream. Every meaningful action is emitted as an event and shown in run order.
**Q: Can Builder Studio see the file I selected?**
Yes, when the file is attached as a context chip and marked loaded. The chip state is the source of truth for model-visible file context.

---

## Git/PR Checkpoint Automation

Single-run and batch checkpoint automation for commit/PR flow with approval gating.

URL: https://sentinelayer.com/docs/features/git-pr-checkpoint-automation
Section: Features

Sentinelayer supports deterministic git checkpoint automation for write-capable runtime flows.

## Single run

- `POST /api/v1/runs/{run_id}/git/pr`
- validates approval checkpoint before any write-capable action
- can commit only, or push/open PR when enabled

## Batch execution

- `POST /api/v1/runs/git/pr/checkpoints`
- processes multiple run IDs with per-run success/error envelopes
- supports fail-fast or continue-on-error execution modes

## Guardrails

- no automation for non-write-capable modes
- explicit checkpoint approval required
- deterministic error codes for missing approval, missing workspace, or tool failures

### Structured Q&A

**Q: Can one call process multiple repos?**
Yes. Batch checkpoint execution accepts multiple run IDs and returns deterministic per-run outcomes.

**Q: What happens when one run fails in batch mode?**
Choose continue-on-error to keep going, or fail-fast to stop immediately after the first failure.

---

## Prompt Builder

AI-powered spec generation with streaming output, repo context, and security-first defaults.

URL: https://sentinelayer.com/docs/features/prompt-builder
Section: Features

The Prompt Builder generates structured project specs, AI builder prompts, Omar Gate configurations, and step-by-step build guides from a natural language description.
## Capabilities

- **SSE Streaming**: Real-time output with artifact progress tracking
- **Multi-model**: Choose between OpenAI (Codex), Anthropic (Claude), and Google (Gemini)
- **Repo Context**: Connect a GitHub repository for codebase-aware generation
- **URL Scan Integration**: Inject security scan findings into the generation context
- **File Attachments**: Upload existing specs, configs, or documentation for context
- **Advanced Settings**: Generation mode (Quick/Detailed/Enterprise), audience level, project type, platform, tech stack

## Output Artifacts

1. **Spec Sheet**: Structured project specification with architecture, flows, and requirements
2. **AI Builder Prompt**: Ready-to-paste prompt for Cursor, Claude Code, Copilot, or other AI tools
3. **Omar Gate YAML**: Pre-configured security gate with project-specific rules
4. **Build Guide**: Phased implementation plan with acceptance criteria

## Workflow

1. Describe your project in the text area
2. Optionally: connect a repo, add a URL scan, select tech stack
3. Click Generate — streaming output appears in real time
4. Review and refine with the Personalize feature
5. Export to Jira, save to dashboard, or copy artifacts

### Structured Q&A

**Q: What models does the Prompt Builder support?**
OpenAI (GPT-5.3 Codex, GPT-5.2 Codex), Anthropic (Claude Opus 4.6, Claude Sonnet 4.6), and Google (Gemini 3.1 Pro, Gemini 2.5 Flash).

**Q: Can I stop generation mid-stream?**
Yes. Click the Stop button during streaming to abort and preserve partial output.

---

## Runtime Insights Dashboard

Dedicated KPI and evidence dashboard for deterministic run operations and investor/demo readiness.

URL: https://sentinelayer.com/docs/features/runtime-insights-dashboard
Section: Features

Runtime Insights is a dedicated dashboard for run-level trust metrics and evidence replay.
## Included views

- compare-window vs baseline closure trend
- time-to-clean and cost-per-clean indicators
- deterministic reproducibility tracking
- evidence lookup by run ID for replay and export validation

## Why this page exists

- runtime panel metrics are useful in-session, but operator reporting needs a focused surface
- investor and diligence demos require clear KPI snapshots without opening raw run timelines
- product operators need one page to spot regressions quickly

### Structured Q&A

**Q: How is Runtime Insights different from the in-run panel?**
The run panel focuses on one active session. Runtime Insights is a cross-run KPI and evidence surface for operational reporting.

**Q: Can I compare current and baseline windows?**
Yes. Runtime Insights supports compare and baseline windows for closure-rate and time-to-clean trend analysis.

---

## URL Scanner

Comprehensive security, performance, and compliance analysis for any public URL.

URL: https://sentinelayer.com/docs/features/url-scanner
Section: Features

The Sentinelayer URL Scanner performs deep analysis of any public URL across security, performance, and compliance dimensions. Results feed directly into Prompt Builder for automated remediation specs.

## How it works

1. Submit any public URL at [/scan](/scan) or via the Prompt Builder
2. The scanner runs 10+ deterministic check categories in parallel
3. An LLM synthesizes findings into a prioritized narrative
4. Results are available as structured JSON, markdown artifact, and in-app UI

## Check Categories

### Security Headers (15+ checks)

Strict-Transport-Security, Content-Security-Policy, X-Frame-Options, X-Content-Type-Options, Referrer-Policy, Permissions-Policy, Cross-Origin-Opener-Policy, Cross-Origin-Embedder-Policy, Cross-Origin-Resource-Policy, X-XSS-Protection (deprecated check), Cache-Control, X-Permitted-Cross-Domain-Policies, Content-Security-Policy-Report-Only, and more.
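Header-presence checks of this kind are simple to approximate locally. A minimal sketch covering only a few of the headers listed above, with severity labels following the document's classification (missing HSTS or CSP is High, missing Referrer-Policy is Medium); this is not the scanner's actual implementation:

```python
# Sketch: flag a small subset of the security headers above when absent.
# The header list and severities here are a simplified illustration,
# not the scanner's full 15+ check set.

REQUIRED_HEADERS = {
    "strict-transport-security": "high",
    "content-security-policy": "high",
    "x-frame-options": "medium",
    "x-content-type-options": "medium",
    "referrer-policy": "medium",
}


def evaluate_headers(headers: dict) -> list:
    """Return one finding dict per missing required header.

    Header names are compared case-insensitively, since HTTP
    header names are case-insensitive on the wire.
    """
    present = {name.lower() for name in headers}
    return [
        {"header": name, "severity": severity, "issue": "missing"}
        for name, severity in REQUIRED_HEADERS.items()
        if name not in present
    ]
```

A response carrying only `Strict-Transport-Security` would yield four findings, one per remaining required header.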
### TLS / Certificate

Certificate validity, expiration window, protocol version (TLS 1.2+), cipher strength, certificate chain completeness, and HSTS preload eligibility.

### Cookie Security

HttpOnly, Secure, SameSite attributes, `__Host-` / `__Secure-` prefix compliance, expiration policy, and third-party cookie exposure.

### Secrets & Credential Exposure

Scans page source and linked resources for API keys, tokens, private keys, AWS credentials, database connection strings, and other sensitive patterns.

### Exposed Files & Paths

Probes for common sensitive paths: `.env`, `.git/config`, `wp-config.php`, `/debug`, `/admin`, `/api/docs`, `/swagger`, backup files, and directory listings.

### Open Redirect Detection

Tests for unvalidated redirect parameters (`?url=`, `?redirect=`, `?next=`) that could be exploited for phishing.

### Mixed Content

Detects HTTP resources loaded on HTTPS pages — scripts, stylesheets, images, iframes — that weaken transport security.

### PageSpeed / Lighthouse (Core Web Vitals)

Runs Google PageSpeed Insights API for both mobile and desktop strategies. Reports Largest Contentful Paint (LCP), Cumulative Layout Shift (CLS), Time to Interactive (TTI), Speed Index, First Contentful Paint (FCP), and Total Blocking Time (TBT). Includes performance, accessibility, best practices, and SEO scores.

### LLM Synthesis

Claude or GPT synthesizes all deterministic findings into a prioritized executive summary with severity classification, remediation guidance, and CI/CD integration recommendations.
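For the Core Web Vitals reported by the PageSpeed checks above, a sketch of how two of the metrics map to pass/fail bands. The thresholds follow Google's published Core Web Vitals guidance (LCP: good at or under 2.5 s, poor over 4 s; CLS: good at or under 0.1, poor over 0.25) and are context for readers, not part of the scanner's output contract:

```python
# Sketch: classify two Core Web Vitals per Google's published bands.
# These thresholds are general guidance, not a Sentinelayer contract.

def classify_lcp(seconds: float) -> str:
    """Largest Contentful Paint band for a value in seconds."""
    if seconds <= 2.5:
        return "good"
    return "needs-improvement" if seconds <= 4.0 else "poor"


def classify_cls(score: float) -> str:
    """Cumulative Layout Shift band for a unitless layout-shift score."""
    if score <= 0.1:
        return "good"
    return "needs-improvement" if score <= 0.25 else "poor"
```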
## Severity Classification

- **Critical**: Immediate exploit risk (exposed credentials, open redirects to malicious targets)
- **High**: Significant security gap (missing HSTS, no CSP, expired certificate)
- **Medium**: Best-practice violation (missing Referrer-Policy, weak cookie attributes)
- **Low**: Informational or optimization (deprecated headers, minor Lighthouse warnings)

## Tiers

- **Free**: Full scan with all check categories, rate-limited
- **Authenticated**: Higher rate limits, scan history, claim scans to dashboard
- **Pro**: Unlimited scans, priority queue, API access

## Integration with Prompt Builder

After a scan completes, click "Generate Spec from Findings" to automatically create a security hardening spec with Omar Gate configuration. The Prompt Builder pre-fills with scan findings and produces remediation-ready artifacts.

### Structured Q&A

**Q: What does the Sentinelayer URL Scanner check?**
It checks security headers (15+), TLS/certificate, cookies, secrets exposure, exposed files, open redirects, mixed content, PageSpeed/Lighthouse (Core Web Vitals), and synthesizes findings with an LLM.

**Q: Is the URL Scanner free?**
Yes. The free tier includes full scans with all check categories. Authenticated users get higher rate limits and scan history.

**Q: Can I generate a spec from URL scan findings?**
Yes. After a scan completes, the Prompt Builder can auto-generate a security hardening spec with Omar Gate configuration based on findings.

---

## Branch Protection

Require Omar Gate status checks for enforceable merge controls.

URL: https://sentinelayer.com/docs/getting-started/branch-protection
Section: Getting Started

Configure branch protection so Omar Gate controls merge policy.

## Required settings

- require status checks before merging
- require the check named Omar Gate
- enforce on protected branches

### Structured Q&A

**Q: Does severity none still require branch checks?**
Yes.
The check still executes; severity none only changes finding-based blocking behavior.

---

## First PR Triage

How to debug and fix the first blocked PR after onboarding.

URL: https://sentinelayer.com/docs/getting-started/first-pr-triage
Section: Getting Started

Use this sequence:

1. inspect gate_status
2. read REVIEW_BRIEF.md
3. validate code location
4. remediate and rerun

## Example blocked output

![Example Omar Gate blocked output with severity table and top findings](/docs/examples/omar-gate-blocked-example.jpeg)

### Structured Q&A

**Q: What artifact should be read first?**
Start with REVIEW_BRIEF.md, then inspect FINDINGS.jsonl.

---

## Install Workflow

Detailed setup for BYO keys and Sentinelayer-managed model routing.

URL: https://sentinelayer.com/docs/getting-started/install-workflow
Section: Getting Started

Production setup checklist:

## Required

- workflow file
- github token input
- one model access path

## Optional

- managed mode token
- fallback provider configuration

## Validation

- branch protection requires Omar Gate
- run outputs and artifacts are present

### Structured Q&A

**Q: Do I need both provider key and Sentinelayer token?**
No. Either can be used independently, with optional dual-mode fallback.

---

## Quickstart

Minimal workflow to run Sentinelayer on pull requests in minutes.

URL: https://sentinelayer.com/docs/getting-started/quickstart
Section: Getting Started

Use this flow for immediate setup.

## Minimal workflow

~~~yaml
name: Security Review

on:
  pull_request:

permissions:
  contents: read
  pull-requests: write
  checks: write

jobs:
  omar:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: mrrCarter/sentinelayer-v1-action@v1
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          openai_api_key: ${{ secrets.OPENAI_API_KEY }}
          severity_gate: P1
~~~

## First-run checks

1. check run exists
2. comment appears
3. artifacts are available

### Structured Q&A

**Q: Can quickstart run without extra infrastructure?**
Yes. It runs in your existing GitHub Actions runner.
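The triage sequence above ends in FINDINGS.jsonl. A minimal parsing sketch using the field names from the Outputs and Artifacts schema; the gate-comparison logic here is illustrative, not Omar Gate's actual decision code:

```python
# Sketch: read FINDINGS.jsonl and summarize it for triage.
# Field names ("severity") follow the documented schema; the
# blocks_merge() gate logic is an illustration of severity-gate
# semantics, not the action's implementation.
import json
from collections import Counter


def load_findings(path: str = "FINDINGS.jsonl") -> list:
    """Parse FINDINGS.jsonl: one self-contained JSON object per line."""
    with open(path, encoding="utf-8") as fh:
        return [json.loads(line) for line in fh if line.strip()]


def severity_counts(findings: list) -> Counter:
    """Tally findings by severity label (P0..P3, info)."""
    return Counter(f["severity"] for f in findings)


def blocks_merge(findings: list, gate: str = "P1") -> bool:
    """True if any finding is at or above the severity gate."""
    order = ["P0", "P1", "P2", "P3", "info"]
    if gate == "none":
        return False
    blocking = set(order[: order.index(gate) + 1])
    return any(f["severity"] in blocking for f in findings)
```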
---

## GitHub Actions Integration

Reference pattern for checks, outputs, and artifact routing.

URL: https://sentinelayer.com/docs/integrations/github-actions
Section: Integrations

Recommended flow:

1. run Omar Gate
2. read outputs
3. upload artifacts
4. route to downstream systems

### Structured Q&A

**Q: Should automations parse PR comment text?**
No. Use structured outputs and artifacts for deterministic integration.

---

## Jira Export

Map findings into issue workflows by severity and domain.

URL: https://sentinelayer.com/docs/integrations/jira-export
Section: Integrations

Map finding severity to issue priority and include run_id, file, line, category, and remediation fields.

### Structured Q&A

**Q: Should every finding open a ticket?**
Usually no. Group related findings by component to avoid ticket overload.

---

## Slack Alerts

Severity-aware alert routing without channel spam.

URL: https://sentinelayer.com/docs/integrations/slack-alerts
Section: Integrations

Route high-impact blocked runs to response channels, and send lower-severity updates as digest summaries.

### Structured Q&A

**Q: How do we avoid alert fatigue?**
Throttle repeated events and use severity-based routing.

---

## Webhook Event Design

Suggested payload model for downstream event processing.

URL: https://sentinelayer.com/docs/integrations/webhook-event-design
Section: Integrations

Omar Gate can emit webhook events to notify downstream systems about scan results. Configure a webhook URL and secret in your workflow or Sentinelayer dashboard.

## Event Types

| Event Type | Trigger | Description |
|------------|---------|-------------|
| `scan.completed` | Every successful scan | Scan finished and gate decision was made (pass or block). |
| `scan.blocked` | Findings above gate threshold | Subset of `scan.completed` where `gate_status` is `block`. |
| `scan.failed` | Runtime or config error | Scan could not complete due to configuration, provider, or infrastructure issues. |
| `scan.skipped` | Rate limit or cooldown hit | Scan was skipped due to `max_daily_scans` or `min_scan_interval_minutes`. |

## Common Payload Fields

Every webhook event includes these fields:

| Field | Type | Description |
|-------|------|-------------|
| `event_type` | string | One of the event types above. |
| `run_id` | string | Unique scan identifier. Use as idempotency key. |
| `timestamp` | string | ISO 8601 timestamp of the event. |
| `repo` | string | Full repository name (e.g. `acme/backend`). |
| `branch` | string | Branch or PR head ref that was scanned. |
| `commit_sha` | string | Commit SHA that was analyzed. |
| `pr_number` | integer or null | Pull request number, if applicable. |
| `gate_status` | string | `pass`, `block`, or `error`. |
| `severity_counts` | object | Counts by severity: `{ "P0": 0, "P1": 2, "P2": 3, "P3": 5, "info": 1 }`. |
| `total_findings` | integer | Total number of findings. |
| `artifact_links` | object | URLs to download artifacts from the Actions run. |
| `estimated_cost_usd` | string | Estimated LLM cost for this scan. |
## Example: scan.completed

~~~json
{
  "event_type": "scan.completed",
  "run_id": "run-20260223-001",
  "timestamp": "2026-02-23T08:15:30Z",
  "repo": "acme/backend",
  "branch": "feature/auth",
  "commit_sha": "a1b2c3d4e5f6",
  "pr_number": 142,
  "gate_status": "block",
  "severity_counts": { "P0": 0, "P1": 2, "P2": 3, "P3": 5, "info": 1 },
  "total_findings": 11,
  "artifact_links": {
    "findings": "https://api.github.com/repos/acme/backend/actions/artifacts/12345/zip",
    "summary": "https://api.github.com/repos/acme/backend/actions/artifacts/12346/zip"
  },
  "estimated_cost_usd": "0.0042"
}
~~~

## Example: scan.failed

~~~json
{
  "event_type": "scan.failed",
  "run_id": "run-20260223-002",
  "timestamp": "2026-02-23T09:01:12Z",
  "repo": "acme/backend",
  "branch": "main",
  "commit_sha": "b2c3d4e5f6a7",
  "pr_number": null,
  "gate_status": "error",
  "error_code": 2,
  "error_message": "Missing required input: openai_api_key",
  "severity_counts": {},
  "total_findings": 0,
  "artifact_links": {},
  "estimated_cost_usd": "0.00"
}
~~~

## Webhook Security

Webhooks include an `X-Sentinelayer-Signature` header containing an HMAC-SHA256 signature of the payload body using your webhook secret. Always verify this signature before processing events.

## Routing Recommendations

- **scan.blocked** → Slack incident channel or PagerDuty
- **scan.completed** → SIEM ingestion or dashboard update
- **scan.failed** → Ops alert channel for configuration review
- **scan.skipped** → Daily digest or ignore

### Structured Q&A

**Q: What should be the event idempotency key?**
Use run_id as the primary idempotency key. Each scan produces exactly one run_id that is unique across all events for that scan.

**Q: How do I verify webhook authenticity?**
Check the X-Sentinelayer-Signature header against an HMAC-SHA256 of the raw payload body using your webhook secret. Reject any request where the signature does not match.

**Q: Which event should trigger a Slack alert?**
Route scan.blocked to your incident response channel for immediate attention.
Use scan.completed for informational updates and scan.failed for ops-level configuration alerts. --- ## Introduction Sentinelayer docs for Omar Gate, platform architecture, and agent-first discoverability. URL: https://sentinelayer.com/docs/introduction Section: Getting Started Sentinelayer is an audit-first AI engineering platform. It combines Omar Gate in CI with Builder Studio runtime sessions for guided remediation and evidence export. ## What is shipped now - pull-request security and quality gates with Omar Gate - Builder Studio runtime with live action timeline and terminal stream - file context chips in chat and structured evidence artifacts - deterministic stop reasons for gated and autonomous audit loops ## Why this docs system exists - stable URLs for indexing and retrieval - direct answers for humans and agents - machine-readable discovery endpoints aligned to live product behavior ### Structured Q&A **Q: What is Omar Gate?** Omar Gate is Sentinelayer's GitHub Action that performs deterministic and AI-assisted PR review and can block merges by severity gate. **Q: Where should I start?** Start with Quickstart, then Configuration Overview, then Inputs Reference and Outputs/Artifacts. --- ## Can Sentinelayer support fundraising readiness? Yes, by producing structured technical evidence that supports diligence conversations. URL: https://sentinelayer.com/docs/knowledge-base/can-sentinelayer-support-fundraising-readiness Section: Knowledge Base Yes, by producing structured technical evidence that supports diligence conversations. ## Recommended Actions - track risk trend movement - show remediation performance over time ## Why this matters Investors need confidence in engineering maturity and execution discipline. ### Structured Q&A **Q: Can Sentinelayer support fundraising readiness?** Yes, by producing structured technical evidence that supports diligence conversations.
**Q: What is the first recommended action for Can Sentinelayer support fundraising readiness?** track risk trend movement --- ## Does Sentinelayer replace human reviewers? No. It augments reviewers with structured risk evidence and faster triage context. URL: https://sentinelayer.com/docs/knowledge-base/does-sentinelayer-replace-human-reviewers Section: Knowledge Base No. It augments reviewers with structured risk evidence and faster triage context. ## Recommended Actions - use findings for prioritization - keep human merge accountability ## Why this matters Human responsibility remains essential in critical decisions. ### Structured Q&A **Q: Does Sentinelayer replace human reviewers?** No. It augments reviewers with structured risk evidence and faster triage context. **Q: What is the first recommended action for Does Sentinelayer replace human reviewers?** use findings for prioritization --- ## Does Sentinelayer support mixed language stacks? Yes. It is designed for heterogeneous repositories and policy-driven review across services. URL: https://sentinelayer.com/docs/knowledge-base/does-sentinelayer-support-mixed-stacks Section: Knowledge Base Yes. It is designed for heterogeneous repositories and policy-driven review across services. ## Recommended Actions - validate signal quality per stack - tune policy where needed ## Why this matters Most scale-stage companies operate multi-language architectures. ### Structured Q&A **Q: Does Sentinelayer support mixed language stacks?** Yes. It is designed for heterogeneous repositories and policy-driven review across services. **Q: What is the first recommended action for Does Sentinelayer support mixed language stacks?** validate signal quality per stack --- ## How do file context chips work in Builder Studio? Attach selected files as chips so the runtime can confirm loaded context before analysis or edits. 
URL: https://sentinelayer.com/docs/knowledge-base/how-do-file-context-chips-work Section: Knowledge Base Attach selected files as chips so the runtime can confirm loaded context before analysis or edits. ## Recommended Actions - attach the exact files as chips - confirm each chip shows loaded status before running ## Why this matters Explicit file context prevents wrong-file assumptions and improves audit precision. ### Structured Q&A **Q: How do file context chips work in Builder Studio?** Attach selected files as chips so the runtime can confirm loaded context before analysis or edits. **Q: What is the first recommended action for How do file context chips work in Builder Studio?** attach the exact files as chips --- ## How do I avoid SEO and agent-indexing conflicts? Keep canonical consistency, sync sitemap and robots, and generate LLM indexes directly from docs source. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-avoid-seo-agent-indexing-conflicts Section: Knowledge Base Keep canonical consistency, sync sitemap and robots, and generate LLM indexes directly from docs source. ## Recommended Actions - automate sitemap and llms generation - avoid duplicate canonical paths ## Why this matters Consistency drives better discoverability across search and agent systems. ### Structured Q&A **Q: How do I avoid SEO and agent-indexing conflicts?** Keep canonical consistency, sync sitemap and robots, and generate LLM indexes directly from docs source. **Q: What is the first recommended action for How do I avoid SEO and agent-indexing conflicts?** automate sitemap and llms generation --- ## How do I balance security with delivery speed? Tune by evidence: preserve strict controls for high-risk findings while reducing low-signal friction. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-balance-security-with-delivery-speed Section: Knowledge Base Tune by evidence: preserve strict controls for high-risk findings while reducing low-signal friction. 
## Recommended Actions - review false-positive and fix-latency trends - iterate policy with engineering leadership ## Why this matters Balanced enforcement drives both safety and throughput. ### Structured Q&A **Q: How do I balance security with delivery speed?** Tune by evidence: preserve strict controls for high-risk findings while reducing low-signal friction. **Q: What is the first recommended action for How do I balance security with delivery speed?** review false-positive and fix-latency trends --- ## How do I control model spend? Use scan caps, cooldowns, and approval thresholds to prevent runaway cost. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-control-model-spend Section: Knowledge Base Use scan caps, cooldowns, and approval thresholds to prevent runaway cost. ## Recommended Actions - set max_daily_scans - configure require_cost_confirmation ## Why this matters Cost predictability is essential for scale. ### Structured Q&A **Q: How do I control model spend?** Use scan caps, cooldowns, and approval thresholds to prevent runaway cost. **Q: What is the first recommended action for How do I control model spend?** set max_daily_scans --- ## How do I document security exceptions? Track exceptions with run_id, owner, expiry date, and remediation plan. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-document-exceptions Section: Knowledge Base Track exceptions with run_id, owner, expiry date, and remediation plan. ## Recommended Actions - adopt exception template - review and expire exceptions regularly ## Why this matters Structured exceptions prevent silent risk accumulation. ### Structured Q&A **Q: How do I document security exceptions?** Track exceptions with run_id, owner, expiry date, and remediation plan. **Q: What is the first recommended action for How do I document security exceptions?** adopt exception template --- ## How do I explain the multi-agent system publicly? 
Describe capability outcomes and governance without exposing proprietary orchestration internals. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-explain-13-agent-system-publicly Section: Knowledge Base Describe capability outcomes and governance without exposing proprietary orchestration internals. ## Recommended Actions - publish capability-level architecture docs - avoid internal ranking/prompt specifics ## Why this matters Clear public messaging builds trust while protecting IP. ### Structured Q&A **Q: How do I explain the multi-agent system publicly?** Describe capability outcomes and governance without exposing proprietary orchestration internals. **Q: What is the first recommended action for How do I explain the multi-agent system publicly?** publish capability-level architecture docs --- ## How do I handle a base spec plus add-feature spec? Treat the base spec as non-regression policy and the add-feature spec as delta acceptance criteria; validate both before merge. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-handle-base-spec-plus-add-feature-spec Section: Knowledge Base Treat the base spec as non-regression policy and the add-feature spec as delta acceptance criteria; validate both before merge. ## Recommended Actions - keep base spec hash pinned - fail closed when base and delta constraints conflict ## Why this matters Spec layering avoids regressions while still allowing fast feature iteration. ### Structured Q&A **Q: How do I handle a base spec plus add-feature spec?** Treat the base spec as non-regression policy and the add-feature spec as delta acceptance criteria; validate both before merge. **Q: What is the first recommended action for How do I handle a base spec plus add-feature spec?** keep base spec hash pinned --- ## How do I handle fork PRs? Treat fork PRs as untrusted by default and keep strict fork policy unless hardened exceptions exist. 
URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-handle-fork-prs Section: Knowledge Base Treat fork PRs as untrusted by default and keep strict fork policy unless hardened exceptions exist. ## Recommended Actions - keep fork policy blocked - review trigger security model ## Why this matters Fork CI context is a common source of secret exposure risk. ### Structured Q&A **Q: How do I handle fork PRs?** Treat fork PRs as untrusted by default and keep strict fork policy unless hardened exceptions exist. **Q: What is the first recommended action for How do I handle fork PRs?** keep fork policy blocked --- ## How do I handle high-volume findings? Prioritize by severity and recurrence, then execute focused cleanup sprints. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-handle-high-volume-findings Section: Knowledge Base Prioritize by severity and recurrence, then execute focused cleanup sprints. ## Recommended Actions - triage recurrent critical categories - set backlog burn-down milestones ## Why this matters Structured backlog strategy prevents analysis paralysis. ### Structured Q&A **Q: How do I handle high-volume findings?** Prioritize by severity and recurrence, then execute focused cleanup sprints. **Q: What is the first recommended action for How do I handle high-volume findings?** triage recurrent critical categories --- ## How do I handle vibe-coded changes? Treat generated code like any production code and enforce identical gate policy. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-handle-vibe-coded-changes Section: Knowledge Base Treat generated code like any production code and enforce identical gate policy. ## Recommended Actions - scan generated output with same thresholds - track recurring generator-risk patterns ## Why this matters Generated code can introduce hidden regressions if under-reviewed. 
### Structured Q&A **Q: How do I handle vibe-coded changes?** Treat generated code like any production code and enforce identical gate policy. **Q: What is the first recommended action for How do I handle vibe-coded changes?** scan generated output with same thresholds --- ## How do I harden GitHub permissions? Use explicit least-privilege permissions and avoid unnecessary secret scope. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-harden-github-permissions Section: Knowledge Base Use explicit least-privilege permissions and avoid unnecessary secret scope. ## Recommended Actions - set minimal permissions block - prefer OIDC where possible ## Why this matters Over-privileged workflows increase supply-chain risk. ### Structured Q&A **Q: How do I harden GitHub permissions?** Use explicit least-privilege permissions and avoid unnecessary secret scope. **Q: What is the first recommended action for How do I harden GitHub permissions?** set minimal permissions block --- ## How do I install Omar Gate? Add the workflow, configure token plus one model path, and require the Omar Gate check in branch protection. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-install-omar-gate Section: Knowledge Base Add the workflow, configure token plus one model path, and require the Omar Gate check in branch protection. ## Recommended Actions - commit workflow file - validate first PR run artifacts ## Why this matters A strict install checklist prevents partial deployments. ### Structured Q&A **Q: How do I install Omar Gate?** Add the workflow, configure token plus one model path, and require the Omar Gate check in branch protection. **Q: What is the first recommended action for How do I install Omar Gate?** commit workflow file --- ## How do I integrate Jira? Map findings by severity and component, and include run_id plus location context. 
URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-integrate-jira Section: Knowledge Base Map findings by severity and component, and include run_id plus location context. ## Recommended Actions - define severity-to-priority map - group related findings per issue ## Why this matters Ticket hygiene keeps remediation queues actionable. ### Structured Q&A **Q: How do I integrate Jira?** Map findings by severity and component, and include run_id plus location context. **Q: What is the first recommended action for How do I integrate Jira?** define severity-to-priority map --- ## How do I integrate Slack alerts? Route high-severity blocked runs to incident channels and use digests for lower-priority updates. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-integrate-slack Section: Knowledge Base Route high-severity blocked runs to incident channels and use digests for lower-priority updates. ## Recommended Actions - set severity-based routing - thread updates by run_id ## Why this matters Alert quality affects response speed and trust. ### Structured Q&A **Q: How do I integrate Slack alerts?** Route high-severity blocked runs to incident channels and use digests for lower-priority updates. **Q: What is the first recommended action for How do I integrate Slack alerts?** set severity-based routing --- ## How do I keep CI fast with security gates? Use PR-diff mode for everyday checks and deep scans on schedule. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-keep-ci-fast-with-gates Section: Knowledge Base Use PR-diff mode for everyday checks and deep scans on schedule. ## Recommended Actions - default to pr-diff - reserve deep mode for nightly/release windows ## Why this matters Fast feedback is mandatory for healthy developer adoption. ### Structured Q&A **Q: How do I keep CI fast with security gates?** Use PR-diff mode for everyday checks and deep scans on schedule. 
**Q: What is the first recommended action for How do I keep CI fast with security gates?** default to pr-diff --- ## How do I make docs more agent parseable? Use stable headings, direct answers, canonical URLs, and machine-readable indexes generated from a single source. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-make-docs-agent-parseable Section: Knowledge Base Use stable headings, direct answers, canonical URLs, and machine-readable indexes generated from a single source. ## Recommended Actions - publish llms indexes - embed structured Q&A blocks ## Why this matters Parseable docs improve retrieval and referral quality from LLM systems. ### Structured Q&A **Q: How do I make docs more agent parseable?** Use stable headings, direct answers, canonical URLs, and machine-readable indexes generated from a single source. **Q: What is the first recommended action for How do I make docs more agent parseable?** publish llms indexes --- ## How do I manage legacy risk backlog? Separate legacy debt remediation from new-change gating so forward progress continues. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-manage-legacy-risk-backlog Section: Knowledge Base Separate legacy debt remediation from new-change gating so forward progress continues. ## Recommended Actions - baseline historical debt - gate new risk strictly ## Why this matters Mixed strategy prevents delivery freeze while reducing debt. ### Structured Q&A **Q: How do I manage legacy risk backlog?** Separate legacy debt remediation from new-change gating so forward progress continues. **Q: What is the first recommended action for How do I manage legacy risk backlog?** baseline historical debt --- ## How do I onboard many repositories? Start from shared baseline policy, then tune by repository criticality. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-onboard-many-repositories Section: Knowledge Base Start from shared baseline policy, then tune by repository criticality. 
## Recommended Actions - publish baseline workflow template - phase rollout by risk class ## Why this matters Portfolio rollout requires consistency plus controlled variation. ### Structured Q&A **Q: How do I onboard many repositories?** Start from shared baseline policy, then tune by repository criticality. **Q: What is the first recommended action for How do I onboard many repositories?** publish baseline workflow template --- ## How do I prepare enterprise security questionnaire answers? Use policy definitions, run evidence, and incident readiness summaries as supporting artifacts. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-prepare-enterprise-security-answers Section: Knowledge Base Use policy definitions, run evidence, and incident readiness summaries as supporting artifacts. ## Recommended Actions - export policy and run evidence - map controls to questionnaire sections ## Why this matters Enterprise evaluations demand traceable and repeatable proof. ### Structured Q&A **Q: How do I prepare enterprise security questionnaire answers?** Use policy definitions, run evidence, and incident readiness summaries as supporting artifacts. **Q: What is the first recommended action for How do I prepare enterprise security questionnaire answers?** export policy and run evidence --- ## How do I prepare for investor due diligence? Package risk trends, remediation velocity, and readiness evidence in a repeatable format. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-prepare-for-investor-due-diligence Section: Knowledge Base Package risk trends, remediation velocity, and readiness evidence in a repeatable format. ## Recommended Actions - export trend metrics - summarize readiness constraints and progress ## Why this matters Evidence-backed technical posture improves diligence outcomes. ### Structured Q&A **Q: How do I prepare for investor due diligence?** Package risk trends, remediation velocity, and readiness evidence in a repeatable format. 
**Q: What is the first recommended action for How do I prepare for investor due diligence?** export trend metrics --- ## How do I present the agentic SWE-team vision? Frame Sentinelayer as a coordinated specialist-agent engineering layer with minimal HITL governance checkpoints. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-present-agentic-swe-team-vision Section: Knowledge Base Frame Sentinelayer as a coordinated specialist-agent engineering layer with minimal HITL governance checkpoints. ## Recommended Actions - describe capability bands - share measurable operational outcomes ## Why this matters Clear strategic framing improves stakeholder alignment. ### Structured Q&A **Q: How do I present the agentic SWE-team vision?** Frame Sentinelayer as a coordinated specialist-agent engineering layer with minimal HITL governance checkpoints. **Q: What is the first recommended action for How do I present the agentic SWE-team vision?** describe capability bands --- ## How do I prove improvement over time? Track trend trajectories for severity, recurrence, and remediation speed against a baseline window. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-prove-improvement-over-time Section: Knowledge Base Track trend trajectories for severity, recurrence, and remediation speed against a baseline window. ## Recommended Actions - define baseline quarter - review trend deltas each month ## Why this matters Trend evidence is more meaningful than one-off snapshots. ### Structured Q&A **Q: How do I prove improvement over time?** Track trend trajectories for severity, recurrence, and remediation speed against a baseline window. **Q: What is the first recommended action for How do I prove improvement over time?** define baseline quarter --- ## How do I reduce false positives? Tune scope and thresholds by evidence instead of globally disabling enforcement. 
URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-reduce-false-positives Section: Knowledge Base Tune scope and thresholds by evidence instead of globally disabling enforcement. ## Recommended Actions - analyze recurring false-positive categories - adjust scoped rules and guardrails ## Why this matters Signal quality determines long-term adoption. ### Structured Q&A **Q: How do I reduce false positives?** Tune scope and thresholds by evidence instead of globally disabling enforcement. **Q: What is the first recommended action for How do I reduce false positives?** analyze recurring false-positive categories --- ## How do I report posture to executives? Use trend-oriented summaries: high-risk reduction, remediation speed, and readiness indicators. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-report-posture-to-executives Section: Knowledge Base Use trend-oriented summaries: high-risk reduction, remediation speed, and readiness indicators. ## Recommended Actions - create monthly posture memo - map technical risk to business impact ## Why this matters Executive decisions need strategic evidence, not raw finding lists. ### Structured Q&A **Q: How do I report posture to executives?** Use trend-oriented summaries: high-risk reduction, remediation speed, and readiness indicators. **Q: What is the first recommended action for How do I report posture to executives?** create monthly posture memo --- ## How do I respond to diligence requests quickly? Maintain a rolling evidence bundle with current trend data, policy posture, and readiness summary. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-respond-to-diligence-fast Section: Knowledge Base Maintain a rolling evidence bundle with current trend data, policy posture, and readiness summary. ## Recommended Actions - standardize diligence evidence package - refresh it monthly ## Why this matters Prepared evidence shortens diligence cycles and improves confidence. 
### Structured Q&A **Q: How do I respond to diligence requests quickly?** Maintain a rolling evidence bundle with current trend data, policy posture, and readiness summary. **Q: What is the first recommended action for How do I respond to diligence requests quickly?** standardize diligence evidence package --- ## How do I run a funding-ready live demo? Run a gated session end-to-end, show approvals and loop stop reasons, then export evidence with KPI deltas. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-run-a-funding-ready-live-demo Section: Knowledge Base Run a gated session end-to-end, show approvals and loop stop reasons, then export evidence with KPI deltas. ## Recommended Actions - demo timeline plus approvals first - close with evidence export and compare-vs-baseline KPIs ## Why this matters Diligence audiences need deterministic proof, not narrative-only demos. ### Structured Q&A **Q: How do I run a funding-ready live demo?** Run a gated session end-to-end, show approvals and loop stop reasons, then export evidence with KPI deltas. **Q: What is the first recommended action for How do I run a funding-ready live demo?** demo timeline plus approvals first --- ## How do I run nightly audits without PR noise? Use separate scheduled workflows and route results to digests or dashboards instead of PR threads. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-run-nightly-without-noise Section: Knowledge Base Use separate scheduled workflows and route results to digests or dashboards instead of PR threads. ## Recommended Actions - split nightly workflow from PR workflow - alert only on high-impact deltas ## Why this matters Nightly coverage should improve posture without flooding delivery channels. ### Structured Q&A **Q: How do I run nightly audits without PR noise?** Use separate scheduled workflows and route results to digests or dashboards instead of PR threads. 
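The workflow split described above can be sketched as a standalone scheduled job. This is a hedged sketch: the cron window is illustrative, and the input that selects deep-scan mode is deliberately omitted — take the exact input name from the Inputs Reference.

~~~yaml
# Hypothetical nightly audit workflow, kept separate from the PR-triggered gate.
name: nightly-audit
on:
  schedule:
    - cron: "0 3 * * *"   # illustrative quiet-hours window; adjust as needed
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Omar Gate (scheduled)
        uses: mrrCarter/sentinelayer-v1-action@v1
        continue-on-error: true   # report results; never break the scheduled run
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          openai_api_key: ${{ secrets.OPENAI_API_KEY }}
~~~

Because the job never runs in a PR context, results flow to digests or dashboards instead of PR threads.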
**Q: What is the first recommended action for How do I run nightly audits without PR noise?** split nightly workflow from PR workflow --- ## How do I run Omar loop from Codex CLI or Claude Code? Use the same repo runbook: attach spec artifacts, enable command+git permissions, run gated loop, and stop only on clean P0-P2 plus green gates. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-run-omar-loop-from-codex-or-claude-code Section: Knowledge Base Use the same repo runbook: attach spec artifacts, enable command+git permissions, run gated loop, and stop only on clean P0-P2 plus green gates. ## Recommended Actions - load spec package at repo root - run scan->patch->test->rescan with approval checkpoints ## Why this matters Cross-platform consistency is required for reliable autonomous delivery. ### Structured Q&A **Q: How do I run Omar loop from Codex CLI or Claude Code?** Use the same repo runbook: attach spec artifacts, enable command+git permissions, run gated loop, and stop only on clean P0-P2 plus green gates. **Q: What is the first recommended action for How do I run Omar loop from Codex CLI or Claude Code?** load spec package at repo root --- ## How do I run without a provider key? Run deterministic-only mode to keep baseline scanning active without LLM usage. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-run-without-provider-key Section: Knowledge Base Run deterministic-only mode to keep baseline scanning active without LLM usage. ## Recommended Actions - set deterministic fallback policy - confirm deterministic gate behavior ## Why this matters Some environments require model-free security scanning. ### Structured Q&A **Q: How do I run without a provider key?** Run deterministic-only mode to keep baseline scanning active without LLM usage. **Q: What is the first recommended action for How do I run without a provider key?** set deterministic fallback policy --- ## How do I set a security gate SLA? 
Define response windows by severity and track adherence in engineering ops cadence. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-set-security-sla Section: Knowledge Base Define response windows by severity and track adherence in engineering ops cadence. ## Recommended Actions - set P0/P1 response targets - monitor SLA compliance trend ## Why this matters SLA discipline keeps policy enforcement operationally credible. ### Structured Q&A **Q: How do I set a security gate SLA?** Define response windows by severity and track adherence in engineering ops cadence. **Q: What is the first recommended action for How do I set a security gate SLA?** set P0/P1 response targets --- ## How do I track remediation velocity? Measure time-to-close by severity and category, then review trend deltas by team. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-track-remediation-velocity Section: Knowledge Base Measure time-to-close by severity and category, then review trend deltas by team. ## Recommended Actions - correlate run_id with issue lifecycle - publish monthly remediation metrics ## Why this matters Velocity trends are key readiness indicators for scale and diligence. ### Structured Q&A **Q: How do I track remediation velocity?** Measure time-to-close by severity and category, then review trend deltas by team. **Q: What is the first recommended action for How do I track remediation velocity?** correlate run_id with issue lifecycle --- ## How do I use Sentinelayer in monorepos? Use path-aware workflows, scoped policy profiles, and ownership routing. URL: https://sentinelayer.com/docs/knowledge-base/how-do-i-use-sentinelayer-in-monorepos Section: Knowledge Base Use path-aware workflows, scoped policy profiles, and ownership routing. ## Recommended Actions - split jobs by service domains - route findings to owning teams ## Why this matters Monorepo scale needs scoped controls and clear ownership. 
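The path-aware split described above can be sketched with standard GitHub Actions path filters. The service path and workflow name are illustrative assumptions; the action inputs mirror the exit-codes workflow example.

~~~yaml
# Hypothetical per-domain gate: one workflow per service in the monorepo.
name: omar-gate-payments
on:
  pull_request:
    paths:
      - "services/payments/**"   # scope this gate to one owning team's code
jobs:
  gate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: mrrCarter/sentinelayer-v1-action@v1
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          openai_api_key: ${{ secrets.OPENAI_API_KEY }}
          severity_gate: P1
~~~

One workflow per service domain keeps findings routable to the owning team and keeps unrelated PRs out of each gate.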
### Structured Q&A **Q: How do I use Sentinelayer in monorepos?** Use path-aware workflows, scoped policy profiles, and ownership routing. **Q: What is the first recommended action for How do I use Sentinelayer in monorepos?** split jobs by service domains --- ## How does scale-readiness audit work? It assesses architecture health, dependency posture, reliability discipline, and remediation cadence. URL: https://sentinelayer.com/docs/knowledge-base/how-does-scale-readiness-audit-work Section: Knowledge Base It assesses architecture health, dependency posture, reliability discipline, and remediation cadence. ## Recommended Actions - review architecture hotspots - track recurring high-risk categories ## Why this matters Scale constraints are easier to fix before growth accelerates. ### Structured Q&A **Q: How does scale-readiness audit work?** It assesses architecture health, dependency posture, reliability discipline, and remediation cadence. **Q: What is the first recommended action for How does scale-readiness audit work?** review architecture hotspots --- ## How does the Omar remediation loop stop? It stops on clean P0-P2 plus green gates, or on explicit budget, policy, or approval checkpoint exit reasons. URL: https://sentinelayer.com/docs/knowledge-base/how-does-the-omar-loop-stop Section: Knowledge Base It stops on clean P0-P2 plus green gates, or on explicit budget, policy, or approval checkpoint exit reasons. ## Recommended Actions - set max iterations and budget upfront - review loop exit_reason and gate_status in evidence output ## Why this matters Deterministic stop semantics are required for safe autonomous runs. ### Structured Q&A **Q: How does the Omar remediation loop stop?** It stops on clean P0-P2 plus green gates, or on explicit budget, policy, or approval checkpoint exit reasons. **Q: What is the first recommended action for How does the Omar remediation loop stop?** set max iterations and budget upfront --- ## What artifacts should be retained? 
Retain findings, summary metadata, and review briefs for auditability and trend analysis. URL: https://sentinelayer.com/docs/knowledge-base/what-artifacts-should-be-retained Section: Knowledge Base Retain findings, summary metadata, and review briefs for auditability and trend analysis. ## Recommended Actions - store FINDINGS.jsonl and PACK_SUMMARY.json - apply retention policy ## Why this matters Longitudinal evidence supports compliance and diligence readiness. ### Structured Q&A **Q: What artifacts should be retained?** Retain findings, summary metadata, and review briefs for auditability and trend analysis. **Q: What is the first recommended action for What artifacts should be retained?** store FINDINGS.jsonl and PACK_SUMMARY.json --- ## What does exit code 2 mean? Exit code 2 indicates a configuration/runtime context error rather than a policy finding block. URL: https://sentinelayer.com/docs/knowledge-base/what-does-exit-code-2-mean Section: Knowledge Base Exit code 2 indicates a configuration/runtime context error rather than a policy finding block. ## Recommended Actions - validate workflow inputs - inspect permissions and environment ## Why this matters Policy failures and platform failures need different response paths. ### Structured Q&A **Q: What does exit code 2 mean?** Exit code 2 indicates a configuration/runtime context error rather than a policy finding block. **Q: What is the first recommended action for What does exit code 2 mean?** validate workflow inputs --- ## What is minimal HITL? Minimal HITL means automation-first execution with human governance at high-risk boundaries. URL: https://sentinelayer.com/docs/knowledge-base/what-is-minimal-hitl Section: Knowledge Base Minimal HITL means automation-first execution with human governance at high-risk boundaries. ## Recommended Actions - automate low-risk flow - require explicit approval for exceptions ## Why this matters This reduces toil without removing accountability.
### Structured Q&A

**Q: What is minimal HITL?** Minimal HITL means automation-first execution with human governance at high-risk boundaries.

**Q: What is the first recommended action for "What is minimal HITL?"** Automate low-risk flow.

---

## What is the operator runbook for autonomous mode?

Use preflight checks, enforce approval boundaries, monitor loop telemetry, and publish evidence on completion.

URL: https://sentinelayer.com/docs/knowledge-base/what-is-the-operator-runbook-for-autonomous-mode
Section: Knowledge Base

Use preflight checks, enforce approval boundaries, monitor loop telemetry, and publish evidence on completion.

## Recommended Actions

- run runtime preflight before launch
- require checkpoint approval before push/PR actions

## Why this matters

Operational consistency is critical for safe autonomous execution.

### Structured Q&A

**Q: What is the operator runbook for autonomous mode?** Use preflight checks, enforce approval boundaries, monitor loop telemetry, and publish evidence on completion.

**Q: What is the first recommended action for "What is the operator runbook for autonomous mode?"** Run runtime preflight before launch.

---

## What should be public versus private in docs?

Publish behavior and contracts; keep proprietary implementation internals private.

URL: https://sentinelayer.com/docs/knowledge-base/what-should-be-public-vs-private-docs
Section: Knowledge Base

Publish behavior and contracts; keep proprietary implementation internals private.

## Recommended Actions

- maintain public-safe documentation policy
- separate internal implementation docs

## Why this matters

Balanced disclosure supports adoption and defensibility.

### Structured Q&A

**Q: What should be public versus private in docs?** Publish behavior and contracts; keep proprietary implementation internals private.
**Q: What is the first recommended action for "What should be public versus private in docs?"** Maintain a public-safe documentation policy.

---

## Which runtime mode should I use for audit work?

Use audit_readonly for analysis, edit_gated for local patching, and autonomous_gated only when you want looped remediation with approvals.

URL: https://sentinelayer.com/docs/knowledge-base/which-runtime-mode-should-i-use
Section: Knowledge Base

Use audit_readonly for analysis, edit_gated for local patching, and autonomous_gated only when you want looped remediation with approvals.

## Recommended Actions

- start in audit_readonly for baseline findings
- switch to edit_gated/autonomous_gated only when approved

## Why this matters

Mode discipline keeps risk proportional to the task.

### Structured Q&A

**Q: Which runtime mode should I use for audit work?** Use audit_readonly for analysis, edit_gated for local patching, and autonomous_gated only when you want looped remediation with approvals.

**Q: What is the first recommended action for "Which runtime mode should I use for audit work?"** Start in audit_readonly for baseline findings.

---

## Which severity gate should I use?

Most teams should start with P1 and tighten only after proving remediation throughput.

URL: https://sentinelayer.com/docs/knowledge-base/which-severity-gate-should-i-use
Section: Knowledge Base

Most teams should start with P1 and tighten only after proving remediation throughput.

## Recommended Actions

- start with P1
- review trend metrics monthly

## Why this matters

Policy strictness should follow operational maturity.

### Structured Q&A

**Q: Which severity gate should I use?** Most teams should start with P1 and tighten only after proving remediation throughput.

**Q: What is the first recommended action for "Which severity gate should I use?"** Start with P1.

---

## Why did my PR get blocked?

A finding at or above the configured severity gate was confirmed and triggered merge blocking.
URL: https://sentinelayer.com/docs/knowledge-base/why-did-my-pr-get-blocked
Section: Knowledge Base

A finding at or above the configured severity gate was confirmed and triggered merge blocking.

## Recommended Actions

- open REVIEW_BRIEF.md
- verify finding location and fix

## Why this matters

Fast diagnosis reduces rerun cycles and developer friction.

### Structured Q&A

**Q: Why did my PR get blocked?** A finding at or above the configured severity gate was confirmed and triggered merge blocking.

**Q: What is the first recommended action for "Why did my PR get blocked?"** Open REVIEW_BRIEF.md.

---

## Agent Architecture (Public Overview)

Public-safe narrative of Sentinelayer's multi-agent direction and operational intent.

URL: https://sentinelayer.com/docs/platform/13-agent-architecture
Section: Platform Vision

Sentinelayer is evolving toward a coordinated multi-agent system covering spec intelligence, build audit, security, scale-readiness, and diligence packaging. Public docs explain capability outcomes and governance behavior without exposing proprietary internal orchestration details.

### Structured Q&A

**Q: Are proprietary internals disclosed publicly?** No. Public docs stay at the capability and governance level, not internal implementation detail.

---

## Agent Platform Runbook

Operator runbook for Codex CLI, Claude Code, Cursor, and IDE agents with Omar loop enforcement.

URL: https://sentinelayer.com/docs/platform/agent-platform-runbook
Section: Platform Vision

Use this runbook when you want platform-consistent execution across external coding agents.

## Goal

Run one deterministic flow regardless of agent frontend:

- generate a spec package
- execute in a local or hosted coding agent
- enforce the Omar loop until `P0-P2` are clear and gates are green

## Operator checklist

1. create a GitHub repository and default branch protection
2. clone locally and add spec artifacts at repo root (`spec_sheet.md`, `builder_prompt.md`, optional `playbook.md`)
3. configure your agent runtime permissions:
   - Codex CLI: enable command execution and git operations for the workspace
   - Claude Code / Cursor-style agents: set the permission profile to allow local commands and branch commits only
4. start in read-only audit mode for baseline findings
5. switch to gated edit/autonomous mode only after the approval policy is active
6. execute the Omar loop (`scan -> patch -> test -> rescan`) and stop only on clean gates or explicit budget/policy exits
7. checkpoint via branch + PR, never direct push to a protected branch

## Platform setup shortcuts

- Codex CLI:
  - authenticate once with your provider key
  - allow workspace command execution
  - allow branch commits; keep protected branches write-blocked
- Claude Code:
  - enable command execution and git in local workspace settings
  - keep push/PR as approval-gated operations
- Cursor / Copilot / Augment / Replit / Lovable:
  - allow local terminal and branch commits
  - disable direct push to protected branch
  - keep the Omar gate as a required status check on PRs

## Agent settings snippets (optional)

Use these as starting points inside your local repo settings files.

```json
// .claude/settings.json
{
  "permissions": {
    "allow": ["Bash(*)", "Read(*)", "Write(*)"],
    "deny": ["Bash(git push main)", "Bash(git push master)"]
  }
}
```

```json
// .cursor/rules/runtime.json
{
  "policy": "branch_only",
  "required_checks": ["lint", "typecheck", "tests", "build", "Omar Gate"]
}
```

```md
- keep changes scoped to active phase
- never bypass Omar loop for P0-P2 findings
- no direct pushes to protected branches
```

## Spec package convention

Use these artifact names at repo root so every agent can follow the same contract:

- `spec_sheet.md`
- `builder_prompt.md`
- `omar_gate.yml`
- `playbook.md` (optional but recommended)

## Suggested command sequence

1. `git checkout -b feat/`
2. implement phase scope only
3. run local quality gates (`lint`, `typecheck`, `tests`, `build`)
4. open PR and wait for Omar scan
5. fix every `P0-P2` finding
6. rerun checks and merge only on clean gate

## Omar loop checklist (operator view)

1. open PR and stream required checks
2. wait for Omar findings
3. remediate all P0-P2 findings
4. rerun checks from clean head
5. merge only when all required checks are green
6. capture run IDs and evidence links for audit trail

## Spec continuity for add-feature work

- keep a stable base spec hash as the invariant policy contract
- add feature-delta specs for each scoped enhancement
- evaluate both at review time:
  - base spec enforces non-regression constraints
  - delta spec enforces feature acceptance criteria
- if constraints conflict, fail closed and require a reconciled spec update

## Demo-safe evidence pack

- run timeline with approvals
- deterministic stop reason
- evidence export with integrity chain head
- compare-window KPI snapshot (TTC, closure rate, reproducibility)

This runbook is product-safe for customer operations and technical diligence without exposing private orchestration internals.

### Structured Q&A

**Q: Can I use this runbook with different coding agents?** Yes. The workflow is platform-agnostic as long as the agent can run commands, produce commits on a branch, and follow Omar loop gates.

**Q: How should I handle add-feature specs on existing projects?** Keep the original base spec as an invariant and layer a delta spec for new feature requirements. Gate reviews against both to prevent regressions.

---

## Autonomous Runtime Runbook

Operator runbook for gated autonomous execution, approvals, and deterministic rollback handling.

URL: https://sentinelayer.com/docs/platform/autonomous-runtime-runbook
Section: Platform Vision

Use this runbook to execute autonomous runtime flows safely.
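Both runbooks depend on the loop stopping for an explicit, deterministic reason rather than trailing off. A minimal shell sketch of that control flow, with the real `scan -> patch -> test -> rescan` work replaced by a stand-in counter and stop-reason strings that are assumptions, not the product's exact values:

```shell
# Deterministic-stop skeleton for a remediation loop (illustrative only).
max_iterations=3
findings=2            # stand-in: pretend the first scan found two P0-P2 issues
for i in $(seq 1 "$max_iterations"); do
  [ "$findings" -eq 0 ] && break      # nothing left: gates can go green
  findings=$((findings - 1))          # stand-in for scan -> patch -> test -> rescan
done
if [ "$findings" -eq 0 ]; then
  exit_reason="clean_gates"           # clean P0-P2 plus green gates
else
  exit_reason="budget_exhausted"      # explicit budget exit, never a silent stop
fi
echo "exit_reason=$exit_reason findings=$findings"
```

In the real loop the rescan results, gate status, and budget/policy/approval checks come from the product; the point is that every exit path assigns an explicit `exit_reason` that the evidence output then records.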
## Preflight

- confirm protected branch policy is enabled
- confirm required checks include Omar Gate
- confirm the runtime profile is healthy and required execution dependencies are available
- confirm the approval path is configured for push/PR checkpoints

## Execute

1. create a run in `edit_gated` or `autonomous_gated`
2. inspect the live timeline for lifecycle changes, tool activity, and approval checkpoints
3. approve a checkpoint only after reviewing changes and test signals
4. execute single-run or batch git/PR checkpoint automation

## Stop and recover

- if policy denies, resolve approvals before rerun
- if the budget is exhausted, inspect the findings trend and increase scope deliberately
- if the runtime is unavailable, switch to read-only mode and restore the runtime lane before retry

## Evidence handoff

- export the run evidence bundle
- attach the KPI delta snapshot and reconcile summary
- include the explicit stop reason in release notes

### Structured Q&A

**Q: Can this runbook be used in semi-autonomous mode?** Yes. It is designed for gated autonomy where approvals remain mandatory at write boundaries.

**Q: What should be reviewed before approving push/PR?** Review timeline events, verifier outcomes, and evidence metadata before approving any write-capable checkpoint.

---

## Funding Readiness Package

Demo and diligence package for OpenAI, Google, MIT, and investor technical reviews.

URL: https://sentinelayer.com/docs/platform/funding-readiness-package
Section: Platform Vision

Sentinelayer provides a repeatable evidence package for technical diligence.
## Package components

- architecture memo focused on deterministic execution contracts
- security posture summary for runtime isolation and governance controls
- live demo script for run launch, approvals, Omar loop, and evidence export
- KPI snapshot with compare-window and baseline deltas
- customer-safe narrative focused on outcomes, not proprietary internals

## Core diligence metrics

- Time-to-Clean (TTC)
- P0-P2 closure rate
- deterministic reproducibility rate
- cost per clean run
- human override rate

## Recommended presentation sequence

1. launch a fresh run and stream the timeline live
2. show checkpoint approvals and policy boundaries
3. show the Omar loop stop reason and cleanup status
4. export evidence and show KPI deltas in Runtime Insights
5. close with the roadmap and current hardening status

## Funding packet checklist

- architecture memo (deterministic execution contracts)
- security posture memo (runtime isolation, redaction, audit trail integrity)
- KPI appendix (TTC, closure rate, reproducibility, cost per clean run)
- operator runbook (approval gates, rollback handling, escalation paths)
- demo script mapped to reproducible run IDs and exported evidence

## External program targets

- OpenAI startup/enterprise technical diligence
- Google ecosystem and applied AI partner diligence
- MIT or research accelerator technical screening

This package is suitable for technical funding conversations without exposing proprietary implementation details.

### Structured Q&A

**Q: Does this package expose proprietary orchestration details?** No. It presents deterministic behavior, evidence outputs, and governance controls without disclosing private internal implementation.

**Q: Which audiences is this package for?** It is built for technical diligence with investors, strategic partners, and platform reviewers such as OpenAI, Google, and research programs.
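As a concrete illustration of one diligence metric, the P0-P2 closure rate can be derived from the retained findings artifact. This is a sketch under assumptions: FINDINGS.jsonl is the documented artifact name, but the `severity` and `status` field names used here are hypothetical.

```shell
# Build a tiny sample findings file (the field names are assumptions).
cat > FINDINGS.jsonl <<'EOF'
{"id":"F1","severity":"P0","status":"resolved"}
{"id":"F2","severity":"P1","status":"resolved"}
{"id":"F3","severity":"P2","status":"open"}
{"id":"F4","severity":"P3","status":"open"}
EOF
# Count P0-P2 findings, then how many of those are resolved.
total=$(grep -c '"severity":"P[0-2]"' FINDINGS.jsonl)
closed=$(grep -c '"severity":"P[0-2]".*"status":"resolved"' FINDINGS.jsonl)
echo "P0-P2 closure rate: $closed/$total"   # prints: P0-P2 closure rate: 2/3
```

Computing the metric from retained artifacts rather than ad hoc notes is what makes the KPI appendix reproducible across compare windows.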
---

## Greenfield Demo Flow

Step-by-step tutorial for generating a spec package, building with an external agent, and closing the Omar loop to green.

URL: https://sentinelayer.com/docs/platform/greenfield-demo-flow
Section: Platform Vision

Use this tutorial to run a complete greenfield demonstration from idea to clean PR.

## Suggested demo app (fast but realistic)

Build a "Neon Orbit Arena" prototype:

- frontend: React + Three.js HUD + WebSocket event feed
- backend: Node.js + TypeScript API for match state
- scoring service: dedicated component exposed through a private service API
- risk profile: auth/session handling, unsafe input parsing, race-prone realtime updates

## Step 1: Generate the build package

1. open Prompt Builder
2. describe the project outcome
3. generate and export:
   - spec sheet
   - builder prompt
   - Omar gate workflow
   - build playbook

## Step 2: Prepare repository

1. create a GitHub repo
2. protect `main` with required Omar checks
3. add a provider key secret (`OPENAI_API_KEY`) or a Sentinelayer-managed token
4. commit the generated package files to the repo root
5. connect the repo in Builder Studio so file context chips can be attached from the file tree

## Step 3: Execute in coding agent

1. open the repo in Codex CLI / Claude Code / Cursor / Copilot / Augment / Replit
2. attach package artifacts as context
3. request phased implementation with Omar loop enforcement
4. keep writes on a feature branch only
5. use context chips for high-priority files before each prompt (auth, CI, runtime config)

### Prompt starter (copy/edit)

```text
Use spec_sheet.md and builder_prompt.md as hard constraints.
Implement phase by phase on a feature branch.
After each phase: run lint, typecheck, tests, build; open/update PR;
wait for Omar; fix all P0-P2; repeat until clean.
Log decisions and evidence in .sentinel/.
```

## Step 4: Run deterministic Omar loop

1. scan
2. patch
3. test
4. rescan
5. stop only when `P0-P2` are clean and gates are green

## Step 5: Present evidence

- show the runtime timeline and terminal excerpts
- show the stop reason and gate summary
- export the evidence bundle
- show the Runtime Insights compare/baseline KPI snapshot
- show PR history proving the P0-P2 closure loop

This flow is suitable for customer demos, partner diligence, and internal launch reviews.

### Structured Q&A

**Q: Can I run this tutorial without exposing private architecture details?** Yes. The tutorial uses product-visible artifacts, gate outcomes, and KPI evidence only.

**Q: What is the pass condition for the demo?** A clean PR with the Omar gate green, no unresolved P0-P2 findings, and an exported evidence bundle.

---

## Investor Due Diligence Audit Mode

How Sentinelayer supports investor diligence with evidence-backed engineering posture.

URL: https://sentinelayer.com/docs/platform/investor-due-diligence-audit
Section: Platform Vision

Diligence mode packages technical evidence such as risk trend movement, remediation velocity, and scale-readiness indicators for funding conversations.

### Structured Q&A

**Q: Does this replace diligence interviews?** No. It strengthens interviews with structured technical evidence.

---

## Minimal HITL Operations

How Sentinelayer minimizes manual toil while preserving high-risk human governance.

URL: https://sentinelayer.com/docs/platform/minimal-hitl-operations
Section: Platform Vision

Minimal HITL means automation-first execution with explicit human controls at exception and high-risk decision boundaries.

### Structured Q&A

**Q: Does minimal HITL mean no human control?** No. It means less manual toil with preserved accountability for critical decisions.

---

## Scale Readiness Audit

Evidence model for assessing engineering maturity before aggressive growth phases.
URL: https://sentinelayer.com/docs/platform/scale-readiness-audit
Section: Platform Vision

Scale-readiness considers architecture health, dependency posture, release confidence, incident response maturity, and remediation cadence.

### Structured Q&A

**Q: Can this help predict scaling constraints?** Yes. It highlights concentrated risks that usually become bottlenecks at scale.

---

## What's Coming Next

Near-term roadmap themes for audit-first autonomy, scale controls, and enterprise readiness.

URL: https://sentinelayer.com/docs/platform/whats-coming-next
Section: Platform Vision

Near-term roadmap focus:

- full runtime worker infra rollout hardening on AWS Fargate + EFS lanes
- expanded workload fairness and quota controls for multi-tenant concurrency
- deeper investor/demo evidence workflows and reproducibility views
- docs and LLM index refresh cadence tied to shipped contracts

Roadmap items are directional and can change with customer and security priorities.

### Structured Q&A

**Q: Are roadmap items guaranteed delivery dates?** No. Roadmap entries reflect planned direction and priorities, not guaranteed release dates.

---

## What's Shipped Now

Public-safe changelog of shipped audit runtime capabilities and operator workflows.
URL: https://sentinelayer.com/docs/platform/whats-shipped-now
Section: Platform Vision

Current shipped capabilities include:

- deterministic runtime lifecycle with typed event taxonomy
- SSE event streaming and terminal WebSocket streaming
- adaptive workload orchestration with reconcile safeguards
- managed execution lane for repository tooling and policy checks
- cross-instance read-miss retry behavior for run/event durability under load-balanced paths
- approval checkpoints for write-capable execution
- Omar remediation loop with deterministic stop reasons
- role-scoped memory retrieval for historical context injection
- runtime evidence bundles and KPI endpoints
- dedicated Runtime Insights dashboard for KPI/evidence reporting
- batch git/PR checkpoint automation endpoint for multi-run operations
- funding-readiness docs and autonomous runtime runbook for operator handoff

### Structured Q&A

**Q: Does the shipped changelog include private internals?** No. It documents user-visible behavior and contracts without exposing proprietary orchestration internals.

---

## Data Handling

Execution-boundary and telemetry policy overview.

URL: https://sentinelayer.com/docs/security/data-handling
Section: Security and Operations

Core model:

- code analysis runs in the CI runner
- model context is bounded
- telemetry is tiered and consent-based

### Structured Q&A

**Q: Can telemetry be disabled?** Yes. Use tier 0 or a telemetry-off policy.

---

## False Positive Defense

Layered controls that keep security findings actionable.

URL: https://sentinelayer.com/docs/security/false-positive-defense
Section: Security and Operations

Sentinelayer uses deterministic controls, diff-aware scope, and corroboration guardrails for higher-confidence blocking behavior.

### Structured Q&A

**Q: Can unsupported AI-only findings block merges?** High-severity blocking behavior is designed to require stronger corroboration than unsupported AI text alone.
---

## Incident Runbook

Operational triage sequence for blocked and failed runs.

URL: https://sentinelayer.com/docs/security/incident-runbook
Section: Security and Operations

Incident flow:

1. capture the run_id and exit code
2. classify policy vs platform failure
3. inspect artifacts
4. route to the owner

### Structured Q&A

**Q: What usually signals a platform issue?** Configuration/runtime context failures and repeated execution faults.

---

## Full index

https://sentinelayer.com/docs/llms.txt