Outputs and Artifacts

Stable outputs and artifact names for automation.

  • outputs
  • artifacts
  • jsonl

Workflow Outputs

These are available via `steps.<step-id>.outputs` in subsequent workflow steps.

| Output | Type | Description |
|--------|------|-------------|
| `gate_status` | string | Final gate result: `passed`, `blocked`, or `error`. |
| `run_id` | string | Unique identifier for this scan run. Use as correlation key. |
| `scan_mode` | string | Effective scan mode selected for this run. |
| `severity_gate` | string | Effective severity threshold used for merge blocking. |
| `p0_count` | integer | Number of P0 findings. |
| `p1_count` | integer | Number of P1 findings. |
| `p2_count` | integer | Number of P2 findings. |
| `p3_count` | integer | Number of P3 findings. |
| `playwright_status` | string | Browser gate status: `skipped`, `passed`, or `failed`. |
| `playwright_mode` | string | Browser gate mode: `off`, `baseline`, or `audit`. |
| `sbom_status` | string | SBOM gate status: `skipped`, `passed`, or `failed`. |
| `sbom_mode` | string | SBOM gate mode: `off`, `baseline`, or `audit`. |
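For example, a follow-up step can read these outputs to fail the job early when the gate blocks. This is a sketch: `scan` is a hypothetical step id for whichever step runs the gate.

```yaml
- name: Fail fast on blocked gate
  if: steps.scan.outputs.gate_status == 'blocked'
  run: |
    echo "Gate blocked: ${{ steps.scan.outputs.p0_count }} P0, ${{ steps.scan.outputs.p1_count }} P1 findings"
    exit 1
```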

Artifacts

Uploaded to the GitHub Actions artifact store. Download via the Actions UI or `actions/download-artifact`.

| Artifact | Format | Description |
|----------|--------|-------------|
| `FINDINGS.jsonl` | JSON Lines | One finding per line. Primary machine-readable output. |
| `PACK_SUMMARY.json` | JSON | Scan metadata, severity counts, and gate decision. |
| `AUDIT_REPORT.md` | Markdown | Human-readable narrative report for full-repo scans. |
| `REVIEW_BRIEF.md` | Markdown | Condensed review summary posted as PR comment. |
| `.sentinelayer/sbom/*.cdx.json` | CycloneDX JSON | Generated when `sbom_mode` is enabled in workflow configuration. |
| `.sentinelayer/sbom/*.cdx.xml` | CycloneDX XML | Optional expanded output from `sbom_mode: audit`. |

FINDINGS.jsonl Schema

Each line is a self-contained JSON object:


```json
{
  "finding_id": "f-abc123",
  "run_id": "run-20260223-001",
  "severity": "P1",
  "category": "security",
  "subcategory": "sql-injection",
  "title": "Unsanitized user input in SQL query",
  "description": "User-supplied value is interpolated directly into a SQL string without parameterization.",
  "file": "src/db/queries.py",
  "line_start": 42,
  "line_end": 42,
  "snippet": "cursor.execute(f\"SELECT * FROM users WHERE id = {user_id}\")",
  "remediation": "Use parameterized queries: cursor.execute(\"SELECT * FROM users WHERE id = %s\", (user_id,))",
  "confidence": 0.95,
  "agent": "security-scanner",
  "spec_reference": "security-checklist.input-validation"
}
```
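Because each line is independent JSON, the file can be loaded and tallied in a few lines. A minimal sketch in Python using only the standard library:

```python
import json
from collections import Counter


def load_findings(path):
    """Parse a FINDINGS.jsonl file: one self-contained JSON object per line."""
    with open(path, encoding="utf-8") as fh:
        return [json.loads(line) for line in fh if line.strip()]


def severity_counts(findings):
    """Tally findings by severity (P0..P3, info)."""
    return Counter(f["severity"] for f in findings)
```

Skipping blank lines makes the loader tolerant of a trailing newline at end of file.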

Finding Fields

| Field | Type | Description |
|-------|------|-------------|
| `finding_id` | string | Unique identifier for this finding. |
| `run_id` | string | Matches the workflow output `run_id`. |
| `severity` | string | `P0`, `P1`, `P2`, `P3`, or `info`. |
| `category` | string | Top-level domain: `security`, `quality`, `spec-compliance`, `supply-chain`. |
| `subcategory` | string | Specific issue type (e.g. `sql-injection`, `unused-import`). |
| `title` | string | One-line summary of the finding. |
| `description` | string | Detailed explanation. |
| `file` | string | Relative path to the affected file. |
| `line_start` | integer | Starting line number. |
| `line_end` | integer | Ending line number. |
| `snippet` | string | Relevant code excerpt. |
| `remediation` | string | Suggested fix or action. |
| `confidence` | number | 0.0 to 1.0. Model confidence for AI findings; always 1.0 for deterministic checks. |
| `agent` | string | Which review agent produced this finding. |
| `spec_reference` | string | Dot-path into the project spec this finding relates to. |

PACK_SUMMARY.json Schema


```json
{
  "run_id": "run-20260223-001",
  "repo": "acme/backend",
  "branch": "feature/auth",
  "commit_sha": "a1b2c3d",
  "gate_status": "blocked",
  "severity_counts": {
    "P0": 0,
    "P1": 2,
    "P2": 3,
    "P3": 5,
    "info": 1
  },
  "total_findings": 11,
  "scan_mode": "pr-diff",
  "estimated_cost_usd": "0.0042",
  "duration_ms": 8420,
  "agents_used": ["security-scanner", "spec-compliance", "quality-reviewer"],
  "timestamp": "2026-02-23T08:15:30Z"
}
```
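A downstream script can turn this summary into a block/allow decision. The sketch below assumes the severity ordering P0 (highest) through P3 (lowest) implied by the naming; `should_block` is a hypothetical helper name, not part of the tool.

```python
def should_block(summary, severity_gate="P1"):
    """Decide whether a run should block the merge, from PACK_SUMMARY.json data.

    Blocks when gate_status is 'blocked', or when any finding exists at or
    above the given severity threshold. Assumes P0 > P1 > P2 > P3 ordering.
    """
    if summary["gate_status"] == "blocked":
        return True
    order = ["P0", "P1", "P2", "P3"]
    counts = summary["severity_counts"]
    threshold = order.index(severity_gate)
    # Any non-zero count at or above the threshold blocks the merge.
    return any(counts.get(sev, 0) > 0 for sev in order[: threshold + 1])
```

Reading the file is then `should_block(json.load(open("PACK_SUMMARY.json")))`.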

Accessing Artifacts in CI


```yaml
- name: Download findings
  uses: actions/download-artifact@v4
  with:
    name: omar-gate-findings

- name: Count P1 findings
  run: |
    P1_COUNT=$(grep -c '"severity": "P1"' FINDINGS.jsonl || true)
    echo "P1 findings: $P1_COUNT"
```
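Pattern-matching JSON with `grep` depends on exact key order and spacing; parsing each line is more robust. A small sketch, assuming the P0-highest ordering (`count_at_or_above` is a hypothetical helper, not part of the tool):

```python
import json


def count_at_or_above(path, max_level=1):
    """Count findings with severity P0..P<max_level> by parsing each JSONL line."""
    count = 0
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            sev = json.loads(line).get("severity", "")
            # Severity labels look like "P0".."P3"; "info" never matches.
            if sev.startswith("P") and sev[1:].isdigit() and int(sev[1:]) <= max_level:
                count += 1
    return count
```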

Structured Answers

What should SIEM ingest first?

Ingest `FINDINGS.jsonl` and correlate by `run_id`. Each line is a self-contained finding object with severity, category, file, line, and remediation fields.

How do I parse FINDINGS.jsonl?

Each line is a valid JSON object. Read line by line and parse each as JSON. In Python: `[json.loads(line) for line in open('FINDINGS.jsonl')]`. In Node: `fs.readFileSync('FINDINGS.jsonl', 'utf8').split('\n').filter(Boolean).map(JSON.parse)`.

What is the difference between AUDIT_REPORT.md and REVIEW_BRIEF.md?

`AUDIT_REPORT.md` is a detailed narrative for full-repo scans. `REVIEW_BRIEF.md` is a condensed summary used as the PR comment body for pull request reviews.