# AI Verification
AI verification reads the surrounding code context plus the rule that fired and returns a structured verdict — true_positive, false_positive, or uncertain — with a confidence score, written rationale, and a suggested fix when appropriate. Run it on a single finding, in bulk, or automatically as new findings arrive.
## Running verification
**On a single finding.** Open the finding detail page and click **Verify with AI**. The job is queued; the verdict appears on the same page within seconds.
**In bulk.** From the Findings list, select multiple findings and choose **AI Verify Selected** from the bulk-actions menu. Each finding is queued as a separate job (no batching today; Anthropic Batch API support is planned).
**Automatically.** Enable **Auto-verify** in Settings → AI. Every new finding triggers verification automatically. Verdicts are advisory by default: they don't auto-change finding status unless you also enable **Auto-suppress AI false positives**.
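For scripting bulk verification, the queue-one-job-per-finding behavior can be sketched as below. The endpoint path, base URL, and token are hypothetical placeholders, not the documented Vygl API:

```python
import json
import urllib.request

# Hypothetical endpoint and token -- the real Vygl API paths may differ.
API_BASE = "https://vygl.example.com/api/v1"
TOKEN = "vygl_api_token"

def queue_verification(finding_ids):
    """Build one AI-verification request per finding (no batching today)."""
    jobs = []
    for fid in finding_ids:
        req = urllib.request.Request(
            f"{API_BASE}/findings/{fid}/ai-verify",
            data=json.dumps({}).encode(),
            headers={
                "Authorization": f"Bearer {TOKEN}",
                "Content-Type": "application/json",
            },
            method="POST",
        )
        jobs.append(req.full_url)  # send with urllib.request.urlopen(req)
    return jobs
```

Each request is independent, matching the current one-job-per-finding queueing model.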
## What the verdict contains
| Field | Description |
|---|---|
| `verdict` | `true_positive` / `false_positive` / `uncertain` |
| `confidence` | `low` / `medium` / `high` |
| `reasoning` | A short paragraph explaining the conclusion |
| `suggested_fix` | Concrete code change (true positives only) |
The verdict, reasoning, and suggested fix are visible on the finding detail page. They also appear inline in PR comments, Slack/Teams notifications, and CSV/SARIF exports.
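The verdict's shape and its invariants can be modeled as below. The class name and validation are illustrative, assuming the field names and allowed values from the table above rather than a documented wire format:

```python
from dataclasses import dataclass
from typing import Optional

VERDICTS = {"true_positive", "false_positive", "uncertain"}
CONFIDENCES = {"low", "medium", "high"}

@dataclass
class AIVerdict:
    verdict: str
    confidence: str
    reasoning: str
    suggested_fix: Optional[str] = None  # populated for true positives only

    def __post_init__(self):
        # Reject values outside the documented enums.
        if self.verdict not in VERDICTS:
            raise ValueError(f"unknown verdict: {self.verdict}")
        if self.confidence not in CONFIDENCES:
            raise ValueError(f"unknown confidence: {self.confidence}")
        # suggested_fix accompanies true positives only.
        if self.suggested_fix and self.verdict != "true_positive":
            raise ValueError("suggested_fix is only valid for true positives")
```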
## How verdicts are scoped
Verdicts are cached by fingerprint. When a verdict is computed for one finding, every other finding in the same project with the same fingerprint inherits the same verdict, with no redundant LLM calls. If you change the underlying code such that the fingerprint changes (different file, different line, different normalized snippet), the new finding gets a fresh verification.
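The caching behavior can be sketched as below. The actual fingerprint algorithm is not documented here; this assumes it hashes the file path, line, and whitespace-normalized snippet, as the paragraph above implies:

```python
import hashlib

def fingerprint(file_path: str, line: int, snippet: str) -> str:
    """Assumed fingerprint: path + line + normalized snippet, hashed."""
    normalized = " ".join(snippet.split())  # collapse whitespace
    raw = f"{file_path}:{line}:{normalized}"
    return hashlib.sha256(raw.encode()).hexdigest()

_verdict_cache = {}  # (project_id, fingerprint) -> verdict

def get_or_verify(project_id, fp, verify_fn):
    """Reuse a cached verdict for identical findings; call the LLM otherwise."""
    key = (project_id, fp)
    if key not in _verdict_cache:
        _verdict_cache[key] = verify_fn()  # the only LLM call for this fingerprint
    return _verdict_cache[key]
```

A changed file, line, or snippet yields a new fingerprint, so the cache misses and a fresh verification runs.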
## What context goes into the prompt
Each verification call sends Claude:
- The rule that fired (description, tags, severity).
- A snippet of code around the finding (trimmed to fit token budget).
- The finding metadata — file path, line number, package version (for SCA).
- Per-scan-type guidance — e.g. for SAST, “consider whether the sink receives attacker-controlled input”; for Secrets, “check if the secret is real or a documented placeholder”.
- Your organizational memory entries (see Organizational Memory).
- For SCA findings: EPSS (exploit probability) and CISA KEV (actively exploited) signals when known.
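Assembling that context can be sketched as below. The field names and the 4,000-character trim are illustrative assumptions, not the actual prompt schema:

```python
# Per-scan-type guidance strings, quoted from the list above.
SCAN_GUIDANCE = {
    "sast": "Consider whether the sink receives attacker-controlled input.",
    "secrets": "Check if the secret is real or a documented placeholder.",
}

def build_context(finding, rule, snippet, memory, epss=None, kev=None):
    """Collect the pieces listed above into one context object."""
    ctx = {
        "rule": {
            "description": rule["description"],
            "tags": rule["tags"],
            "severity": rule["severity"],
        },
        "snippet": snippet[:4000],  # trimmed to fit the token budget
        "metadata": {"file": finding["file"], "line": finding["line"]},
        "guidance": SCAN_GUIDANCE.get(finding["scan_type"], ""),
        "memory": memory,  # organizational memory entries
    }
    if finding["scan_type"] == "sca":
        ctx["metadata"]["package_version"] = finding.get("package_version")
        # Exploit-likelihood signals, included when known.
        ctx["signals"] = {"epss": epss, "cisa_kev": kev}
    return ctx
```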
## Human override feedback
When a human triages a finding in a way that contradicts the AI verdict (for example, AI said `false_positive` but a human marks it acknowledged and writes "this is real"), Vygl logs the contradiction as feedback for future model tuning. AI verdicts and human triage state coexist; the human action wins for downstream behavior (severity gates, suppression, etc.).
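The precedence rule can be sketched as below. The status names and contradiction pairs are illustrative assumptions beyond the one example given above:

```python
def resolve_triage(ai_verdict, human_status, feedback_log):
    """Log AI/human disagreements; the human state drives downstream behavior."""
    # Hypothetical contradiction pairs -- only the first is from the docs.
    contradicts = (
        (ai_verdict == "false_positive" and human_status == "acknowledged")
        or (ai_verdict == "true_positive" and human_status == "suppressed")
    )
    if contradicts:
        feedback_log.append({"ai": ai_verdict, "human": human_status})
    # Severity gates, suppression, etc. always follow the human decision.
    return human_status
```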