audit
Project-wide health audit pipeline that fans out to all analysis skills in parallel, evaluates findings, and produces a unified report at .turbo/audit.md. Use when the user asks to "audit the project", "run a full audit", "project health check", "audit my code", or "codebase audit".
# Audit
Project-wide health audit. Fans out to all analysis skills, evaluates findings, and writes .turbo/audit.md and .turbo/audit.html. Analysis-only — does not apply fixes.
## Task Tracking
At the start, use TaskCreate to create a task for each phase:
- Scope and partition
- Threat model
- Run analysis skills
- Run /evaluate-findings skill
- Generate markdown report
- Generate HTML report
## Step 1: Scope and Partition
If $ARGUMENTS specifies paths, use those directly (skip the question).
Otherwise, use AskUserQuestion to confirm scope:
- All source files — audit everything
- Specific paths — user provides directories or file patterns
- Critical paths — heuristically identify high-risk areas (entry points, auth, data handling, payment processing)
Once scope is determined:
- Glob for source files in the selected scope. Exclude generated and vendored directories (`node_modules/`, `dist/`, `build/`, `vendor/`, `__pycache__/`, `.build/`, `DerivedData/`, `target/`, `.tox/`, and others appropriate to the project).
- Partition files by top-level source directory. Cap at 10 partitions. If more than 10 top-level directories exist, group related directories or use AskUserQuestion to narrow scope. If a single directory contains 50+ files, sub-partition it by its immediate subdirectories.
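The partitioning rules above can be sketched as follows. This is a minimal illustration, not part of the skill itself: the exclusion set and thresholds come from the text, but the function name and grouping details are assumptions.

```python
from collections import defaultdict
from pathlib import PurePosixPath

# Directories named in the skill as generated/vendored and excluded from scope.
EXCLUDED = {"node_modules", "dist", "build", "vendor", "__pycache__",
            ".build", "DerivedData", "target", ".tox"}
SUBSPLIT_THRESHOLD = 50  # a directory this large is split by subdirectory


def partition(files):
    """Group source paths by top-level directory, skipping excluded trees
    and sub-partitioning any directory with 50+ files."""
    groups = defaultdict(list)
    for f in files:
        parts = PurePosixPath(f).parts
        if any(p in EXCLUDED for p in parts):
            continue
        groups[parts[0] if len(parts) > 1 else "."].append(f)

    partitions = {}
    for top, members in groups.items():
        if len(members) >= SUBSPLIT_THRESHOLD:
            # Split an oversized directory by its immediate subdirectories.
            subs = defaultdict(list)
            for f in members:
                parts = PurePosixPath(f).parts
                subs["/".join(parts[:2]) if len(parts) > 2 else top].append(f)
            partitions.update(subs)
        else:
            partitions[top] = members
    # Capping at 10 partitions (grouping or AskUserQuestion) is left to the caller.
    return partitions
```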
## Step 2: Threat Model
Check if .turbo/threat-model.md exists. If it does, continue to Step 3.
If missing, use AskUserQuestion to ask whether to create one before proceeding. The security review benefits from threat model context, but creating one adds time.
- Yes — launch an Agent tool call (`model: "opus"`, do not set `run_in_background`) whose prompt instructs it to invoke the `/create-threat-model` skill via the Skill tool. Wait for completion before continuing.
- No — continue without a threat model.
## Step 3: Launch All Analysis Skills
Run all analysis skills in parallel.
### Partitioned Skills
For each skill below, run one instance per partition with the partition's file list. Pass (skip peer review) annotations through to /review-code as an opt-out so it runs internal reviews only — /peer-review is scheduled as its own row to avoid duplicate codex runs.
| Skill | Scope |
|---|---|
| /review-code with correctness (skip peer review) | File list |
| /review-code with security (skip peer review) | File list |
| /review-code with api-usage (skip peer review) | File list |
| /review-code with consistency (skip peer review) | File list |
| /review-code with simplicity (skip peer review) | File list |
| /peer-review | File list |
### Project-Wide Skills
| Skill | Notes |
|---|---|
| /review-code with coverage (skip peer review) | Project-wide |
| /review-dependencies | Project-wide |
| /review-tooling | Project-wide |
| /review-agentic-setup | Project-wide |
| /find-dead-code | Has its own partitioning |
## Step 4: Run /evaluate-findings Skill
Aggregate all findings from all agents. Run the /evaluate-findings skill once on the combined set.
## Step 5: Generate Markdown Report
Write .turbo/audit.md using the template below. Populate the dashboard by counting findings per category and applying health thresholds. Output the dashboard as text before writing the file.
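The dashboard roll-up described above (counting findings per category and applying the health thresholds) can be sketched like this. A minimal sketch under the assumption that each finding carries a category and a severity from P0–P3; names are illustrative.

```python
from collections import Counter


def category_health(findings):
    """Roll (category, severity) pairs up into the dashboard's
    Pass/Warn/Fail health, total findings, and critical (P0) count."""
    counts = Counter(findings)
    dashboard = {}
    for cat in {c for c, _ in findings}:
        total = sum(n for (c, _), n in counts.items() if c == cat)
        p0 = counts[(cat, "P0")]
        p1 = counts[(cat, "P1")]
        # Thresholds from the report template: Fail on any P0,
        # Warn on P1 without P0, otherwise Pass.
        health = "Fail" if p0 else ("Warn" if p1 else "Pass")
        dashboard[cat] = {"health": health, "findings": total, "critical": p0}
    return dashboard
```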
### Report Template
# Audit Report
**Date:** <date>
**Scope:** <what was audited>
## Dashboard
| Category | Health | Findings | Critical |
|---|---|---|---|
| Correctness | <Pass/Warn/Fail> | <N> | <N> |
| Security | <Pass/Warn/Fail> | <N> | <N> |
| API Usage | <Pass/Warn/Fail> | <N> | <N> |
| Consistency | <Pass/Warn/Fail> | <N> | <N> |
| Simplicity | <Pass/Warn/Fail> | <N> | <N> |
| Test Coverage | <Pass/Warn/Fail> | <N> | <N> |
| Dependencies | <Pass/Warn/Fail> | <N> | <N> |
| Tooling | <Pass/Warn/Fail> | <N> | <N> |
| Dead Code | <Pass/Warn/Fail> | <N> | <N> |
| Agentic Setup | <Pass/Warn/Fail> | <N> | <N> |
| Threat Model | <Present/Missing> | — | — |
### Health Thresholds
- **Pass** — zero P0/P1 findings in this category
- **Warn** — P1 findings present but no P0
- **Fail** — P0 findings present
## Detailed Findings
### Correctness
<findings from /review-code correctness>
### Security
<findings from /review-code security>
### API Usage
<findings from /review-code api-usage>
### Consistency
<findings from /review-code consistency>
### Simplicity
<findings from /review-code simplicity>
### Test Coverage
<findings from /review-code coverage>
### Dependencies
<findings from /review-dependencies>
### Tooling
<findings from /review-tooling>
### Dead Code
<findings from /find-dead-code>
### Agentic Setup
<findings from /review-agentic-setup>
### Threat Model
<status and summary>
## Step 6: Generate HTML Report
Convert the markdown report into a styled, interactive HTML page.
- Run the `/frontend-design` skill to load design principles.
- Read `.turbo/audit.md` for the full report content.
- Write a self-contained `.turbo/audit.html` (single file, no external dependencies beyond Google Fonts) that presents all findings from the markdown report with:
  - Dashboard health grid with severity color-coding (red=Fail, amber=Warn, green=Pass)
  - Severity summary bar (P0/P1/P2/P3 counts)
  - Sticky navigation between report sections
  - Collapsible category sections
  - Finding tables with file, line, and description columns
  - Severity badges and color-coded group labels
  - Entrance animations and hover states
  - Print-friendly styles via `@media print`
  - Responsive layout for mobile
## Rules
- If any skill is unavailable or fails, proceed with findings from the remaining skills and note the failure in the report.
- `/peer-review` covers all concerns (correctness, security, api-usage, consistency, simplicity, coverage). Distribute its findings into their matching category sections. Deduplicate findings that overlap with the specialized reviewers.
- Does not modify source code, stage files, or commit.
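One way to implement the deduplication rule above is first-wins keyed on location and category. A sketch only: the key choice and field names are assumptions, not specified by the skill.

```python
def dedupe_findings(findings):
    """Keep the first finding per (file, line, category). Listing the
    specialized reviewers' findings before /peer-review's makes theirs win."""
    seen = set()
    unique = []
    for f in findings:  # each f: dict with "file", "line", "category", ...
        key = (f["file"], f.get("line"), f["category"])
        if key not in seen:
            seen.add(key)
            unique.append(f)
    return unique
```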