# Trends & Metrics

How to monitor rubric performance over time.
Both the interviewer and scorer prompt pages have a trends section in the sticky header.
## Scorer Trends
| Metric | What It Shows |
|---|---|
| Average score | Mean weighted score across all interviews (1-5 scale) |
| Adverse impact rate | % of interviews flagged for review |
| Recommendation breakdown | % Advance / Hold / Decline |
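The three scorer metrics above can be sketched as one aggregation pass over a window of scored interviews. This is a minimal illustration, not the product's implementation: the `ScoredInterview` fields, the recommendation labels, and the metric key names are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class ScoredInterview:
    weighted_score: float   # mean weighted score, 1-5 scale (assumed field)
    flagged: bool           # flagged for adverse-impact review (assumed field)
    recommendation: str     # "Advance" | "Hold" | "Decline" (assumed labels)

def scorer_trends(interviews: list[ScoredInterview]) -> dict:
    """Aggregate average score, adverse impact rate, and the
    recommendation breakdown over a window of interviews."""
    n = len(interviews)
    if n == 0:
        return {"average_score": None, "adverse_impact_rate": None,
                "recommendations": {}}
    recs: dict[str, int] = {}
    for i in interviews:
        recs[i.recommendation] = recs.get(i.recommendation, 0) + 1
    return {
        "average_score": sum(i.weighted_score for i in interviews) / n,
        "adverse_impact_rate": sum(i.flagged for i in interviews) / n,
        # Breakdown as fractions of the window, e.g. {"Advance": 0.5, ...}
        "recommendations": {k: v / n for k, v in recs.items()},
    }
```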
## Interviewer Trends
| Metric | What It Shows |
|---|---|
| Avg duration per question | Normalized by posting depth — shows interview efficiency |
| Completion rate | % of started interviews that complete successfully |
| Downstream scorer outcomes | Avg score and recommendation breakdown from interviews using this version |
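The first two interviewer metrics reduce to simple formulas. The normalization scheme below (dividing per-question time by posting depth) is an illustrative assumption; the document only says the duration metric is depth-normalized, not how.

```python
def avg_duration_per_question(total_seconds: float,
                              question_count: int,
                              posting_depth: int) -> float:
    """Per-question interview time, normalized by posting depth so that
    interviews for deeper postings stay comparable. The divide-by-depth
    normalization is an assumption for illustration."""
    return (total_seconds / question_count) / posting_depth

def completion_rate(started: int, completed: int) -> float:
    """Fraction of started interviews that completed successfully."""
    return completed / started if started else 0.0
```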
## Reading the Indicators
- Green ↑ = improvement (higher scores, higher completion, fewer flags)
- Red ↓ = regression (lower scores, more flags, more declines)
- Arrow direction follows the color, not the sign of the raw delta: for metrics where lower is better, a decrease shows green ↑ (e.g. a drop in adverse impact rate)
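The indicator rules above amount to a small decision function: classify the metric as higher-is-better or lower-is-better, then derive color and arrow from whether the change was an improvement. The set of lower-is-better metric names is an assumption for illustration.

```python
# Metrics where a decrease is an improvement (assumed set, not the product's).
LOWER_IS_BETTER = {"adverse_impact_rate", "decline_rate"}

def indicator(metric: str, delta: float) -> tuple[str, str]:
    """Return (color, arrow) for a metric's change between two windows.
    The arrow follows the color, so green ↑ always means 'got better',
    even when the underlying number went down."""
    if delta == 0:
        return ("gray", "→")
    improved = (delta < 0) if metric in LOWER_IS_BETTER else (delta > 0)
    return ("green", "↑") if improved else ("red", "↓")
```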
## How to Use Trends
- After a rubric change: Monitor for 5-10 interviews to see if the change had the intended effect
- Score distribution shift: If average scores change significantly, investigate whether the rubric change or candidate pool is responsible
- Recommendation drift: If the Advance rate drifts too high or too low, consider adjusting scoring thresholds
- A/B observation: Compare trends across scorer versions to validate methodology changes
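Checking whether a rubric change had the intended effect comes down to comparing a before window against an after window. A minimal sketch, assuming raw score lists are available per window; the 0.25-point shift threshold is an illustrative assumption, not a product default:

```python
def score_shift(before: list[float], after: list[float],
                threshold: float = 0.25) -> str:
    """Compare average weighted scores before and after a rubric change.
    The threshold (in score points on the 1-5 scale) is an assumed cutoff
    for calling a shift meaningful."""
    if not before or not after:
        return "insufficient data"
    delta = sum(after) / len(after) - sum(before) / len(before)
    if abs(delta) < threshold:
        return "stable"
    return "shift up" if delta > 0 else "shift down"
```

The same comparison works for A/B observation: feed each scorer version's window in as `before` and `after` to see whether a methodology change moved the average.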