Talent Systems — Science Team
Rubric Versioning

Trends & Metrics

How to monitor rubric performance over time.

Both the interviewer and scorer prompt pages have a trends section in the sticky header.

Scorer prompt page:

| Metric | What It Shows |
| --- | --- |
| Average score | Mean weighted score across all interviews (1-5 scale) |
| Adverse impact rate | % of interviews flagged for review |
| Recommendation breakdown | % Advance / Hold / Decline |

Interviewer prompt page:

| Metric | What It Shows |
| --- | --- |
| Avg duration per question | Normalized by posting depth; shows interview efficiency |
| Completion rate | % of started interviews that complete successfully |
| Downstream scorer outcomes | Avg score and recommendation breakdown from interviews using this version |
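The scorer-page metrics above can be computed directly from interview records. A minimal sketch, assuming a hypothetical `Interview` record whose field names (`weighted_score`, `flagged_for_review`, `recommendation`, `completed`) are illustrative, not a real schema:

```python
from dataclasses import dataclass

@dataclass
class Interview:
    """Hypothetical interview record; field names are illustrative."""
    weighted_score: float   # 1-5 scale
    flagged_for_review: bool
    recommendation: str     # "Advance" | "Hold" | "Decline"
    completed: bool

def trend_metrics(interviews: list[Interview]) -> dict:
    """Aggregate the trend metrics shown in the sticky header."""
    done = [i for i in interviews if i.completed]
    n = len(done)
    return {
        "average_score": sum(i.weighted_score for i in done) / n,
        "adverse_impact_rate": sum(i.flagged_for_review for i in done) / n,
        "recommendation_breakdown": {
            rec: sum(i.recommendation == rec for i in done) / n
            for rec in ("Advance", "Hold", "Decline")
        },
        # Completion rate is relative to all *started* interviews.
        "completion_rate": n / len(interviews),
    }
```

Score-based metrics are computed over completed interviews only, while completion rate is the share of started interviews that finished.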

Reading the Indicators

  • Green ↑ = improvement (higher scores, higher completion, fewer flags)
  • Red ↓ = regression (lower scores, more flags, more declines)
  • Arrow direction follows the color, not the raw delta (a decrease in adverse impact rate shows green ↑, because fewer flags is an improvement)
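The color-follows-improvement rule above can be sketched as a small helper. This is illustrative only; the set of lower-is-better metric names is an assumption:

```python
# Assumed set of metrics where a *decrease* is an improvement.
LOWER_IS_BETTER = {"adverse_impact_rate", "avg_duration_per_question"}

def indicator(metric: str, old: float, new: float) -> str:
    """Return the trend indicator; the arrow follows the color, not the raw delta."""
    if new == old:
        return "no change"
    improved = (new < old) if metric in LOWER_IS_BETTER else (new > old)
    return "green ↑" if improved else "red ↓"
```

So a drop in adverse impact rate renders as `green ↑`, while the same downward delta in average score renders as `red ↓`.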
When to Act

  1. After a rubric change: monitor the next 5-10 interviews to confirm the change had the intended effect
  2. Score distribution shift: if average scores change significantly, investigate whether the rubric change or the candidate pool is responsible
  3. Recommendation drift: if the Advance rate is too high or too low, consider adjusting thresholds
  4. A/B observation: Compare trends across scorer versions to validate methodology changes
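For the A/B observation step, a minimal sketch of grouping interviews by scorer version and comparing mean scores. The `(scorer_version, weighted_score)` tuple shape is an illustrative assumption, not a real API:

```python
from collections import defaultdict
from statistics import mean

def compare_versions(interviews: list[tuple[str, float]]) -> dict[str, float]:
    """Mean weighted score per scorer version.

    `interviews` is a list of (scorer_version, weighted_score) pairs --
    an assumed shape for illustration.
    """
    by_version: dict[str, list[float]] = defaultdict(list)
    for version, score in interviews:
        by_version[version].append(score)
    return {v: round(mean(scores), 2) for v, scores in by_version.items()}
```

With enough interviews per version, a gap between version means is a starting point for validating a methodology change; with only a handful, treat any difference as noise.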
