Ongoing Responsibilities
What the science team owns on an ongoing basis after initial setup.
After the initial review and configuration, the science team has ongoing responsibilities:
Every Sprint
- Review the Sprint Log: check what shipped and flag any scoring/rubric changes that need science review.
- Monitor scorer quality: periodically run test interviews and review the resulting scorecards to confirm scores are consistent with expectations.
- Review rubric trends: On the scorer prompt page, check the trends section for drift — average scores, recommendation breakdown, adverse impact rates.
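The drift metrics above can be sketched as a small analysis pass. This is a minimal illustration, not the platform's implementation: the scorecard field names (`score`, `recommendation`, `group`) and the "advance" label are assumptions, and adverse impact is checked here with the four-fifths rule.

```python
from statistics import mean
from collections import Counter

# Hypothetical scorecard records; field names are illustrative assumptions.
scorecards = [
    {"score": 4.2, "recommendation": "advance", "group": "A"},
    {"score": 3.1, "recommendation": "hold", "group": "B"},
    {"score": 4.5, "recommendation": "advance", "group": "A"},
    {"score": 2.8, "recommendation": "reject", "group": "B"},
]

avg_score = mean(s["score"] for s in scorecards)
rec_breakdown = Counter(s["recommendation"] for s in scorecards)

# Four-fifths rule: compare each group's "advance" rate against the
# highest group's rate; a ratio below 0.8 is a potential adverse-impact flag.
groups = {s["group"] for s in scorecards}
rates = {
    g: sum(1 for s in scorecards if s["group"] == g and s["recommendation"] == "advance")
       / sum(1 for s in scorecards if s["group"] == g)
    for g in groups
}
top_rate = max(rates.values())
flags = {g: r / top_rate < 0.8 for g, r in rates.items()}
print(avg_score, rec_breakdown, flags)
```

A real pass would run over a rolling window and compare against the previous version's baseline rather than a single snapshot.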
When Rubrics Change
- Every save creates a versioned snapshot (see Rubric Versioning)
- Use descriptive version names (e.g., "Adjusted ownership rule threshold", "Expanded coached response criteria")
- Choose the right change type:
  - Revision (major): fundamental methodology change
  - Refinement (minor): moderate adjustment
  - Calibration (patch): fine-tuning
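The three change types map onto semver-style version bumps. A minimal sketch, assuming a `major.minor.patch` version string (the format itself is an assumption; only the change-type names come from this page):

```python
# Map rubric change types to semver-style version bumps.
def bump(version: str, change_type: str) -> str:
    major, minor, patch = (int(p) for p in version.split("."))
    if change_type == "revision":      # major: fundamental methodology change
        return f"{major + 1}.0.0"
    if change_type == "refinement":    # minor: moderate adjustment
        return f"{major}.{minor + 1}.0"
    if change_type == "calibration":   # patch: fine-tuning
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change_type}")

print(bump("2.3.1", "refinement"))  # 2.4.0
```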
When New Competencies Are Needed
- Employers can create custom competencies (either AI-generated or authored manually)
- Platform competencies (visible to all employers) should only be added/modified by the science team
- Each competency needs behavioral anchors for all five levels
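The completeness requirement above can be expressed as a simple data shape with a validation check. This is a hypothetical sketch: the class name, fields, and example anchor text are all illustrative assumptions, not the platform's schema.

```python
from dataclasses import dataclass, field

@dataclass
class Competency:
    name: str
    description: str
    anchors: dict[int, str] = field(default_factory=dict)  # level (1-5) -> behavioral anchor

    def is_complete(self) -> bool:
        # Complete only when every level 1-5 has a non-empty anchor.
        return all(self.anchors.get(level, "").strip() for level in range(1, 6))

ownership = Competency(
    name="Ownership",
    description="Takes responsibility for outcomes",
    anchors={
        1: "Deflects responsibility for failures",
        2: "Acknowledges mistakes only when prompted",
        3: "Owns outcomes within their direct tasks",
        4: "Proactively owns team-level outcomes",
        5: "Models accountability and coaches others to do the same",
    },
)
```

A check like `is_complete()` is a natural gate before a custom competency (AI-generated or manual) becomes usable in scoring.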
When New Interview Types Ship
Each new type (e.g., situational, technical, post-hire) needs:
- Type-specific rubric defaults
- Type-specific scorer rubric
- Review of scorer output quality after first batch of interviews
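Type-specific defaults are naturally a lookup keyed by interview type. A minimal sketch, assuming parameter names like `scale` and `min_evidence_quotes` and a "behavioral" base type as the fallback; none of these names come from this page:

```python
# Illustrative per-type rubric defaults; the type names are from this page,
# the parameter names and values are assumptions.
TYPE_DEFAULTS = {
    "behavioral": {"scale": (1, 5), "min_evidence_quotes": 2},
    "situational": {"scale": (1, 5), "min_evidence_quotes": 1},
    "technical": {"scale": (1, 5), "min_evidence_quotes": 3},
    "post-hire": {"scale": (1, 5), "min_evidence_quotes": 1},
}

def defaults_for(interview_type: str) -> dict:
    # Fall back to the base type's defaults rather than failing on an
    # unrecognized type, so new types degrade gracefully before setup.
    return TYPE_DEFAULTS.get(interview_type, TYPE_DEFAULTS["behavioral"])
```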
When Validity Data Accumulates
- Monitor outcome reporting rates
- Run reliability checks when N is sufficient
- Begin criterion-related validity analysis
- Document content validity evidence per instrument
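The "run analysis only when N is sufficient" gate can be sketched as follows. This illustrates criterion-related validity as a Pearson correlation between interview scores and an outcome measure; the threshold of 30 is an assumed placeholder, since the real minimum depends on the statistical power you want.

```python
from math import sqrt

def pearson_r(xs: list[float], ys: list[float]) -> float:
    # Standard Pearson correlation, written out for self-containment.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

MIN_N = 30  # assumed gate; set from a power analysis in practice

def criterion_validity(scores: list[float], outcomes: list[float]):
    # Withhold the estimate until the sample is large enough to be meaningful.
    if len(scores) < MIN_N:
        return None
    return pearson_r(scores, outcomes)
```

In production this would sit behind the outcome-reporting pipeline, so the analysis unlocks automatically as reported outcomes accumulate.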