Talent Systems — Science Team
Scorer System

Scorer Prompt Anatomy

This page documents every piece of data fed into the AI scorer after an interview.

The scorer runs after every interview, analyzing the transcript against the competency framework to produce a scorecard.

Built by: buildScorerPrompt() in lib/scorer-prompt.ts
Model: Claude Haiku (dev) / Claude Sonnet (prod)
Admin page: /admin/rubrics

What the Scorer Receives

Science-Owned Fields (editable via admin UI)

| Field | Config key | Purpose |
| --- | --- | --- |
| System identity | `system_identity` | How the scorer identifies its role. Token: `{role}` |
| Scoring philosophy | `scoring_philosophy` | Anchoring principles ("Score conservatively...") |
| Ownership rule | `ownership_rule` | How to handle collective vs. individual language |
| Specificity rule | `specificity_rule` | How to handle vague answers |
| Quantification bonus | `quantification_bonus` | How metrics affect borderline scores |
| Coached response flag | `coached_response_flag` | How to detect rehearsed answers |
| Adverse impact guidance | `adverse_impact_guidance` | What to flag for review |
| Scoring rules | `scoring_rules` | Add/remove list of additional rules |
| Advance threshold | `advance_threshold` | Weighted score ≥ this → "Advance" (default: 3.5) |
| Hold threshold | `hold_threshold` | Weighted score ≥ this → "Hold", else "Decline" (default: 2.5) |
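The two threshold keys above map a weighted score to a recommendation. A minimal sketch of that mapping, assuming the defaults shown (the function name and signature are illustrative, not the actual implementation in lib/scorer-prompt.ts):

```typescript
// Hypothetical sketch of threshold logic; defaults mirror the
// advance_threshold (3.5) and hold_threshold (2.5) config keys above.
type Recommendation = "Advance" | "Hold" | "Decline";

function recommend(
  weightedScore: number,
  advanceThreshold = 3.5, // config key: advance_threshold
  holdThreshold = 2.5,    // config key: hold_threshold
): Recommendation {
  if (weightedScore >= advanceThreshold) return "Advance";
  if (weightedScore >= holdThreshold) return "Hold";
  return "Decline";
}
```

Both comparisons are inclusive, so a score exactly at a threshold takes the higher outcome.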

Dynamic Fields (per posting/interview)

| Field | Source | Identity visible? |
| --- | --- | --- |
| Competency names | `postings.required_competencies` | No |
| Competency weights | `postings.required_competencies` | No |
| Competency anchors (1-5) | `competency_framework.anchors` | No |
| Integrity flags | `integrity_flags` | No identity — just flag metadata |
| Transcript | Retell, name-scrubbed | No — `[CANDIDATE]` |
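The competency weights from the posting feed the weighted score that the thresholds compare against. A sketch of that calculation, assuming a weighted average of per-competency 1-5 scores (the shapes and names here are assumptions, not the actual schema):

```typescript
// Hypothetical shape: one scored competency from the scorecard.
interface CompetencyScore {
  name: string;   // from postings.required_competencies
  weight: number; // relative weight from the posting
  score: number;  // 1-5, anchored by competency_framework.anchors
}

// Assumed calculation: weight-normalized average of competency scores.
function weightedScore(scores: CompetencyScore[]): number {
  const totalWeight = scores.reduce((sum, c) => sum + c.weight, 0);
  if (totalWeight === 0) return 0;
  return scores.reduce((sum, c) => sum + c.score * c.weight, 0) / totalWeight;
}
```

Normalizing by total weight keeps the result on the same 1-5 scale as the anchors regardless of how many competencies a posting defines.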

Engineering-Owned (not editable via UI)

| Field | Purpose |
| --- | --- |
| Output format (JSON schema) | Defines the scorecard structure — scores, evidence, strengths, concerns, recommendation |
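One plausible TypeScript shape for the scorecard that output schema enforces, built only from the fields the table names (scores, evidence, strengths, concerns, recommendation); nested field names are assumptions, not the actual engineering-owned schema:

```typescript
// Hypothetical per-competency result within the scorecard.
interface CompetencyResult {
  competency: string;
  score: number;      // 1-5 against the anchors
  evidence: string[]; // transcript quotes supporting the score
}

// Hypothetical top-level scorecard the scorer must emit as JSON.
interface Scorecard {
  scores: CompetencyResult[];
  strengths: string[];
  concerns: string[];
  recommendation: "Advance" | "Hold" | "Decline";
}

// Example instance matching the assumed shape.
const example: Scorecard = {
  scores: [
    { competency: "Communication", score: 4, evidence: ["[CANDIDATE] walked through the rollout plan step by step."] },
  ],
  strengths: ["Clear, structured answers"],
  concerns: ["Limited metrics cited"],
  recommendation: "Advance",
};
```

Keeping the recommendation as a closed union mirrors the three outcomes the thresholds can produce, so a malformed model response fails validation rather than flowing downstream.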

What the Scorer Does NOT Receive

  • Candidate name, email, or demographics
  • Resume text
  • Job description
  • Which interview style or mode was used
  • Which questions were scripted vs. adaptive
