Talent Systems — Science Team
Competency Framework

AI Generation Pipeline

How AI generates competencies, questions, and job descriptions (JDs), and what you control.

All AI generation features use configurable prompts stored in the rubrics table (type: ai_generation). You can edit these via the admin portal at /admin/ai-generation.
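As a rough mental model, fetching the stored prompt config might look like the sketch below. The table and column shapes are illustrative assumptions, not the actual schema:

```python
# Illustrative sketch only: the real rubrics schema, driver, and column
# names may differ from what is shown here.
import json
import sqlite3


def load_ai_generation_config(conn: sqlite3.Connection) -> dict:
    """Fetch the prompt config for the ai_generation rubric row.

    Assumes a `rubrics` table with `type` and JSON `config` columns.
    Returns an empty dict when no such row exists.
    """
    row = conn.execute(
        "SELECT config FROM rubrics WHERE type = 'ai_generation'"
    ).fetchone()
    return json.loads(row[0]) if row else {}
```

Edits made in the admin portal would land in this config blob, so prompt changes take effect without a code deploy.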

Competency Generation

Prompt structure:

  • Persona (system prompt): rubrics.config.competency_system_prompt — defines the AI's role (e.g., "You are an I-O psychologist analyzing a job description...")
  • Task instructions (user prompt): rubrics.config.competency_user_prompt — what to do, with tokens for JD text, context, document, and count
  • Output format: Engineering-managed JSON schema appended after task instructions

What you control: The system and user prompts determine anchor quality. If AI-generated anchors are too vague or poorly calibrated, update these prompts.
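To make the three-part structure concrete, here is a hypothetical sketch of how persona, task instructions, and the appended output schema could combine into a chat-style request. The function name, schema placeholder, and assembly details are assumptions for illustration; the real assembly is engineering-managed:

```python
def build_competency_messages(config: dict, jd_text: str, count: int,
                              context: str = "", document: str = "") -> list[dict]:
    """Combine the persona (system prompt), the task instructions with
    tokens filled in, and the engineering-managed JSON output schema.

    `config` is the rubrics.config dict described above; the schema
    string below is a stand-in, not the real schema.
    """
    OUTPUT_SCHEMA = '{"competencies": [{"name": "...", "anchors": ["..."]}]}'
    user_prompt = config["competency_user_prompt"].format(
        jd_text=jd_text, context=context, document=document, count=count
    )
    return [
        {"role": "system", "content": config["competency_system_prompt"]},
        {"role": "user",
         "content": user_prompt + "\n\nReturn JSON matching:\n" + OUTPUT_SCHEMA},
    ]
```

The same pattern applies to question and JD generation; only the config keys and available tokens change.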

Question Spine Generation

Prompt structure:

  • Persona: rubrics.config.question_system_prompt
  • Task instructions: rubrics.config.question_user_prompt with tokens: {jd_text}, {competencies}, {count}, {context}, {document}
  • Output format: Engineering-managed JSON schema

What you control: The prompts determine question quality, including whether questions use proper behavioral framing and target distinguishable anchor levels.

JD Generation / Enhancement

Prompt structure:

  • Generate: rubrics.config.jd_system_prompt + rubrics.config.jd_user_prompt
  • Enhance: rubrics.config.jd_enhance_prompt + rubrics.config.jd_enhance_user_prompt

What you control: Whether generated JDs map cleanly to assessable competencies and include appropriate KSAs (knowledge, skills, and abilities).
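Because generate and enhance use different config keys, selecting the right pair could be sketched as a simple lookup. The mapping below is an assumption for illustration; the real dispatch lives in engineering-managed application code:

```python
# Hypothetical key mapping based on the config keys listed above.
JD_PROMPT_KEYS = {
    "generate": ("jd_system_prompt", "jd_user_prompt"),
    "enhance": ("jd_enhance_prompt", "jd_enhance_user_prompt"),
}


def select_jd_prompts(config: dict, mode: str) -> tuple[str, str]:
    """Return the (system, user) prompt pair for the requested JD mode."""
    system_key, user_key = JD_PROMPT_KEYS[mode]
    return config[system_key], config[user_key]
```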

Token System

Prompts use {tokenName} placeholders that are replaced at runtime with actual data. Available tokens vary by prompt type — the admin UI shows clickable token pills you can insert at your cursor position. Invalid tokens (typos) are flagged with a warning.
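The substitution and typo-flagging behavior can be sketched as follows. This is a hypothetical implementation, not the actual runtime code:

```python
import re

# Matches {tokenName} placeholders, e.g. {jd_text} or {count}.
TOKEN_RE = re.compile(r"\{(\w+)\}")


def render_prompt(template: str, values: dict[str, str]) -> tuple[str, list[str]]:
    """Replace {tokenName} placeholders with runtime values.

    Unknown tokens (likely typos) are left in place and collected so
    they can be surfaced as a warning, mirroring the admin UI behavior.
    """
    unknown = [t for t in TOKEN_RE.findall(template) if t not in values]
    rendered = TOKEN_RE.sub(lambda m: values.get(m.group(1), m.group(0)), template)
    return rendered, unknown
```

For example, rendering "Analyze {jd_text} with {contxt}" with only jd_text supplied leaves {contxt} intact and reports it as unknown, which is the signal the admin UI would turn into a warning.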
