Training & Skill Development

AI-assisted development is a skill, not an instinct. Giving developers access to AI tools without training is like giving a team a new programming language and expecting production code by Monday. This section defines the training curricula, skill assessment frameworks, learning paths, and certification programs that build AI-assisted development capabilities systematically across all engineering roles.

Training Philosophy

The AEEF training approach is built on four principles:

  1. Progressive Depth: Training moves from literacy to proficiency to mastery, with each level building on the previous one
  2. Role-Specific: Different roles require different skills; one-size-fits-all training wastes time and misses critical needs
  3. Practice-Based: AI-assisted development is a practical skill. Training MUST include hands-on exercises, not just theory
  4. Continuously Updated: AI tools evolve rapidly. Training materials MUST be reviewed and updated quarterly
info

Training is not a one-time event. It is a continuous program. Organizations MUST budget for ongoing training, not just initial onboarding. The AI tools and best practices that are current today will be outdated within 12 months.

Skill Assessment Framework

Before designing training, organizations MUST assess current skill levels to identify gaps and prioritize investment.

AI-Assisted Development Skill Matrix

| Skill Domain | Beginner (1) | Competent (2) | Proficient (3) | Expert (4) |
|---|---|---|---|---|
| AI Tool Operation | Can access and use basic features | Uses all standard features effectively | Configures tools for optimal performance | Evaluates and customizes tools for team needs |
| Prompt Engineering | Writes basic prompts with simple instructions | Constructs structured prompts with context | Designs multi-step prompts with constraints and examples | Creates reusable prompt templates, mentors others |
| Context Preparation | Includes minimal context | Selects relevant code and documentation | Curates optimal context packs for different scenarios | Designs context strategies for the team |
| Output Evaluation | Accepts/rejects based on obvious errors | Evaluates correctness, style, and edge cases | Assesses security, performance, and architectural fit | Defines evaluation criteria for the team |
| Iterative Refinement | Makes one attempt, then codes manually | Refines 2-3 times with improved prompts | Systematically refines using the DCRI model | Optimizes refinement workflows for efficiency |
| Quality & Security | Aware that AI code needs review | Applies standard review practices to AI code | Applies enhanced review for AI-specific risks | Designs quality gates and review processes |
| Workflow Design | Uses AI ad-hoc, no consistent pattern | Follows team workflow patterns | Designs and improves workflow patterns | Architects organizational workflow standards |
| Mentoring & Teaching | N/A | Can explain basic AI tool usage | Mentors team members in AI-assisted practices | Trains across the organization, contributes to curriculum |

Assessment Process

  1. Self-Assessment: Developers complete the skill matrix self-assessment (15 minutes)
  2. Peer Validation: AI Champions validate self-assessments through brief observation (optional but RECOMMENDED)
  3. Gap Analysis: Compare individual assessments to role-level expectations
  4. Training Prescription: Assign learning path modules based on identified gaps

Assessment Cadence:

  • Initial assessment: During onboarding or program launch
  • Reassessment: Every 6 months
  • Targeted assessment: After completing a training module or certification
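
The gap-analysis and training-prescription steps above can be sketched as a small script. The skill domain names mirror the skill matrix, but the role-level expectations and the domain-to-module mapping below are illustrative assumptions, not AEEF-mandated values.

```python
# Illustrative sketch of gap analysis (assessment levels 1-4).
# Role expectations and the gap-to-module mapping are assumptions.

SKILL_DOMAINS = [
    "AI Tool Operation", "Prompt Engineering", "Context Preparation",
    "Output Evaluation", "Iterative Refinement", "Quality & Security",
    "Workflow Design", "Mentoring & Teaching",
]

# Hypothetical role-level expectations (1=Beginner .. 4=Expert).
ROLE_EXPECTATIONS = {
    "developer": {d: 3 for d in SKILL_DOMAINS},
    "ai_champion": {d: 4 for d in SKILL_DOMAINS},
}

# Hypothetical mapping from skill domain to a Tier 2 module.
MODULES_BY_DOMAIN = {
    "Prompt Engineering": "2.1 Prompt Engineering",
    "Context Preparation": "2.2 Context Preparation",
    "Output Evaluation": "2.4 Output Evaluation",
}

def prescribe_training(role: str, assessment: dict[str, int]) -> list[str]:
    """Return training modules for every domain below the role expectation."""
    expected = ROLE_EXPECTATIONS[role]
    gaps = [d for d in SKILL_DOMAINS if assessment.get(d, 1) < expected[d]]
    return sorted({MODULES_BY_DOMAIN[d] for d in gaps if d in MODULES_BY_DOMAIN})

# Example: a developer strong across the board but weak on output evaluation.
scores = {d: 3 for d in SKILL_DOMAINS}
scores["Output Evaluation"] = 2
print(prescribe_training("developer", scores))  # ['2.4 Output Evaluation']
```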

Training Curricula

Tier 1: AI Literacy Fundamentals (All Engineering Staff)

Duration: 4 hours (can be split across 2 sessions)
Format: Instructor-led with hands-on exercises
Prerequisites: None

Curriculum:

| Module | Duration | Content |
|---|---|---|
| 1.1 AI-Assisted Development Landscape | 45 min | What AI tools do, how they work (conceptually), industry adoption trends, organizational strategy |
| 1.2 Quality and Risk Awareness | 45 min | Documented quality risks (1.7x issues, 2.74x vulnerabilities), why review matters, the AEEF quality framework |
| 1.3 Hands-On Tool Introduction | 90 min | Basic tool usage, simple code generation, accepting/rejecting suggestions, basic prompting |
| 1.4 Organizational Standards | 60 min | Approved tools, prompt repository, workflow standards, quality gates, where to get help |

Tier 2: Practitioner Skills (All Developers)

Duration: 12 hours (delivered over 2-3 weeks)
Format: Blended (instructor-led workshops + self-paced exercises + pair programming)
Prerequisites: Tier 1 completion

Curriculum:

| Module | Duration | Content |
|---|---|---|
| 2.1 Prompt Engineering | 2 hours | Structured prompting, context inclusion, constraint specification, prompt patterns |
| 2.2 Context Preparation | 2 hours | Context types, preparation checklist, context packs, context window management |
| 2.3 The DCRI Workflow | 2 hours | Task decomposition, context preparation, iterative refinement, integration practices |
| 2.4 Output Evaluation | 2 hours | Code review for AI output, security checks, quality assessment, common AI failure modes |
| 2.5 Test Generation | 2 hours | AI-assisted test creation, assertion validation, coverage analysis, test strategy design |
| 2.6 Practical Workshop | 2 hours | End-to-end feature development using AI-assisted workflow, peer review |

Tier 3: Advanced Practitioner (Senior Engineers, AI Champions)

Duration: 8 hours (delivered over 2 weeks)
Format: Workshop-based with real project exercises
Prerequisites: Tier 2 completion + 3 months of AI-assisted development experience

Curriculum:

| Module | Duration | Content |
|---|---|---|
| 3.1 Advanced Prompt Design | 2 hours | Multi-step prompts, chain-of-thought, few-shot patterns, domain-specific prompting |
| 3.2 Workflow Architecture | 2 hours | Designing team workflows, automation library development, prompt repository curation |
| 3.3 Quality Gate Design | 2 hours | Designing AI-specific quality gates, static analysis configuration, security scanning |
| 3.4 Mentoring & Knowledge Transfer | 2 hours | Teaching AI-assisted development, peer coaching, community of practice facilitation |

Tier 4: Leadership & Strategy (Engineering Managers, CoE Staff)

Duration: 6 hours
Format: Seminar with case studies
Prerequisites: Tier 1 completion + management role

Curriculum:

| Module | Duration | Content |
|---|---|---|
| 4.1 Managing AI-Augmented Teams | 2 hours | Estimation changes, capacity planning, performance evaluation |
| 4.2 Change Management | 2 hours | Change management strategies, resistance management, communication |
| 4.3 Metrics & ROI | 2 hours | Metrics framework, ROI calculation, stakeholder reporting |

Learning Paths

Path A: Developer (IC)

Tier 1 → Tier 2 → [3 months practice] → Tier 3 (optional) → Ongoing quarterly updates

Path B: Engineering Manager

Tier 1 → Tier 4 → Tier 2 (abbreviated, hands-on awareness) → Ongoing quarterly updates

Path C: AI Champion

Tier 1 → Tier 2 → [3 months practice] → Tier 3 → Champion-specific coaching → Ongoing monthly CoP

Path D: QA Engineer

Tier 1 → Tier 2 (with QA-specific exercises) → [3 months practice] → Tier 3 Module 3.3 → Ongoing updates

Path E: New Hire Onboarding

Tier 1 (Week 1) → Tier 2 (Weeks 2-4) → Buddy pairing with AI Champion (Months 1-3)
tip

New hires who have AI-assisted development experience from previous roles SHOULD take a placement assessment and may skip Tier 1 if they score at the Competent level or above on the skill matrix. They MUST still complete the Organizational Standards module (1.4) regardless of prior experience.

Certification Program

Certification Levels

| Level | Name | Requirements | Validity |
|---|---|---|---|
| Foundation | AEEF AI-Assisted Developer | Tier 1 + Tier 2 completion + skills assessment at Competent (2) or above | 1 year |
| Practitioner | AEEF AI-Assisted Development Practitioner | Foundation + 6 months experience + skills assessment at Proficient (3) or above | 1 year |
| Expert | AEEF AI-Assisted Development Expert | Practitioner + Tier 3 completion + demonstrated contribution to prompt repository and workflows | 2 years |
| Champion | AEEF AI Champion | Expert + Tier 3 Module 3.4 + active Champion role for 6+ months | 2 years |

Certification Maintenance

  • Foundation and Practitioner certifications require annual renewal through a brief reassessment and completion of quarterly update modules
  • Expert and Champion certifications require biennial renewal plus demonstrated ongoing contribution
  • Certifications automatically expire if the holder does not complete required renewal activities
  • Expired certifications can be reinstated through reassessment without repeating full training
warning

Certification MUST NOT be used as a gatekeeping mechanism that prevents developers from using AI tools. All developers with Tier 1 training may use approved AI tools. Certification is a recognition of skill level, not a prerequisite for tool access.
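
The validity periods and expiry rules above can be expressed as a small validity check. This is a hedged sketch: the function name, data model, and status labels are hypothetical, not part of the AEEF specification.

```python
# Hypothetical sketch of the certification-validity rules described above.
from datetime import date

# Validity periods from the certification table.
VALIDITY_YEARS = {
    "Foundation": 1,
    "Practitioner": 1,
    "Expert": 2,
    "Champion": 2,
}

def certification_status(level: str, issued: date,
                         renewal_completed: bool, today: date) -> str:
    """Classify a certification as active, renewed, or expired."""
    expiry = issued.replace(year=issued.year + VALIDITY_YEARS[level])
    if today < expiry:
        return "active"
    # Past the validity window: expired unless renewal activities were completed.
    # Expired certifications can still be reinstated via reassessment.
    return "active (renewed)" if renewal_completed else "expired"

print(certification_status("Foundation", date(2024, 1, 15),
                           False, date(2025, 3, 1)))  # expired
```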

Training Delivery Standards

Instructor Requirements

  • Tier 1 and Tier 2 training MAY be delivered by AI Champions who have completed Tier 3
  • Tier 3 training MUST be delivered by CoE staff or certified external trainers
  • Tier 4 training MUST be delivered by CoE leadership or senior engineering leadership
  • All instructors MUST have active, hands-on experience with AI-assisted development (not just theoretical knowledge)

Training Environment

  • All hands-on training MUST use the organization's approved AI tools in a sandboxed training environment
  • Training environments MUST NOT have access to production code or data
  • Training exercises SHOULD use realistic but fictional codebases that represent the organization's technology stack
  • Training environments MUST be available for self-study after formal sessions

Training Effectiveness Measurement

Training effectiveness MUST be measured and reported to the CoE:

| Metric | Target | Method |
|---|---|---|
| Training completion rate | 95% within first 6 months | LMS tracking |
| Knowledge assessment pass rate | 85% first attempt | Post-training assessment |
| Skill level improvement | Average +1 level within 3 months | Skill matrix reassessment |
| Developer satisfaction with training | 4.0+ out of 5.0 | Post-training survey |
| On-the-job application rate | 80% using learned techniques within 2 weeks | Follow-up survey + telemetry |
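
A CoE report comparing measured values against these targets might be generated as follows. The thresholds come from the table above; the metric keys and reporting structure are illustrative assumptions.

```python
# Sketch: compare measured training metrics against the targets above.
# Metric keys and the example measurements are illustrative assumptions.

TARGETS = {
    "completion_rate": 0.95,       # within first 6 months
    "assessment_pass_rate": 0.85,  # first attempt
    "skill_improvement": 1.0,      # average level gain within 3 months
    "satisfaction": 4.0,           # out of 5.0
    "application_rate": 0.80,      # within 2 weeks
}

def effectiveness_report(measured: dict[str, float]) -> dict[str, bool]:
    """Return, per metric, whether the measured value meets its target."""
    return {name: measured.get(name, 0.0) >= target
            for name, target in TARGETS.items()}

report = effectiveness_report({
    "completion_rate": 0.97,
    "assessment_pass_rate": 0.82,
    "skill_improvement": 1.2,
    "satisfaction": 4.3,
    "application_rate": 0.84,
})
# Flag metrics that missed their target for CoE follow-up.
print([m for m, ok in report.items() if not ok])  # ['assessment_pass_rate']
```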

Continuous Learning

Beyond formal training, organizations MUST support continuous learning:

  • Monthly AI Tool Updates: Brief sessions (30 minutes) covering new features, updated best practices, and lessons learned
  • Community of Practice: Regular forums where developers share techniques, prompts, and experiences
  • External Learning Budget: Allocate budget for developers to attend AI-related conferences and courses
  • Internal Knowledge Base: Maintain a searchable repository of AI-assisted development tips, tricks, and case studies, linked to the prompt repository
  • Pair Programming with AI: Encourage regular pair programming sessions where developers collaborate on AI-assisted tasks, learning from each other's approaches