
Maturity Assessment & Certification

This section defines the formal maturity assessment and certification process for AI-assisted engineering. Certification provides an objective, evidence-based evaluation of each team's and the organization's capability in AI-assisted development. It validates that the investment in the transformation has produced genuine, measurable capability — not just tool adoption. The certification process draws on the governance framework from Phase 2, the metrics infrastructure from Expanded Metrics, and the standards established across all three phases of the transformation.

Assessment Methodology

Assessment Approach

The maturity assessment uses a structured evaluation methodology with three components:

  1. Evidence review — Examination of artifacts, metrics, and documentation against defined criteria
  2. Practitioner interviews — Structured interviews with developers, Tech Leads, and managers to assess understanding and application of standards
  3. Process observation — Direct observation of AI-assisted development practices, code reviews, and governance processes

Assessment Scope

Assessments are conducted at two levels:

| Level | Scope | Frequency | Assessors |
| --- | --- | --- | --- |
| Team assessment | Individual engineering team | Semi-annually | 1 internal assessor + 1 peer team representative |
| Organizational assessment | Enterprise-wide aggregation | Annually | 2 internal assessors + 1 external assessor (RECOMMENDED) |

Assessment Process

| Step | Duration | Activities |
| --- | --- | --- |
| 1. Preparation | 1 week | Assessor reviews team metrics, documentation, and artifacts; schedules interviews |
| 2. Evidence collection | 1 week | Assessor gathers and reviews required evidence (see Evidence Requirements below) |
| 3. Interviews | 2-3 days | Structured interviews with 3-5 team members at different levels |
| 4. Process observation | 2-3 days | Observe code reviews, sprint ceremonies, governance processes |
| 5. Scoring | 2-3 days | Assessor scores against criteria and prepares report |
| 6. Review | 1 day | Findings shared with team lead for factual accuracy check |
| 7. Report delivery | 1 day | Final report delivered with certification level and recommendations |

Evidence Requirements

Each assessment MUST include evidence from the following categories:

Documentation Evidence

| Evidence Item | Description | Required |
| --- | --- | --- |
| Team Acceptable Use Policy acknowledgments | Signed AUP for all team members | REQUIRED |
| Training completion records | Proof that all team members completed required training | REQUIRED |
| AI attribution metadata compliance | Sample of recent commits showing proper attribution | REQUIRED |
| Code review records | Sample of 10+ AI-assisted PR reviews showing review quality | REQUIRED |
| Prompt library contributions | Team contributions to the organizational prompt library | RECOMMENDED |
| Lessons learned submissions | Team submissions to the lessons-learned repository | RECOMMENDED |

Metrics Evidence

| Evidence Item | Description | Required |
| --- | --- | --- |
| Velocity trend data | Sprint velocity data showing trend since baseline | REQUIRED |
| Defect density data | Defect density trend since baseline | REQUIRED |
| Vulnerability density data | Security finding trend since baseline | REQUIRED |
| Gate pass rate data | CI/CD governance gate pass/fail rates | REQUIRED |
| Developer satisfaction survey results | Most recent survey scores for the team | REQUIRED |
| AI-attributed defect tracking | Data on defects traced to AI-generated code | RECOMMENDED |

Process Evidence

| Evidence Item | Description | Required |
| --- | --- | --- |
| Governance compliance | Evidence that governance gates are followed consistently | REQUIRED |
| Exception handling | Records of any governance exceptions with proper documentation | REQUIRED (if applicable) |
| Risk tier assignment | Evidence that the team's projects have appropriate risk tier assignments | REQUIRED |
| Community participation | Attendance and participation records for Community of Practice (CoP) sessions and showcases | RECOMMENDED |
| Continuous improvement | Evidence of improvement actions taken based on feedback | RECOMMENDED |
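
Because the REQUIRED items are fixed, evidence completeness can be pre-checked automatically before Step 2 of the assessment begins. The sketch below is a minimal illustration that encodes the three categories above as data; the item keys (`aup_acknowledgments`, `velocity_trend`, and so on) and the function name are hypothetical labels, not a prescribed schema.

```python
# Illustrative sketch: pre-assessment completeness check for REQUIRED evidence.
# Item keys are hypothetical shorthand for the evidence tables above.

REQUIRED_EVIDENCE = {
    "documentation": [
        "aup_acknowledgments",
        "training_completion_records",
        "ai_attribution_samples",
        "code_review_records",
    ],
    "metrics": [
        "velocity_trend",
        "defect_density_trend",
        "vulnerability_density_trend",
        "gate_pass_rates",
        "developer_satisfaction_survey",
    ],
    "process": [
        "governance_compliance_evidence",
        "risk_tier_assignments",
        # Exception records are REQUIRED only if exceptions were granted,
        # so they are checked separately rather than listed here.
    ],
}

def missing_evidence(submitted: set[str]) -> list[str]:
    """Return the REQUIRED items absent from the submitted evidence set."""
    return [item
            for items in REQUIRED_EVIDENCE.values()
            for item in items
            if item not in submitted]
```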

Scoring Criteria

The assessment evaluates six domains, each scored on a 1-5 maturity scale.

Maturity Scale

| Level | Name | Description |
| --- | --- | --- |
| 1 | Initial | AI tools are used ad hoc with no consistent process or governance |
| 2 | Developing | Basic processes are in place but inconsistently applied; governance is reactive |
| 3 | Defined | Processes are documented, consistently applied, and governed; metrics are collected |
| 4 | Managed | Processes are measured and controlled; decisions are data-driven; continuous improvement is active |
| 5 | Optimizing | Processes are continuously refined; AI-first workflows are embedded; the team contributes to organizational improvement |

Assessment Domains

| Domain | Weight | Key Criteria |
| --- | --- | --- |
| Governance Compliance | 25% | Adherence to governance framework, gate compliance, audit trail completeness, exception handling |
| Code Quality | 20% | Defect density trend, AI-attributed defect rate, code review thoroughness, test coverage |
| Security Practices | 20% | Vulnerability density trend, data classification compliance, security review practices, incident handling |
| Developer Competency | 15% | Training completion, prompt engineering skill, understanding of AI limitations, critical evaluation of AI output |
| Process Maturity | 10% | Workflow integration, AI attribution practices, metrics collection, Community of Practice participation |
| Continuous Improvement | 10% | Feedback contribution, improvement action participation, A/B test involvement, knowledge sharing |
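
To make the weighting concrete, the following sketch computes the weighted maturity score from per-domain scores, using the weights in the table above. The function and domain keys are illustrative, not part of any mandated tooling.

```python
# Illustrative sketch: weighted maturity score from per-domain scores (1-5).
# Weights mirror the Assessment Domains table above.

DOMAIN_WEIGHTS = {
    "governance_compliance": 0.25,
    "code_quality": 0.20,
    "security_practices": 0.20,
    "developer_competency": 0.15,
    "process_maturity": 0.10,
    "continuous_improvement": 0.10,
}

def weighted_maturity_score(domain_scores: dict[str, float]) -> float:
    """Return the weighted score (e.g. 3.05) given scores keyed by domain."""
    missing = DOMAIN_WEIGHTS.keys() - domain_scores.keys()
    if missing:
        raise ValueError(f"Missing domain scores: {sorted(missing)}")
    return round(
        sum(DOMAIN_WEIGHTS[d] * domain_scores[d] for d in DOMAIN_WEIGHTS), 2
    )

# Example: a team strong on governance but weaker on improvement practices.
scores = {
    "governance_compliance": 4,
    "code_quality": 3,
    "security_practices": 3,
    "developer_competency": 3,
    "process_maturity": 2,
    "continuous_improvement": 2,
}
print(weighted_maturity_score(scores))  # 3.05 -> Silver score range
```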

Scoring Rubric Example: Governance Compliance

| Score | Criteria |
| --- | --- |
| 1 | No governance framework applied; AI tools used without oversight |
| 2 | Governance framework exists but compliance is inconsistent; some gates are skipped |
| 3 | Governance framework consistently applied; all gates are followed; exceptions are documented |
| 4 | Governance is data-driven; gate effectiveness is measured; false positives are tracked and reduced |
| 5 | Governance is continuously refined; the team proactively identifies governance improvements; exception rate near zero |

Certification Levels

Based on the weighted assessment score, teams and organizations are awarded one of four certification levels; scores below 2.0 result in a Not Certified outcome:

| Certification Level | Score Range | Description |
| --- | --- | --- |
| Not Certified | 1.0 - 1.9 | Does not meet minimum standards for AI-assisted development; remediation required |
| Bronze: Foundational | 2.0 - 2.9 | Meets basic requirements; processes are in place but need strengthening |
| Silver: Proficient | 3.0 - 3.9 | Solid, consistent practices; governance is effective; metrics-driven |
| Gold: Advanced | 4.0 - 4.5 | High maturity across all domains; strong continuous improvement; contributing to organizational excellence |
| Platinum: Exemplary | 4.6 - 5.0 | Best-in-class practices; active innovator; model for other teams |

Minimum Requirements by Level

  • Bronze — Governance Compliance and Security Practices domains each score >= 2; no domain scores 1
  • Silver — All domains score >= 2; Governance and Security score >= 3; training 100% complete
  • Gold — All domains score >= 3; at least 3 domains score >= 4; active in continuous improvement
  • Platinum — All domains score >= 4; active contributor to organizational prompt library and knowledge sharing
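
Taken together, the score ranges and per-level minimums determine the awarded level. The sketch below illustrates one plausible decision procedure, assuming a team that meets a score range but misses that level's minimums falls to the highest level whose requirements it fully satisfies; qualitative criteria (such as Gold's "active in continuous improvement" or Platinum's contribution requirement) remain assessor judgments and are not modeled here.

```python
# Illustrative sketch: map a weighted score plus per-domain minimums to a
# certification level, per the score ranges and minimum requirements above.
# Fall-through behavior is an assumption, not specified by the process.

def certification_level(weighted: float, domains: dict[str, float],
                        training_complete: bool) -> str:
    gov = domains["governance_compliance"]
    sec = domains["security_practices"]
    lowest = min(domains.values())
    strong = sum(1 for s in domains.values() if s >= 4)

    # Check from the top down: each level requires both its score
    # threshold and its domain minimums.
    if weighted >= 4.6 and lowest >= 4:
        return "Platinum: Exemplary"
    if weighted >= 4.0 and lowest >= 3 and strong >= 3:
        return "Gold: Advanced"
    if weighted >= 3.0 and lowest >= 2 and gov >= 3 and sec >= 3 \
            and training_complete:
        return "Silver: Proficient"
    if weighted >= 2.0 and lowest >= 2:  # implies gov >= 2 and sec >= 2
        return "Bronze: Foundational"
    return "Not Certified"
```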

Certification Outcome Actions

| Certification | Action |
| --- | --- |
| Not Certified | Mandatory remediation plan within 30 days; re-assessment within 90 days; governance restrictions may apply |
| Bronze | Improvement plan recommended; re-assessment in 6 months |
| Silver | Target level for Phase 3 completion; eligible for self-service governance |
| Gold | Eligible for reduced governance oversight (automated gates sufficient); team members eligible to become assessors |
| Platinum | Recognized as a center of excellence; team members eligible to mentor other teams and contribute to standards |

Renewal Process

Certification is not permanent. It MUST be renewed to ensure sustained capability:

  • Team certification — Valid for 12 months; renewal requires re-assessment
  • Organizational certification — Valid for 12 months; renewal requires full organizational assessment
  • Renewal simplification — Teams maintaining Gold or Platinum status MAY undergo a streamlined renewal (evidence review only, no interviews) if no significant process changes have occurred
  • Downgrade triggers — A team's certification MAY be downgraded between renewal periods if: a Critical security incident occurs, sustained quality degradation is observed (3+ months), or systematic governance violations are identified
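
These renewal and downgrade rules reduce to a few mechanical checks. The sketch below illustrates them with a hypothetical `TeamStatus` record; the field and function names are illustrative only, not a prescribed data model.

```python
# Illustrative sketch: renewal-path and downgrade-trigger checks implied
# by the rules above. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class TeamStatus:
    level: str                            # e.g. "Gold: Advanced"
    months_since_certification: int
    critical_security_incident: bool
    months_of_quality_degradation: int
    systematic_governance_violations: bool
    significant_process_changes: bool

def renewal_due(status: TeamStatus) -> bool:
    # Team certification is valid for 12 months.
    return status.months_since_certification >= 12

def streamlined_renewal_eligible(status: TeamStatus) -> bool:
    # Gold/Platinum teams MAY skip interviews if processes are stable.
    return (status.level.startswith(("Gold", "Platinum"))
            and not status.significant_process_changes)

def downgrade_triggered(status: TeamStatus) -> bool:
    # Any one trigger MAY prompt a downgrade between renewal periods.
    return (status.critical_security_incident
            or status.months_of_quality_degradation >= 3
            or status.systematic_governance_violations)
```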

Annual Renewal Calendar

| Month | Activity |
| --- | --- |
| Months 1-2 | Team self-assessments using the certification rubric |
| Months 2-3 | Internal assessor reviews and interviews |
| Months 3-4 | Scoring and report preparation |
| Month 4 | Certification decisions and communication |
| Months 5-6 | Remediation activities for teams below target |
| Month 7 | Re-assessment for remediated teams |

Maturity certification is the accountability mechanism that ensures the transformation delivers lasting value. Without it, practices naturally degrade over time as teams face deadline pressure and staff turnover. With it, excellence in AI-assisted engineering becomes a measured, recognized, and sustained organizational capability.