Post-Implementation Review

The Post-Implementation Review (PIR) is the sixth and final stage of the Operating Model Lifecycle. It captures lessons learned, measures outcomes against the business intent defined in Stage 1, identifies improvement opportunities, and feeds insights back into the AI-assisted development process. The PIR closes the feedback loop that makes the AEEF a self-improving system. Without it, the organization deploys code but never learns whether the initiative achieved its goals or how the process could be improved. PIRs are REQUIRED for all standard and large initiatives; lightweight PIRs are RECOMMENDED for small tasks.

Outcomes Measurement

Measuring Against Business Intent

The primary purpose of the PIR is to determine whether the initiative achieved the outcomes defined in the Business Intent Document. Every success criterion defined in Stage 1 MUST be evaluated.

| Evaluation | Criteria | Action |
|---|---|---|
| Met | Success criterion achieved within defined timeline | Document as achieved; capture contributing factors |
| Partially met | Success criterion partially achieved or achieved outside timeline | Document gap; analyze root cause; determine if further action is needed |
| Not met | Success criterion not achieved | Document failure; conduct root cause analysis; determine corrective action |
| Not measurable | Insufficient data to evaluate criterion | Document data gap; improve measurement for future initiatives |

Measurement Timeline

Not all outcomes are immediately measurable. The PIR SHOULD be conducted in two phases:

| Phase | Timing | Focus |
|---|---|---|
| Technical PIR | 1-2 weeks after deployment | Code quality, security findings, operational stability, deployment smoothness |
| Business PIR | 4-8 weeks after deployment | Business outcomes, user adoption, performance against KPIs |

Outcomes Measurement Template

| Success Criterion | Target | Actual | Status | Notes |
|---|---|---|---|---|
| [Criterion from Business Intent] | [Target value] | [Measured value] | Met / Partially Met / Not Met / Not Measurable | [Analysis] |
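
To keep outcomes measurement consistent and auditable across initiatives, the template can also be captured in a structured form. The sketch below is illustrative only, assuming a simple in-code record; the field and status names mirror the tables above, and the example values are hypothetical rather than drawn from any real initiative.

```python
from dataclasses import dataclass
from enum import Enum

class CriterionStatus(Enum):
    """Evaluation statuses from the outcomes measurement table."""
    MET = "Met"
    PARTIALLY_MET = "Partially met"
    NOT_MET = "Not met"
    NOT_MEASURABLE = "Not measurable"

@dataclass
class OutcomeRecord:
    """One row of the outcomes measurement template."""
    criterion: str   # Success criterion from the Business Intent Document
    target: str      # Target value defined in Stage 1
    actual: str      # Measured value at PIR time
    status: CriterionStatus
    notes: str = ""  # Analysis, root cause, or data-gap commentary

# Hypothetical criterion evaluated during the Business PIR
example = OutcomeRecord(
    criterion="Reduce median checkout latency",
    target="< 400 ms",
    actual="520 ms",
    status=CriterionStatus.PARTIALLY_MET,
    notes="Improved from 780 ms; remaining gap traced to a third-party call.",
)
```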

Lessons Learned

Lesson Categories

Lessons learned MUST be captured across four dimensions:

1. AI Effectiveness Lessons

| Question | Purpose |
|---|---|
| How effective was AI assistance for this initiative? | Assess overall AI value |
| Which tasks benefited most from AI? Which benefited least? | Identify optimal AI use cases |
| What prompt patterns were most effective? | Feed into the Prompt Library |
| Where did AI-generated code require the most hardening effort? | Improve hardening focus areas |
| Were there any AI-generated issues not caught until production? | Identify governance gaps |

2. Process Lessons

| Question | Purpose |
|---|---|
| Did the time box for AI exploration feel appropriate? | Calibrate future time boxes |
| Was the hardening effort proportional to the exploration effort? | Validate hardening expectations |
| Did the governance gate process work smoothly? | Identify governance friction |
| Was the deployment strategy appropriate for the risk level? | Validate deployment approach |
| Were there any process steps that felt unnecessary or missing? | Refine the operating model |

3. Quality Lessons

| Question | Purpose |
|---|---|
| What was the defect rate for this initiative vs. baseline? | Track quality trends |
| Were security findings during hardening typical or unusual? | Identify emerging patterns |
| Was test coverage adequate to catch issues? | Validate testing strategy |
| Did any production issues stem from AI-generated code? | Measure AI quality impact |

4. Team Lessons

| Question | Purpose |
|---|---|
| Did the team feel well-prepared for AI-assisted development? | Assess training effectiveness |
| Were there skill gaps that affected outcomes? | Identify training needs |
| How did team collaboration change with AI tools? | Understand team dynamics |
| What would the team do differently next time? | Capture practical insights |

Lesson Documentation Format

Each lesson MUST be documented with sufficient context for others to benefit:

### Lesson: [Title]

**Category:** AI Effectiveness / Process / Quality / Team
**Initiative:** [Name and ID]
**Date:** [Date]
**Severity:** High / Medium / Low

**Observation:**
[What happened or was observed]

**Impact:**
[How this affected the initiative's outcome]

**Root Cause:**
[Why this happened]

**Recommendation:**
[What should be done differently in the future]

**Action Item:**
[Specific action, owner, and timeline]
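
Teams that store lessons in a machine-readable Lessons-Learned Repository may find it useful to mirror the same fields in a structured record. The following is a minimal sketch, assuming Python-based tooling; the field names simply follow the template above and are not a prescribed AEEF schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Literal

@dataclass
class Lesson:
    """Structured mirror of the lesson documentation template."""
    title: str
    category: Literal["AI Effectiveness", "Process", "Quality", "Team"]
    initiative: str      # Name and ID of the initiative
    recorded_on: date
    severity: Literal["High", "Medium", "Low"]
    observation: str     # What happened or was observed
    impact: str          # How this affected the initiative's outcome
    root_cause: str      # Why this happened
    recommendation: str  # What should be done differently in the future
    action_item: str     # Specific action, owner, and timeline
```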

Improvement Recommendations

Generating Recommendations

Each PIR MUST produce at least two actionable improvement recommendations. Recommendations SHOULD be specific, measurable, and assignable:

| Recommendation Type | Example | Owner |
|---|---|---|
| Process improvement | "Add a dependency validation step to the hardening checklist" | Governance Lead |
| Training improvement | "Add a module on handling async patterns in AI-generated code" | Training Lead |
| Tool improvement | "Configure AI tool to prefer parameterized queries by default" | Platform Engineering |
| Prompt improvement | "Create a domain-specific prompt for payment processing code" | Prompt Engineering Specialist |
| Governance improvement | "Reduce Tier 2 governance review turnaround from 2 days to 1 day" | Governance Lead |
| Metric improvement | "Add AI-attributed defect tracking to the sprint dashboard" | Metrics Analyst |

Recommendation Prioritization

Recommendations MUST be prioritized using the impact/effort framework from Continuous Improvement:

| Priority | Impact | Effort | Action |
|---|---|---|---|
| Quick win | High | Low | Implement within 2 weeks |
| Strategic | High | High | Schedule for next quarter's improvement backlog |
| Fill-in | Low | Low | Include in the next batch improvement cycle |
| Consider | Low | High | Evaluate whether the ROI justifies the effort |
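
The same classification can be expressed as a small helper so that recommendations logged during the PIR are bucketed consistently. This is a minimal sketch assuming binary high/low ratings; the labels come from the table above, but the function itself is illustrative rather than part of the framework.

```python
def prioritize(impact: str, effort: str) -> str:
    """Map impact/effort ratings to the priority buckets above.

    impact and effort are expected to be "high" or "low".
    """
    impact, effort = impact.lower(), effort.lower()
    if impact == "high" and effort == "low":
        return "Quick win"   # Implement within 2 weeks
    if impact == "high" and effort == "high":
        return "Strategic"   # Next quarter's improvement backlog
    if impact == "low" and effort == "low":
        return "Fill-in"     # Next batch improvement cycle
    if impact == "low" and effort == "high":
        return "Consider"    # Evaluate ROI before committing
    raise ValueError(f"Unexpected rating: impact={impact!r}, effort={effort!r}")

# Example: a hypothetical recommendation rated high impact, low effort
assert prioritize("High", "Low") == "Quick win"
```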

Feedback Integration

Where PIR Outputs Go

PIR outputs MUST be integrated into the following organizational processes:

| Output | Destination | Responsible |
|---|---|---|
| Lessons learned | Lessons-Learned Repository per Knowledge Sharing | PIR facilitator |
| Effective prompts | Organizational Prompt Library | Initiative developers |
| Process improvements | Continuous Improvement backlog | AI Engineering Excellence team |
| Training recommendations | Training curriculum update queue | Training Lead |
| Governance recommendations | Governance review agenda | Governance Lead |
| Metrics recommendations | Dashboard improvement backlog | Metrics Analyst |
| Tool recommendations | Tool evaluation pipeline | Platform Engineering Lead |

Feedback Loop Verification

The AI Engineering Excellence team MUST verify quarterly that PIR recommendations are being actioned:

  • Recommendation tracking — All recommendations are tracked in the improvement backlog with status updates
  • Implementation rate — Target: > 70% of recommendations implemented within 90 days
  • Impact verification — Implemented recommendations are measured for actual impact
  • Stale recommendation review — Recommendations older than 90 days without action are reviewed and either prioritized or closed with justification
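
The quarterly verification lends itself to light automation. The sketch below shows one possible approach, assuming recommendations can be exported from the improvement backlog with creation and implementation dates; the 70% target and 90-day window come from the checks above, but the data shape is hypothetical.

```python
from datetime import date, timedelta

WINDOW = timedelta(days=90)  # 90-day window from the verification checks

def verify_feedback_loop(recommendations: list[dict], today: date) -> dict:
    """Quarterly check of PIR recommendation follow-through.

    Each recommendation is assumed to be a dict with:
      'created'     - date the recommendation was logged
      'implemented' - date it was implemented, or None
      'closed'      - True if closed with justification
    """
    actionable = [r for r in recommendations if not r.get("closed")]
    implemented_in_window = [
        r for r in actionable
        if r["implemented"] is not None
        and r["implemented"] - r["created"] <= WINDOW
    ]
    stale = [
        r for r in actionable
        if r["implemented"] is None and today - r["created"] > WINDOW
    ]
    rate = (
        len(implemented_in_window) / len(actionable) if actionable else 1.0
    )
    return {
        "implementation_rate": rate,
        "meets_target": rate > 0.70,  # Target: > 70% within 90 days
        "stale_for_review": stale,    # Prioritize or close with justification
    }
```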

PIR Meeting Structure

Participants

| Role | Attendance | Purpose |
|---|---|---|
| Initiative Tech Lead | REQUIRED | Present technical findings |
| Initiative developers | REQUIRED | Share hands-on experience |
| Product Owner | REQUIRED (Business PIR) | Evaluate business outcomes |
| Security reviewer (if involved) | RECOMMENDED | Share security observations |
| Governance Lead | RECOMMENDED | Assess governance effectiveness |
| Team Champion | RECOMMENDED | Capture insights for the community |

Agenda (60-90 minutes)

| Time | Activity | Facilitator |
|---|---|---|
| 10 min | Review business intent and success criteria | Product Owner |
| 15 min | Present outcomes measurement results | Tech Lead |
| 20 min | Discuss lessons learned (all four dimensions) | Facilitator (rotating role) |
| 15 min | Generate improvement recommendations | All |
| 10 min | Prioritize recommendations and assign owners | Tech Lead |
| 10 min | Summarize action items and next steps | Facilitator |

PIR Ground Rules

  • Blameless — The PIR focuses on process and outcomes, not individual performance
  • Evidence-based — Claims are supported by data from metrics, logs, or documented observations
  • Forward-looking — The focus is on what to improve, not what went wrong
  • Time-boxed — PIRs MUST NOT exceed 90 minutes; extended discussion is taken offline
  • Documented — The PIR report is published within 3 business days of the meeting

The Post-Implementation Review completes the Operating Model Lifecycle and connects it back to the beginning. The insights captured here improve future Business Intent definitions, refine AI Exploration practices, sharpen Human Hardening checklists, and calibrate Governance Gate criteria. This feedback loop is what transforms the AEEF from a static framework into a living, improving system.