
Risk Assessment Scaling

This section covers how to scale risk assessment processes as AI-assisted development expands beyond pilot teams. In Phase 1, risk assessment was manual and focused on pilot project selection. In Phase 2, the volume of AI-assisted projects requires structured risk categorization, automated scoring, defined escalation procedures, and active risk register management. Without scalable risk processes, governance becomes either a bottleneck that slows adoption or a rubber stamp that fails to protect the organization.

Risk Categorization

All AI-assisted development projects and activities MUST be categorized into one of four risk tiers. The risk tier determines the level of governance oversight applied.

Risk Tier Definitions

| Tier | Name | Description | Governance Level |
| --- | --- | --- | --- |
| Tier 1 | Low Risk | Internal tools, non-production code, documentation, test code. Public/Internal data only. | Standard automated governance gates |
| Tier 2 | Moderate Risk | Production services with limited customer impact. Internal data with some Confidential elements. | Automated gates + Tech Lead review |
| Tier 3 | Elevated Risk | Customer-facing services, services processing Confidential data, or services under standard regulatory compliance (SOC 2). | Automated gates + Security review + Governance Lead sign-off |
| Tier 4 | High Risk | Services processing Restricted data (PII/PHI/PCI), services under heavy regulation (HIPAA, PCI-DSS), or critical infrastructure. | Automated gates + Security review + CISO approval + enhanced monitoring |

Tier Assignment Criteria

Risk tiers are determined by evaluating four risk dimensions:

| Dimension | Tier 1 (1 point) | Tier 2 (2 points) | Tier 3 (3 points) | Tier 4 (4 points) |
| --- | --- | --- | --- | --- |
| Data sensitivity | Public/Internal | Some Confidential | Primarily Confidential | Restricted (PII/PHI/PCI) |
| Production impact | No production impact | Limited customer impact | Direct customer impact | Revenue-critical or safety-critical |
| Regulatory scope | None | Standard compliance | Regulated industry | HIPAA/PCI-DSS/FedRAMP |
| Blast radius | Single service, isolated | Multiple services, limited dependency | Cross-service, significant dependency | Organization-wide or external dependency |

Tier Calculation: Sum all dimension scores. Tier 1: 4-6 points. Tier 2: 7-9 points. Tier 3: 10-12 points. Tier 4: 13-16 points. If any single dimension scores 4, the minimum tier is Tier 3 regardless of total score.
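
Where teams automate this rule, the calculation is simple enough to express directly. The following Python sketch is illustrative only: it assumes the four dimension scores arrive as integers from 1 to 4, and the function name is not part of any existing tooling.

```python
def assign_tier(data_sensitivity: int, production_impact: int,
                regulatory_scope: int, blast_radius: int) -> int:
    """Map the four dimension scores (each 1-4) to a risk tier."""
    scores = [data_sensitivity, production_impact, regulatory_scope, blast_radius]
    if any(s not in range(1, 5) for s in scores):
        raise ValueError("each dimension score must be between 1 and 4")

    total = sum(scores)
    if total <= 6:
        tier = 1
    elif total <= 9:
        tier = 2
    elif total <= 12:
        tier = 3
    else:
        tier = 4

    # Any single dimension at 4 points forces a minimum of Tier 3.
    if max(scores) == 4:
        tier = max(tier, 3)
    return tier

# Example: Restricted data (4) on an otherwise low-risk project (2, 2, 1)
# totals 9 points (Tier 2), but the single 4 raises the result to Tier 3.
assert assign_tier(4, 2, 2, 1) == 3
```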

Automated Risk Scoring

As the number of AI-assisted projects grows, manual risk assessment creates bottlenecks. Phase 2 MUST implement automated risk scoring that evaluates projects against defined criteria and assigns risk tiers with minimal manual intervention.

Automated Scoring Inputs

The automated scoring system SHOULD consume data from the following sources:

| Data Source | Signals Extracted | Update Frequency |
| --- | --- | --- |
| Project management tool | Project classification, team size, deadline pressure | Per sprint |
| Data catalog / classification system | Data types accessed by the project | On change |
| CI/CD pipeline | Security scan results, gate pass/fail history, deployment frequency | Per build |
| VCS | AI-assisted PR ratio, code change velocity, review patterns | Per PR |
| Incident management | Historical incident rate for the service | Daily |
| Compliance registry | Applicable regulations and compliance requirements | On change |
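
To make the hand-off between these feeds and the scoring logic concrete, one option is to normalize everything into a per-project snapshot before scoring. The dataclass below is a sketch under that assumption; every field name is illustrative and does not refer to any specific tool's API.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ProjectRiskSignals:
    """A normalized snapshot of the scoring inputs for one project."""
    project_id: str
    # Project management tool (per sprint)
    team_size: int = 0
    deadline_pressure: bool = False
    # Data catalog / classification system (on change)
    data_classifications: set[str] = field(default_factory=set)  # e.g. {"Internal", "Confidential"}
    # CI/CD pipeline (per build)
    gate_failures_last_30d: int = 0
    # VCS (per PR)
    ai_assisted_pr_ratio: float = 0.0  # 0.0-1.0
    # Incident management (daily)
    incidents_last_90d: int = 0
    # Compliance registry (on change)
    regulations: set[str] = field(default_factory=set)  # e.g. {"SOC 2", "HIPAA"}
    as_of: date = field(default_factory=date.today)
```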

Scoring Algorithm

The automated scoring system MUST implement the following logic (a code sketch follows the list):

  1. Base score calculation — Sum the four risk dimension scores based on project metadata
  2. Dynamic adjustments — Modify the base score based on runtime signals:
    • +1 if the team has been using AI tools for less than 30 days
    • +1 if the AI-assisted PR ratio exceeds 70% for the project
    • +1 if the team has had a governance gate failure in the last 30 days
    • -1 if the team has zero governance violations in the last 90 days
    • -1 if all team members have completed advanced training
  3. Tier assignment — Map the adjusted score to risk tiers using the thresholds above
  4. Override capability — The Governance Lead or CISO MAY manually override an automated tier assignment with documented justification
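
A minimal sketch of steps 1-3 in Python, assuming the base score has already been computed from the dimension table; the manual override in step 4 is a human workflow and is not coded here, and the single-dimension Tier 3 floor is covered by the tier-calculation sketch earlier. All names are illustrative.

```python
def adjusted_score(base_score: int,
                   days_using_ai_tools: int,
                   ai_assisted_pr_ratio: float,
                   gate_failures_last_30d: int,
                   violations_last_90d: int,
                   all_members_advanced_trained: bool) -> int:
    """Apply the dynamic adjustments (step 2) to the base score (step 1)."""
    score = base_score
    if days_using_ai_tools < 30:
        score += 1
    if ai_assisted_pr_ratio > 0.70:
        score += 1
    if gate_failures_last_30d > 0:
        score += 1
    if violations_last_90d == 0:
        score -= 1
    if all_members_advanced_trained:
        score -= 1
    # Clamp to the 4-16 range covered by the tier thresholds.
    return max(4, min(16, score))

def score_to_tier(score: int) -> int:
    """Step 3: map the adjusted score to a risk tier using the thresholds above."""
    if score <= 6:
        return 1
    if score <= 9:
        return 2
    if score <= 12:
        return 3
    return 4

# Example: a base score of 8 for a team new to the tools (<30 days) with a
# 75% AI-assisted PR ratio adjusts to 10, which maps to Tier 3.
assert score_to_tier(adjusted_score(8, 20, 0.75, 0, 1, False)) == 3
```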

Implementation Approach

The automated scoring system SHOULD be implemented as:

  • A scheduled job (daily) that recalculates risk scores for all active AI-assisted projects
  • A webhook-triggered assessment when new projects are registered or existing projects change classification (see the sketch after this list)
  • A dashboard integration that displays current risk tier for each project in the Expanded Metrics KPI dashboard
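
As one way to wire the webhook-triggered path, the sketch below assumes a small Flask service; the endpoint path, payload fields, and port are assumptions, and persisting the result and pushing it to the Expanded Metrics KPI dashboard are left to the surrounding platform.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def score_to_tier(score: int) -> int:
    # Same thresholds as in the scoring algorithm above.
    return 1 if score <= 6 else 2 if score <= 9 else 3 if score <= 12 else 4

@app.post("/webhooks/project-update")  # illustrative endpoint path
def project_update():
    """Reassess a project when it is registered or its classification changes."""
    payload = request.get_json(force=True)
    # Illustrative payload shape: the caller supplies the already-adjusted score.
    tier = score_to_tier(int(payload["adjusted_score"]))
    # A real integration would persist the tier, compare it with the previous
    # one to detect boundary crossings, and update the KPI dashboard.
    return jsonify({"project_id": payload["project_id"], "risk_tier": tier}), 200

if __name__ == "__main__":
    app.run(port=8080)  # port is an assumption
```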

Escalation Procedures

Clear escalation procedures ensure that identified risks are addressed by the appropriate level of authority within defined timeframes.

Escalation Matrix

| Trigger | Escalation Level | Response Time | Action Required |
| --- | --- | --- | --- |
| Automated score crosses tier boundary upward | Governance Lead | 2 business days | Review and confirm/override tier change; adjust governance |
| Security scan finds Critical vulnerability in AI-assisted code | Security Lead + Tech Lead | 4 hours | Stop deployment; investigate; remediate |
| Data leakage incident involving AI tool | CISO | 1 hour | Activate incident response; assess scope; notify stakeholders |
| Repeated governance gate failures (3+ in 30 days) | Governance Lead + Engineering Manager | 2 business days | Root cause analysis; remediation plan; possible team re-training |
| AI tool vendor security incident | CISO + Platform Engineering Lead | 4 hours | Assess impact; consider tool suspension; notify teams |
| Risk register item unmitigated beyond due date | Phase Lead | 5 business days | Re-evaluate risk; escalate to Steering Committee if needed |
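
For the alerting-system integration, the matrix can be encoded as data so that detection logic only needs to emit a trigger key. The sketch below assumes Python with illustrative trigger names; response times are plain timedeltas, so business-day handling is left to the alerting tool.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass(frozen=True)
class EscalationRule:
    notify: tuple[str, ...]   # roles paged via the organization's alerting system
    response_time: timedelta  # business-day entries are approximated as calendar days here

ESCALATION_MATRIX = {
    "tier_boundary_crossed_up": EscalationRule(("Governance Lead",), timedelta(days=2)),
    "critical_vuln_in_ai_code": EscalationRule(("Security Lead", "Tech Lead"), timedelta(hours=4)),
    "ai_tool_data_leak": EscalationRule(("CISO",), timedelta(hours=1)),
    "repeated_gate_failures": EscalationRule(("Governance Lead", "Engineering Manager"), timedelta(days=2)),
    "vendor_security_incident": EscalationRule(("CISO", "Platform Engineering Lead"), timedelta(hours=4)),
    "risk_item_past_due": EscalationRule(("Phase Lead",), timedelta(days=5)),
}

def rule_for(trigger: str) -> EscalationRule:
    """Look up who to notify and how quickly for a detected trigger."""
    try:
        return ESCALATION_MATRIX[trigger]
    except KeyError:
        raise KeyError(f"no escalation rule defined for trigger {trigger!r}") from None
```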

Escalation Process

  1. Detection — The triggering condition is detected (automated alert or human observation)
  2. Notification — The appropriate escalation level is notified via the organization's alerting system
  3. Assessment — The notified party assesses severity, scope, and impact within the response time
  4. Action — Corrective action is taken and documented
  5. Resolution — The triggering condition is resolved and verified
  6. Documentation — The incident, action taken, and resolution are documented in the risk register
  7. Lessons learned — Significant escalations are reviewed in the next Community of Practice session

Risk Register Management

The risk register is the single source of truth for all identified risks related to AI-assisted development. It MUST be actively maintained throughout Phase 2 and beyond.

Risk Register Structure

Each risk register entry MUST contain:

| Field | Description | Required |
| --- | --- | --- |
| Risk ID | Unique identifier | Yes |
| Title | Brief description of the risk | Yes |
| Category | Technical, Security, Compliance, Operational, People | Yes |
| Risk tier | Associated risk tier (Tier 1-4) | Yes |
| Likelihood | Low / Medium / High | Yes |
| Impact | Low / Medium / High / Critical | Yes |
| Risk score | Likelihood × Impact (1-12 scale) | Yes |
| Owner | Person accountable for managing this risk | Yes |
| Mitigation strategy | Description of how the risk is being mitigated | Yes |
| Mitigation status | Not Started / In Progress / Implemented / Verified | Yes |
| Due date | Target date for mitigation completion | Yes |
| Last reviewed | Date of last review | Yes |
| Residual risk | Risk level after mitigation | Yes |
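
A sketch of the entry as a data structure may help teams that keep the register in a tool rather than a spreadsheet. It assumes Likelihood maps to 1-3 and Impact to 1-4, which yields the 1-12 risk score above; the class and field names, and the example values, are illustrative.

```python
from dataclasses import dataclass
from datetime import date
from enum import IntEnum

class Likelihood(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

class Impact(IntEnum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

@dataclass
class RiskRegisterEntry:
    risk_id: str
    title: str
    category: str            # Technical, Security, Compliance, Operational, People
    risk_tier: int           # 1-4
    likelihood: Likelihood
    impact: Impact
    owner: str
    mitigation_strategy: str
    mitigation_status: str   # Not Started / In Progress / Implemented / Verified
    due_date: date
    last_reviewed: date
    residual_risk: str

    @property
    def risk_score(self) -> int:
        """Likelihood × Impact, giving a 1-12 score under these scales."""
        return int(self.likelihood) * int(self.impact)

# Illustrative entry based on one of the pre-populated risks below.
entry = RiskRegisterEntry(
    risk_id="RISK-014",
    title="AI-generated code introduces vulnerability",
    category="Security",
    risk_tier=3,
    likelihood=Likelihood.MEDIUM,
    impact=Impact.HIGH,
    owner="Security Lead",
    mitigation_strategy="SAST scanning; mandatory security review; training",
    mitigation_status="In Progress",
    due_date=date(2025, 9, 30),
    last_reviewed=date(2025, 8, 15),
    residual_risk="Medium",
)
assert entry.risk_score == 6  # Medium (2) x High (3)
```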

Common AI-Assisted Development Risks

The following risks SHOULD be pre-populated in the risk register at Phase 2 launch:

| Risk | Category | Typical Tier | Suggested Mitigation |
| --- | --- | --- | --- |
| AI tool vendor data breach | Security | Tier 3-4 | Vendor security assessment; contractual requirements; DLP controls |
| AI-generated code introduces vulnerability | Security | Tier 2-3 | SAST scanning; mandatory security review; training |
| Developers bypass governance for speed | Operational | Tier 2 | Pipeline enforcement; audit monitoring; culture building |
| AI model update changes behavior | Technical | Tier 2-3 | Pin model versions; test before updating; configuration management |
| Over-reliance on AI reduces developer skills | People | Tier 1-2 | Balanced usage guidelines; skill development programs; metrics monitoring |
| Licensing issues with AI-generated code | Compliance | Tier 2-3 | License scanning; vendor agreements; legal review |
| AI tool availability impacts productivity | Operational | Tier 1-2 | Fallback procedures; multi-tool strategy; offline capabilities |

Risk Register Review Cadence

| Review Type | Frequency | Participants | Actions |
| --- | --- | --- | --- |
| Risk register update | Weekly | Risk Lead | Update status, add new risks, close resolved risks |
| Risk review meeting | Bi-weekly | Risk Lead + Security Lead + Governance Lead | Discuss high-priority risks, review mitigations |
| Steering Committee risk report | Monthly | Risk Lead presents to Steering Committee | Strategic risk decisions, resource allocation |
| Comprehensive risk reassessment | Quarterly | All stakeholders | Full reassessment of all risks; tier recalibration |

Scalable risk assessment is what allows the organization to expand AI adoption confidently. The structures defined here — automated scoring, clear escalation, and active register management — ensure that increasing adoption does not mean increasing risk. These processes evolve further in Phase 3, where they become embedded in the Organization-Wide Policy.