The COMPEL Lifecycle

Enterprise AI transformation unfolds across six stages. Each stage has a clear purpose, mandatory artifacts, exit criteria, and handoffs to the next stage. Together, they form a continuous improvement loop from baseline assessment through operational learning.

Hexagonal diagram of the six COMPEL stages forming a continuous cycle: Calibrate, Organize, Model, Produce, Evaluate, Learn, with Learn feeding back into Calibrate.

The COMPEL Stage Cycle arranges six transformation stages in a continuous hexagonal loop: Calibrate assesses readiness, Organize structures teams and sponsorship, Model designs governance and architecture, Produce executes deployment and controls, Evaluate measures effectiveness and value, and Learn extracts insights for evolution. The Learn stage feeds directly back into Calibrate, creating the continuous improvement cycle that distinguishes COMPEL from linear implementation methodologies. Each stage includes defined inputs, activities, outputs, and quality gate criteria.

See the full Big Picture →

  1. C: Calibrate

    Secure executive sponsorship, align AI ambition with organizational risk appetite, and frame the use-case portfolio that drives transformation value. Operationally, produce a validated maturity baseline, shadow AI inventory, risk appetite statement, and prioritized use-case backlog with a measurable value thesis for every initiative across all 18 maturity domains and 10 operational readiness dimensions.

    12 activities · 12 outputs · 32 articles
  2. O: Organize

    Design the AI operating model, establish role-based governance structures, and build the organizational muscle for sustainable AI transformation. Operationally, deliver an approved operating model with a complete RACI, CoE charter, policy baseline, workforce readiness plan, and embedded agent-governance authority with defined HITL thresholds and escalation paths.

    12 activities · 10 outputs · 21 articles
  3. M: Model

    Classify AI models and systems by risk, define human validation rules and explainability requirements, and establish the control framework that governs AI behavior including autonomous agents and foundation models. Operationally, produce validated system classifications, explainability specifications, control-to-risk traceability, agent autonomy classifications, foundation-model selection scorecards, model cards, and a fine-tuning governance policy for every registered AI system.

    19 activities · 15 outputs · 49 articles
  4. P: Produce

    Execute workflow redesign, validate deployment readiness through quality gates, and activate monitoring, training, and control systems for production AI operations. Operationally, implement redesigned workflows with embedded AI, configure telemetry and kill-switches, activate all specified controls, complete training and adoption, and initiate regulatory compliance evidence collection across EU AI Act, NIST AI RMF, and ISO 42001.

    15 activities · 10 outputs · 47 articles
  5. E: Evaluate

    Conduct comprehensive reviews of KPIs, control performance, adoption, incident and risk indicators, and ROI to determine transformation effectiveness and pinpoint improvement areas. Operationally, execute KPI reviews, control-effectiveness audits, adoption surveys, incident analysis, ROI calculations, and cross-framework conformity assessments, producing gate review decision records for every active AI operation.

    17 activities · 13 outputs · 42 articles
  6. L: Learn

    Extract actionable insights from evaluation findings to update policies, capture reusable patterns, recalibrate benchmarks, and make informed scaling, retirement, or redesign decisions. Operationally, produce a policy update register, pattern library entries, recalibrated benchmark targets, scaling decision records, and a continuous-improvement backlog that feeds the next Calibrate cycle.

    9 activities · 9 outputs · 23 articles

How Do the COMPEL Lifecycle Stages Connect?

Each stage produces artifacts that become prerequisites for the next. Gates between stages enforce quality and governance. The Learn stage feeds back into Calibrate, creating a continuous improvement cycle. Start with the Calibrate stage and follow the handoffs forward, or jump directly to the stage that matches your current work.
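The stage sequence, the gates between stages, and the Learn-to-Calibrate feedback loop described above can be sketched as a simple state machine. This is a minimal illustration, not part of the COMPEL framework itself: the stage names come from the lifecycle, but the `next_stage` and `Gate` mechanics are assumptions invented here to show the loop and gate-check idea.

```python
from dataclasses import dataclass, field

# The six COMPEL stages in lifecycle order.
STAGES = ["Calibrate", "Organize", "Model", "Produce", "Evaluate", "Learn"]

def next_stage(stage: str) -> str:
    """Advance to the next stage; Learn wraps back to Calibrate."""
    i = STAGES.index(stage)
    return STAGES[(i + 1) % len(STAGES)]  # modulo closes the continuous loop

@dataclass
class Gate:
    """Illustrative quality gate: every exit criterion must pass before handoff."""
    exit_criteria: dict[str, bool] = field(default_factory=dict)

    def passes(self) -> bool:
        # An empty gate has nothing verified, so it does not pass.
        return bool(self.exit_criteria) and all(self.exit_criteria.values())

# A stage hands off only when its gate passes.
gate = Gate({"maturity baseline validated": True, "use-case backlog prioritized": True})
if gate.passes():
    current = next_stage("Calibrate")  # -> "Organize"
```

The modulo arithmetic is what makes the cycle continuous rather than linear: calling `next_stage("Learn")` returns `"Calibrate"`, mirroring the feedback loop in the diagram.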