AI Readiness Assessment: How to Evaluate Whether Your Organization Is Prepared for AI

Run a scored AI readiness assessment your CEO will trust, built on a behavioral rubric, a maturity model, and a complete checklist across all five pillars.

Posted April 17, 2026

Your executive team announced an AI mandate last month. Someone in the C-suite said the words "AI readiness" in an all-hands, and now you're personally accountable for measuring something that spans multiple domains, most of which you don't control. You've been searching for "AI readiness assessment" because you need a framework you can actually use.

This article provides a complete AI readiness assessment framework you can run with your leadership team this week. It covers the five standard pillars, yes. But it also includes a scored, behavioral rubric for people readiness with the same evaluative rigor applied to data quality and infrastructure. It's a decision-making document you can present to your CEO with a quantified gap analysis and a prioritized action plan for 2026 and beyond.

Read: AI Change Management: How to Lead Your Organization Through the AI Transition

What an AI Readiness Assessment Actually Measures (And What Most Get Wrong)

An AI readiness assessment is a scored evaluation across organizational dimensions that produces a specific readiness level, identifies prioritized gaps, and maps those gaps to interventions. The output is a decision-making document. Done well, it tells leadership exactly where the organization stands, which gaps matter most, and what the next 90 days should look like.

The standard framework covers five pillars:

  1. Strategy and leadership - Whether AI initiatives have executive sponsorship, documented use cases, and allocated budget, or just a slide deck.
  2. Data foundations - Whether your data is accurate, accessible, governed, and structured for AI consumption.
  3. Technology infrastructure - Whether your cloud, compute, and integration architecture can support AI workloads without months of provisioning.
  4. People and skills - Whether your workforce can actually use AI tools to do their jobs differently.
  5. Governance and ethics - Whether you have policies, accountability structures, and risk frameworks for AI deployment.

Every assessment framework on the market covers these five. The question is how rigorously they cover them.

Here's the pattern: data quality gets concrete benchmarks. Infrastructure gets technical checklists. Strategy gets evaluation questions with clear yes/no answers. Then you reach the people pillar, and the framework goes vague. "Assess AI skills across the organization." "Evaluate workforce readiness." "Consider training needs." No scoring criteria. No behavioral indicators. No way to distinguish between a team that completed an AI course and a team that actually changed how they work.

This matters because the people dimension is where most AI deployments fail. McKinsey's research is unambiguous: of all organizational changes linked to generative AI success, fundamental workflow redesign ranks highest in correlation with business impact. Yet only 21% of organizations using generative AI have redesigned at least some workflows. Nearly 80% are layering AI on top of existing processes without rethinking how work actually flows. The technology gets deployed. The dashboards look good. Adoption stalls because employees default to their old workflows, and the AI investment sits unused.

The framework that follows gives the people dimension the same evaluative rigor as data and infrastructure. You'll score your workforce readiness against concrete behavioral descriptors, not vague prompts. And you'll be able to present that score to your leadership as a quantified gap.

What the AI Readiness Index Data Tells Us About Where Organizations Actually Stand

Before running your own assessment, it's worth understanding the broader landscape because the AI readiness index data from 2025 reveals patterns that apply directly to your internal evaluation.

The 2025 index, based on responses from 8,000+ business leaders across 30 markets, introduced a new concept: AI Infrastructure Debt, the silent accumulation of shortcuts, deferred upgrades, and underfunded architecture that erodes AI value over time.

Key findings that should calibrate your assessment:

  • Only 13% of organizations are "Pacesetters," fully positioned to capture AI's business value. This figure has not changed in three years.
  • Just 15% of organizations have networks fully ready for AI workloads. Among Pacesetters, the figure is 71%.
  • Only 19% of organizations have a fully centralized data infrastructure. Among Pacesetters: 76%.
  • Pacesetters are 4× more likely to move AI pilots into production and 40% more likely to report measurable value.
  • 75% of Pacesetters report AI proficiency across their staff. Among all other organizations: 16%.

The readiness index data tells you which gaps to prioritize: data centralization, skills proficiency, and governance are the three dimensions most consistently separating high performers from everyone else. Use these benchmarks when you run your internal readiness assessment. They give your scores external validity.

The Five Pillars of AI Readiness: A Complete Assessment Framework

Before diving into the people-readiness rubric, you need a working assessment of the other four pillars. What follows covers each dimension in enough depth to evaluate it competently: not exhaustively, but enough to know where you stand and what to prioritize.

Pillar 1: Strategy & Leadership

This pillar measures whether AI has executive commitment that translates to action, or just rhetorical support that produces no change. AI high performers are 3× more likely than peers to have senior leaders who demonstrate ownership of and active commitment to AI initiatives. That gap in leadership engagement is one of the strongest predictors of whether your AI strategy delivers business value.

Assessment questions:

  • Does your AI strategy name specific use cases with projected ROI, or does it say "explore AI opportunities"?
  • Has your CEO communicated the AI mandate to the full organization, or only to direct reports?
  • Is there a dedicated budget for AI initiatives, or are they competing for discretionary funds?
  • Are AI KPIs tied to any executive's performance review?

What good looks like: A documented AI strategy tied to 2-3 specific business outcomes, named executive sponsors for each initiative, and a budget allocated for the current fiscal year. 99% of Pacesetters have a well-defined AI strategy, compared to 58% of all other organizations. You can't execute what you haven't planned.

Common failure mode: An AI strategy document exists, was presented to the board, and lives in a SharePoint folder nobody has opened in four months. No operational funding. No timeline. No owner below the C-suite.

Pillar 2: Data Foundations

This pillar measures whether your data can actually feed AI systems: whether it is accurate, accessible, governed, and structured. Data governance is the load-bearing foundation for every AI initiative you want to run.

Data is also the single biggest weakness across organizations: 81% of respondents admitted their data exists in silos, and only 19% have a fully centralized data infrastructure. This is the gap where AI projects go to die.

Assessment questions:

  • What is the documented accuracy rate of your production datasets? (If you don't know, that's your answer.)
  • Is data governance centralized with clear ownership, or siloed by department?
  • Can the teams that need data access do so without filing IT tickets and waiting days?
  • Do you have documented data lineage? Can you trace where the data came from and how it was transformed?

What good looks like: ≥85% data accuracy in production datasets, documented data lineage, a centralized data catalog, and self-service access for authorized users. Strong data infrastructure is the prerequisite for integrating AI into any business process at scale.

Common failure mode: The data science team has a clean, well-governed data infrastructure. The sales team works from spreadsheets exported from Salesforce last quarter. The marketing team has a different customer list from finance.

Pillar 3: Technology Infrastructure

This pillar measures whether your technical architecture can support AI workloads at the speed the business demands. As AI systems grow more sophisticated, particularly with the rise of agentic AI, infrastructure requirements are accelerating faster than most organizations anticipated.

83% of organizations plan to deploy AI agents within a year, and nearly 40% expect these agents to work alongside employees in the near term. Yet only 15% have networks flexible enough to support current AI workloads, let alone the demands of autonomous machine learning systems.

Assessment questions:

  • Can your cloud infrastructure provision ML training environments within 24 hours without IT tickets?
  • Do your core systems (CRM, ERP, HRIS) have API layers that allow data integration with AI solutions?
  • Is your security architecture designed for AI workloads, including model access controls, data encryption, and audit trails?
  • Can you scale compute capacity without multi-week procurement cycles?

What good looks like: Cloud infrastructure that scales ML workloads on demand, an existing API layer connecting core systems, and security protocols that don't treat every AI initiative as a novel risk requiring a six-month review.

Common failure mode: Legacy on-premise systems with no API access. Every AI project requires custom data exports, manual file transfers, and IT involvement at every step. The technology to support AI exists; it just takes eight weeks to provision anything. This is the definition of AI Infrastructure Debt: the gap between where your infrastructure is and where it needs to be.

Pillar 4: Governance & Ethics

This pillar measures whether you have the policies and accountability structures to deploy AI responsibly. AI ethics is an operational risk management function. 47% of organizations have already experienced at least one negative consequence from generative AI. The organizations that hadn't built governance frameworks ahead of deployment were the ones caught without a plan.

In 2026, governance is equally important from a regulatory standpoint. The EU AI Act's obligations are phasing into enforcement, and organizations operating globally need compliance frameworks that cover both existing regulations and the emerging requirements taking shape across North America and Asia.

Assessment questions:

  1. Do you have a documented AI use policy that specifies acceptable and prohibited uses?
  2. Has legal counsel reviewed your AI policies within the last 12 months?
  3. Is there a named owner for AI ethics and risk within the organization?
  4. Do you have a bias testing protocol for any customer-facing AI applications?
  5. If you operate in the EU, have you assessed compliance with the EU AI Act?

What good looks like: A documented AI use policy reviewed by legal, a named AI ethics owner with individual accountability (not just a committee), and a bias audit process for customer-facing applications.

Common failure mode: No policy exists. Or a policy exists but was drafted by IT without legal input. Or a policy was created two years ago for a specific project and hasn't been updated since. Risk management after the fact costs significantly more than governance built in from the start.

Pillar 5: People & Skills

This is the most consequential pillar and the most under-assessed. It requires its own framework, which follows in the next section.

The People Readiness Rubric: Scoring Your Organization's AI Capability

This is the section that doesn't exist in any other framework on the market. What follows is a scored rubric covering six dimensions of people readiness, each with a 1-5 scale and concrete behavioral descriptors at every level.

The rubric distinguishes between theoretical knowledge (someone completed a course) and applied capability (someone changed how they work). This distinction is the whole point. Most AI training programs produce the former. Organizational AI readiness requires the latter. It's the difference between team members who know what generative AI can do and teams that have redesigned their workflows around what it actually does.

McKinsey found that organizations with strong AI adoption demonstrate a specific set of management practices: role-based capability training, senior leaders who model AI use, embedded AI in business processes, and clearly tracked KPIs for AI solutions. The rubric below operationalizes these practices at the team level so you can assess where your organization actually stands.

Read: AI Training for Employees: How to Build a Program That Actually Changes How Your Team Works

People Readiness Rubric (1-5 Scale)

Dimension 1: Leadership AI Fluency

  • Score 1: Leadership mentions AI in all-hands but cannot name a specific use case or explain how AI applies to business objectives
  • Score 2: Leadership can name AI as a priority and has assigned someone to "look into it," but no documented strategy or budget exists
  • Score 3: Leadership has identified 2-3 AI use cases with projected impact and allocated budget; the executive sponsor has been named
  • Score 4: Leadership actively reviews AI project progress, can evaluate proposals critically, and has incorporated AI metrics into planning cycles
  • Score 5: Leadership regularly uses AI tools themselves, can distinguish good AI proposals from hype, and has tied AI KPIs to executive performance reviews

Dimension 2: Team-Level AI Tool Adoption

  • Score 1: No team members regularly use AI tools in their work
  • Score 2: A few individuals experiment with AI tools (ChatGPT, Copilot, etc.) on their own initiative, often on personal accounts, because corporate hasn't approved tools
  • Score 3: 30-50% of team members use AI tools weekly, primarily for ad hoc tasks (drafting emails, summarizing documents) without workflow integration
  • Score 4: AI tools are used by most team members for recurring tasks; some workflows have been formally redesigned around AI capabilities
  • Score 5: AI tools are embedded in core workflows across 80%+ of the team, with documented processes for when and how to use them and quality-control checkpoints

Dimension 3: Applied AI Skills vs. Theoretical Knowledge

  • Score 1: No AI training has been provided
  • Score 2: Employees have access to AI courses (Coursera, LinkedIn Learning), but participation is voluntary and completion rates are low
  • Score 3: Most employees have completed AI awareness training, but cannot demonstrate a specific workflow they've changed using AI
  • Score 4: Employees in multiple functions can demonstrate specific tasks they now complete with AI, with anecdotal time or quality improvements
  • Score 5: Employees across functions can demonstrate specific AI-enabled workflows with measured improvements (time saved, error reduction, output quality), and regularly identify new applications

Dimension 4: Change Readiness and Resistance

  • Score 1: Active resistance - employees or managers have pushed back on AI tools, raised concerns about job security, or explicitly refused to use approved tools
  • Score 2: Skepticism dominates - employees express doubt about AI usefulness, try tools once, and abandon them; "this doesn't work for what I do" is common
  • Score 3: Passive acceptance - employees will use AI tools if directed, but revert to old methods when not monitored; no organic adoption
  • Score 4: Cautious experimentation - employees try new AI applications when suggested and provide feedback; some champions emerge organically
  • Score 5: Proactive experimentation - employees independently identify new AI applications, share findings with colleagues, and advocate for expanded AI capabilities

Dimension 5: Learning Infrastructure Maturity

  • Score 1: No dedicated AI learning budget or program
  • Score 2: AI training exists (a few courses bookmarked, occasional lunch-and-learns) but is ad hoc, optional, and not tracked
  • Score 3: A structured AI training program exists with curated content, but participation is voluntary, adoption isn't measured beyond completion, and content is generic across roles
  • Score 4: Role-specific AI learning pathways exist with applied projects; completion and adoption are tracked; some teams have received hands-on coaching
  • Score 5: Comprehensive AI learning infrastructure with role-specific pathways, applied projects, regular skills assessments, coaching support for teams transitioning to AI-integrated workflows, and measured behavior change

Dimension 6: Cross-Functional AI Collaboration

  • Score 1: AI initiatives exist in one department (usually IT or data science) with no involvement from business units
  • Score 2: IT/data science has proposed AI use cases; business units are aware but not engaged in prioritization or design
  • Score 3: A cross-functional AI committee or working group exists and meets regularly, but business units participate passively
  • Score 4: Business units actively identify AI use cases in their domains and collaborate with data/engineering as partners; joint ownership of pilot projects
  • Score 5: Business units own AI use cases in their domains, contribute to enterprise AI prioritization, and have embedded AI capabilities within their teams

Interpreting Your People Readiness Score

Calculate your average across the six dimensions.

  • Average 1.0-2.0 (Pre-Ready) - The people gap will stall any AI deployment, regardless of your data or infrastructure investment. Don't buy more technology. Don't launch another pilot. Close the people gap first. Among organizations outside the Pacesetter group, only 16% report AI proficiency among staff. This is what pre-ready looks like at scale.
  • Average 2.5-3.5 (Foundation Exists, Applied Capability Missing) - This is the most common profile for organizations running their first AI readiness assessment, and it's also where organizations get stuck. The pattern: leadership approved budget for AI training, employees completed courses, completion dashboards look good, and nothing changed. The fix isn't more content. Platform-based training produces score level 3 (awareness, completion, theoretical knowledge). Getting to score level 4 or 5 requires applied coaching that works with specific teams on specific workflows and measures behavioral outcomes.
  • Average 4.0-5.0 (Deployment-Ready) - Your organization has the people readiness to pursue AI deployment with confidence. Your bottleneck is elsewhere, probably in the technology, data, or governance pillars. Begin with your lowest-scoring pillar rather than continuing to invest in people readiness.

The distinction between score level 3 and score level 5 is the strategic insight this rubric exists to surface. Score 3 is what you get from investing in awareness. Score 5 is what you get from investing in application. The interventions are different. The timelines are different. And most organizations don't realize they're stuck at 3 until they've spent a year wondering why their AI initiatives don't scale.
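
If you prefer to tally the rubric in a spreadsheet or a few lines of code, the sketch below shows one way to do it: average six dimension scores and map the result to the bands above. The dimension names, the example scores, and the handling of averages that fall between the published bands are illustrative assumptions, not part of the rubric itself.

# Minimal sketch: average six people-readiness dimension scores (1-5 each)
# and map the result to the interpretation bands described above.
# Averages that fall between the published bands (e.g., 2.2 or 3.8) are
# assigned to the higher adjacent band here; that boundary handling is an
# assumption, not something the rubric specifies.

def people_readiness_band(scores: dict[str, int]) -> tuple[float, str]:
    """Return the average dimension score and its interpretation band."""
    if len(scores) != 6 or any(not 1 <= s <= 5 for s in scores.values()):
        raise ValueError("Expected six dimension scores between 1 and 5")
    avg = sum(scores.values()) / len(scores)
    if avg <= 2.0:
        band = "Pre-Ready: close the people gap before buying more technology"
    elif avg <= 3.5:
        band = "Foundation exists, applied capability missing: invest in applied coaching"
    else:
        band = "Deployment-ready: the bottleneck is likely another pillar"
    return round(avg, 2), band

# Hypothetical example scores for the six dimensions
example = {
    "leadership_fluency": 3,
    "tool_adoption": 2,
    "applied_skills": 3,
    "change_readiness": 3,
    "learning_infrastructure": 2,
    "cross_functional_collaboration": 2,
}
print(people_readiness_band(example))  # (2.5, "Foundation exists, ...")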

AI Maturity Levels: Where Your Organization Stands Today

Your pillar scores combine into a composite picture. The maturity model below maps that picture to a named level so you can communicate clearly with leadership about where you are and what comes next.

Level 1: Unaware (Composite Score 1.0-1.5)

No formal AI strategy. Data governance is nonexistent or siloed. No AI tools in sanctioned use. No skills development beyond what individuals seek on their own.

You know you're here when the AI conversation is happening in the C-suite but has not reached team leads or individual contributors. If you asked a random manager, "What's our AI strategy?" they'd say, "I don't think we have one" or "I heard something at the all-hands."

Priority actions: Appoint an AI readiness owner with cross-functional authority. Conduct this baseline assessment. Identify one low-risk pilot use case with a willing team.

Timeline to Level 2: 3-6 months with focused effort.

Level 2: Exploring (Composite Score 1.5-2.5)

An AI strategy exists on paper. Some employees are experimenting individually, often using personal accounts because the company hasn't approved tools. Data quality is uneven. IT has governance concerns that haven't been resolved. This is the "shadow AI" stage.

You know you're here when your most AI-forward employee is using ChatGPT on their personal account because the company hasn't approved any tools. According to McKinsey, employees often use AI more than their leaders realize, which means shadow adoption is likely already happening in your organization.

Priority actions: Formalize an AI use policy (what's permitted, what's not). Audit data quality in the departments most likely to pilot AI. Approve and provision a set of AI solutions with appropriate security controls. Launch a structured pilot with one team.

Timeline to Level 3: 6-9 months.

Level 3: Developing (Composite Score 2.5-3.5)

Documented strategy with pilot AI projects underway. AI training deployed across the organization. Data governance is improving. But AI use is concentrated in 1-2 departments and hasn't scaled. Pilots succeed technically but fail to achieve adoption.

You know you're here when the data science team delivers a successful proof-of-concept, but the business unit that requested it hasn't adopted it into their workflow. The model works. Nobody uses it. This is what McKinsey calls "pilot purgatory," and two-thirds of organizations are currently here.

Priority actions: Close the adoption gap through applied coaching. Expand governance frameworks to cover new use cases before they launch. Document and share pilot learnings cross-functionally; make success visible. Invest specifically in the People & Skills pillar.

Timeline to Level 4: 9-12 months. This is the hardest transition because the blockers are behavioral, not technical.

Level 4: Operationalizing (Composite Score 3.5-4.5)

AI is deployed in multiple business functions with measurable ROI. Cross-functional collaboration on AI is routine. The organization can point to specific dollar values created by AI-driven decisions. Skills development is continuous.

You know you're here when AI-driven decisions are being made in production, and someone in the organization can tell you the dollar value those decisions have generated this quarter. You're beginning to leverage AI as a strategic capability.

Priority actions: Build AI centers of excellence to scale expertise. Implement continuous technical skills and applied skills assessment to identify emerging gaps. Begin advanced capability building like fine-tuning models, building custom AI solutions, and evaluating build-vs-buy decisions. Begin evaluating agentic AI frameworks for high-volume, multi-step workflows.

Timeline to Level 5: 12-18 months.

Level 5: Transforming (Composite Score 4.5-5.0)

AI is embedded in strategic decision-making, competitive differentiation, and organizational culture. The organization attracts talent because of its AI capability. Business value from AI is measured and reported at the enterprise level.

Examples in practice: Netflix attributes over 80% of its streamed content to AI-driven personalized recommendations, a business value from AI that is built directly into the customer experience. JPMorgan Chase's COiN platform processes commercial credit agreements in seconds, work that previously required 360,000 lawyer-hours annually.

Priority actions: Maintain innovation velocity. Explore agentic AI use cases where autonomous, multi-step AI systems can take on entire workflows. Share learnings externally through publications and case studies. Compete for AI specialists through employer brand.

Read: How to Become an AI Specialist

A note on where most organizations actually land in 2026: If you're conducting your first AI readiness assessment, you will probably score between Level 2 and Level 3. Only 13% of organizations qualify as truly AI-ready. The goal of the assessment is not to achieve Level 5 immediately. It's to know exactly where you are and what the next 90 days should look like.

The Complete AI Readiness Checklist: Score Every Dimension

What follows is a complete, scorable AI readiness checklist across all five pillars. Each item uses a 0-1-2 scale. Work through it with your cross-functional team, tally the results, and map your total to the maturity levels above.

Strategy & Leadership (4 items, max 8 points)

  • Our AI strategy names specific use cases tied to business outcomes with projected ROI
    0: No documented AI strategy exists
    1: A strategy document exists, but use cases are vague ("explore AI for efficiency")
    2: Strategy names 2+ specific use cases with projected impact and timelines
  • AI initiatives have named executive sponsors with accountability
    0: No executive ownership; AI is "everyone's job"
    1: A senior leader has been assigned AI generally, but without specific initiative ownership
    2: Each AI initiative has a named executive sponsor who reviews progress monthly
  • Budget has been allocated specifically for AI initiatives
    0: No dedicated AI budget; AI projects compete for discretionary funds
    1: Some budget exists, but it is insufficient for stated goals
    2: The budget allocated is sufficient to fund the priority initiatives in the strategy
  • AI progress is reviewed at the executive level regularly
    0: AI is not on the executive agenda
    1: AI is discussed occasionally when specific issues arise
    2: AI progress is a standing agenda item in executive meetings (monthly or more frequent)

Data Foundations (5 items, max 10 points)

  • Our production data has documented quality scores (accuracy, completeness, timeliness)
    0: We don't measure data quality systematically
    1: Quality is measured for some datasets, but not standardized
    2: Quality scores documented for production datasets; average accuracy ≥85%
  • Data governance is centralized with clear ownership
    0: No formal data governance; each department manages its own data
    1: Governance exists, but is inconsistent across departments
    2: Centralized data governance with documented ownership, policies, and stewardship
  • Teams can access the data they need without extended IT processes
    0: Data access requires IT tickets and takes days or weeks
    1: Self-service exists for some data; other data requires IT involvement
    2: Self-service data access for authorized users across priority datasets
  • Data lineage is documented (we can trace where data came from and how it was transformed)
    0: No documentation of data lineage
    1: Lineage documented for some critical datasets
    2: Lineage is documented comprehensively in a data catalog
  • Data from different systems can be integrated for AI use cases
    0: Data lives in silos that can't be easily connected
    1: Integration is possible but requires custom engineering for each use case
    2: Data integration infrastructure exists (data lake, warehouse, or mesh) with established pipelines

Technology Infrastructure (4 items, max 8 points)

  • Our cloud infrastructure can provision ML environments quickly
    0: No cloud infrastructure or on-premise only
    1: Cloud exists, but ML provisioning takes weeks and requires IT involvement
    2: ML environments can be provisioned within 24-48 hours without tickets
  • Core systems have APIs that enable AI integration
    0: Legacy systems with no API access
    1: APIs exist for some systems; others require custom integration
    2: An API layer exists across core systems (CRM, ERP, HRIS) with documentation
  • Security architecture supports AI workloads and machine learning pipelines
    0: No AI-specific security protocols
    1: The security team has concerns about AI that haven't been resolved
    2: AI security protocols documented (model access controls, data encryption, audit trails)
  • Compute capacity can scale with demand
    0: Fixed capacity; scaling requires procurement cycles
    1: Can scale, but the process takes weeks
    2: On-demand scaling available for AI workloads

People & Skills (6 items, max 12 points)

  • Leadership can articulate how AI applies to business objectives with specific examples
    0: Leadership mentions AI but cannot name specific applications
    1: Leadership can name AI as a priority and has general awareness
    2: Leadership demonstrates AI fluency: can evaluate proposals, name specific use cases, and explain business impact
  • More than 50% of team members use AI tools weekly in their actual workflows
    0: Minimal AI tool usage across the organization
    1: Some employees experiment with AI tools, primarily for ad hoc tasks
    2: AI tools are used weekly by the majority of employees with documented workflows
  • Employees can demonstrate specific workflows they've changed using AI
    0: No evidence of workflow change from AI
    1: Anecdotal examples of AI-enabled workflow changes exist
    2: Multiple teams can demonstrate specific AI-enabled workflows with measured improvements
  • The organization shows proactive experimentation rather than resistance to AI adoption
    0: Active resistance or widespread skepticism
    1: Passive acceptance; employees use AI when directed, but don't seek it out
    2: Organic experimentation; employees identify new AI applications and share learnings
  • AI training includes applied projects with role-specific pathways
    0: No AI training or ad hoc courses only
    1: Structured AI training exists, but it is generic and completion-focused
    2: Role-specific AI learning with applied projects, coaching support, and measured adoption
  • Business units actively participate in AI initiative prioritization and design
    0: AI is siloed in IT/data science
    1: Business units are consulted on AI initiatives
    2: Business units co-own AI initiatives in their domains and collaborate cross-functionally

Read: AI Upskilling: Why It’s Necessary & How to Get Started and AI Upskilling: The Best Firms, Platforms, and Programs for Training Your Workforce

Governance & Ethics (4 items, max 8 points)

  • We have a documented AI use policy specifying acceptable and prohibited uses
    0: No AI use policy exists
    1: Policy exists, but is informal or not widely communicated
    2: Documented AI use policy, reviewed by legal, communicated across the organization
  • A named individual owns AI ethics and risk management (not just a committee)
    0: No designated AI ethics ownership
    1: A committee exists, but no individual accountability
    2: Named AI ethics owner with clear accountability and authority
  • We have bias testing protocols for customer-facing AI applications
    0: No bias testing process
    1: Bias is considered on an ad hoc basis
    2: Documented bias testing protocol applied before any customer-facing AI deployment
  • Our AI governance has been reviewed within the last 12 months (or aligns with current regulatory requirements, including the EU AI Act if applicable)
    0: Governance is outdated or has never been reviewed
    1: Some reviews have been conducted, but not comprehensively
    2: Governance reviewed within 12 months; regulatory compliance assessed

Scoring Interpretation

  • 0-12: Level 1 (Unaware). Foundational gaps across most pillars. Start with strategy and governance basics before any technology investment.
  • 13-22: Level 2 (Exploring). Early-stage readiness: formalize policy, improve data quality, launch controlled pilots with willing teams.
  • 23-32: Level 3 (Developing). The foundation exists, but adoption lags. Focus on people readiness and scaling what works before expanding.
  • 33-40: Level 4 (Operationalizing). Strong position; build centers of excellence and a continuous improvement infrastructure.
  • 41-46: Level 5 (Transforming). AI is mature. Maintain velocity, develop AI specialists, and invest in advanced agentic AI capabilities.

Which gaps to prioritize first:

People & Skills and Strategy & Leadership are the dimensions most correlated with successful AI deployment. Weight your pillar scores accordingly. If People & Skills is your lowest-scoring pillar, address it before investing further in technology. The technology will sit unused without the human capability to use it. The skills gap is the number one reason AI projects don't produce business value.
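
For teams that want to compute the checklist score directly, here is a minimal sketch that tallies the 0-1-2 item scores by pillar, maps the total (out of 46) to a maturity level from the interpretation table above, and flags the weakest pillar. The pillar names and item counts follow the checklist; the example scores are invented for illustration.

# Minimal sketch (hypothetical inputs): tally 0-1-2 checklist scores by pillar,
# map the total to a maturity level, and flag the lowest-scoring pillar.

MAX_ITEMS = {  # items per pillar (each item scored 0-2)
    "Strategy & Leadership": 4,
    "Data Foundations": 5,
    "Technology Infrastructure": 4,
    "People & Skills": 6,
    "Governance & Ethics": 4,
}

LEVELS = [  # (upper bound of total score, maturity level)
    (12, "Level 1 (Unaware)"),
    (22, "Level 2 (Exploring)"),
    (32, "Level 3 (Developing)"),
    (40, "Level 4 (Operationalizing)"),
    (46, "Level 5 (Transforming)"),
]

def summarize(scores: dict[str, list[int]]) -> None:
    """Print the composite score, maturity level, and weakest pillar."""
    totals = {pillar: sum(items) for pillar, items in scores.items()}
    overall = sum(totals.values())
    level = next(name for bound, name in LEVELS if overall <= bound)
    lowest = min(totals, key=lambda p: totals[p] / (2 * MAX_ITEMS[p]))
    print(f"Total: {overall}/46 -> {level}")
    print(f"Lowest pillar (by share of max points): {lowest}")

# Hypothetical example: one 0-2 score per checklist item
summarize({
    "Strategy & Leadership": [1, 1, 0, 1],
    "Data Foundations": [1, 0, 1, 0, 1],
    "Technology Infrastructure": [1, 1, 0, 1],
    "People & Skills": [1, 0, 0, 1, 0, 1],
    "Governance & Ethics": [0, 1, 0, 0],
})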

How to Run an AI Readiness Assessment: The Cross-Functional Process

You have the framework. You have the readiness checklist. Now you need to actually run the assessment inside your organization, which means navigating stakeholder dynamics, synthesizing divergent perspectives, and producing a result that leadership will trust.

Who Must Be in the Room

Five stakeholder groups are essential. If any is absent, your assessment will have a blind spot that undermines its credibility:

  • IT/Engineering - Owns infrastructure, data architecture, and technical feasibility. Will score the technology and data pillars.
  • HR/L&D - Owns people readiness, training programs, and change management. Will score the people pillar.
  • Legal/Compliance - Owns governance, risk management, and regulatory compliance. Will validate the governance pillar and identify risks others miss.
  • Business Unit Leaders (2-3 from priority areas) - Own use cases and adoption. Will ground-truth whether AI is actually being used in their teams.
  • Finance - Owns budget validation and ROI accountability. Will pressure-test whether proposed investments make business sense.

The Three-Step Process

Step 1: Pre-work (1 week before assessment session)

Distribute the readiness checklist to each stakeholder group. Ask them to score independently before meeting. The goal is to surface divergent perspectives before you're in the room together.

Send the checklist with this framing: "Score each item based on what you directly observe, not what you believe should be true or what another department has told you. Where you're uncertain, note your assumption. We'll reconcile differences in the assessment session."

One additional step that pays enormous dividends: send a brief 5-question survey to the broader organization before the session. Ask: How often do you use AI tools in your work? Name one task you complete with AI that you previously did manually. What's your biggest barrier to using AI more? The responses give you behavioral data on the people pillar that HR often lacks, and they calibrate your readiness checklist scoring with direct evidence.

Read: How to Use AI to Automate Tasks & Be More Productive

Step 2: Assessment session (2-3 hours)

Bring stakeholders together with the pre-work scores visible. The agenda:

  • 0:00-0:15: Review purpose and process. Emphasize that the goal is an accurate current-state assessment, not consensus for its own sake. Disagreement is data.
  • 0:15-1:15: Walk through each pillar. For each item, surface where scores diverge. When IT scores data quality at 2 and the sales team scores data access at 0, dig in. The explanation is often more valuable than the score.
  • 1:15-1:45: Align on final scores. For items where perspectives genuinely differ, document both. Example: "IT scores data quality at 2 based on the data warehouse. Sales scores data quality at 0 based on CRM accessibility. Final score: 1, with a note that data quality is inconsistent across systems."
  • 1:45-2:15: Identify the 2-3 highest-priority gaps. Use this question: "If we could only close one gap before our next AI initiative, which would have the greatest impact?"
  • 2:15-2:30: Assign action items for synthesis and next steps.

Step 3: Synthesis (within 48 hours of session)

Compile the composite score by pillar and overall. Map to maturity level. Draft the action plan with this structure:

  • Current state: One-paragraph summary of maturity level and what it means for your organization's AI journey.
  • Priority gaps: The 2-3 gaps identified in the session, with specific evidence.
  • Recommended actions: For each gap, 1-3 specific interventions with owner, timeline, and resource requirements.
  • What we're not recommending (and why): Explicitly name the investments that are premature given current readiness. This builds credibility. Leadership sees that you're sequencing investment to where it will produce returns.

Presenting to Leadership

Your assessment needs to drive decisions, not just discussion. Structure your presentation around three questions:

  1. Where are we? Lead with the maturity level. "We are at Level 2, Exploring. This means we have early-stage capabilities but significant gaps that will prevent AI initiatives from scaling."
  2. What's blocking us? Name the 2-3 priority gaps with specificity. Don't say "people readiness is low." Say "Our People & Skills pillar scored 4 out of 12. Fewer than 30% of employees use AI tools in their workflows, and no team can demonstrate an AI-enabled process with measurable improvements."
  3. What should we do? Lead with the highest-priority intervention. Be specific about resources, timeline, and expected outcomes. "We recommend a 12-week applied AI coaching engagement with the marketing and operations teams, focused on embedding AI tools into three specific workflows. Expected outcome: demonstrated workflow changes that can be documented and scaled to other teams."

Common Process Failures and How to Avoid Them

  • Failure mode 1: Stakeholders score only their own domain. Solve by requiring every stakeholder to score every pillar based on what they directly observe.
  • Failure mode 2: The session produces averaged scores that mask important disagreements. Solve by documenting divergent perspectives as findings, not errors to resolve.
  • Failure mode 3: The assessment becomes a planning exercise with no follow-through. Solve by ending with named owners, specific timelines, and a follow-up review date (typically 30 days out).
  • Failure mode 4: The people pillar gets scored generically because HR doesn't have behavioral data on AI tool usage. Solve by running the pre-session survey described above.

The Strategic Insight No Framework Tells You

The assessment framework is complete. The checklist is scorable. The maturity model is clear. What remains is the strategic reality that separates organizations that successfully transform with AI from those that perpetually experiment with it.

The tools exist. The data can be cleaned. The infrastructure can be provisioned. The AI systems are better than they've ever been. None of that is the bottleneck. The bottleneck is whether your workforce will change how it works, and that change doesn't come from completing a course. It comes from being shown exactly how AI applies to the specific workflows they run every day, and from being supported in making the transition.

That right there is a coaching problem. And it's the problem your AI readiness assessment exists to surface before you spend another dollar on AI solutions that will sit unused.

The organizations that win the AI transition in 2026 and 2027 are the ones that assessed honestly, closed their foundation gaps deliberately, and invested in the human capability to actually use what they built. Start with the assessment. The rest follows.

Work 1:1 with an AI strategy and transformation coach. Leland connects you with vetted AI experts who work directly with you and your team on the specific workflows, skill gaps, and adoption challenges your assessment surfaces. No generic courses. No completion dashboards. Just applied coaching that produces the behavior change your AI investment requires. Browse AI Strategy & Transformation Coaches on Leland.

Prefer to learn before you commit? The Leland AI Builder Program gives you a structured path to build real AI capabilities from the ground up. Or join one of Leland's free live AI strategy events led by practitioners currently working inside AI transformations to get actionable insights before your next move!


FAQs About AI Readiness Assessments

Can a small or mid-sized business actually use an AI readiness assessment, or is this only for enterprises?

  • Any organization deploying or planning to deploy AI, regardless of size, benefits from an assessment, because the gaps that kill AI projects (unclear strategy, poor data access, untrained staff) exist at every scale.

We already use AI tools like Copilot and ChatGPT across the company. Does that mean we're AI-ready?

  • Tool adoption is the starting point of AI readiness. What matters is whether those tools have changed how your team actually works, not just what's installed on their laptops.

How often should we reassess our AI readiness once we've done it the first time?

  • Most organizations benefit from reassessing every six months, especially during active, successful AI implementation, since readiness gaps shift quickly as technology, regulation, and workforce capability all evolve simultaneously.

Who should own the AI readiness assessment inside the organization? IT, HR, or someone else?

  • Ownership works best with a cross-functional lead, often a Chief AI Officer, Head of Digital Transformation, or a senior HR or operations leader, because no single department controls all five readiness pillars.

What's the difference between being AI-ready and having an AI strategy?

  • An AI strategy defines where you want to go. An AI readiness assessment tells you honestly whether your organization has what it takes to get there.
