
December 31, 2025 · 7 min read

Collaborative Evaluation + Decision-Making With Your Leadership Team

Choosing the Next Right AI Project: How to Evaluate Options Without Politics, Panic, or Shiny Objects Slowing You Down

(Jeff Richardson, Founder, Empowered Alliances)

______________________________________________

You did the hard work. You got the leadership team aligned on a stuck metric. You mapped the system. You surfaced real friction in the process.

Now you’ve got a list of possible AI opportunities… and the meeting goes sideways.

  • The CFO wants the quickest ROI and lowest risk.

  • The COO/VP Ops wants something that won’t disrupt the quarter.

  • The CIO is worried about data quality, security, and integration.

  • Someone saw a vendor demo and now wants “the thing that looks like ChatGPT.”

This is the moment where mid-market AI efforts tend to drift into one of two failure modes:

  • Decision paralysis: “Let’s come back to this next month” (repeat for six months).

  • The shiny-object pick: “Let’s pilot this!” (and it dies quietly 90 days later).

The issue usually isn’t intelligence. It’s the lack of a shared, practical decision method that respects mid-market constraints and still moves the business forward.

______________________________________________

What’s really going on (root causes + human factors)

When leaders can’t choose the “next right” AI project, it’s typically because of a few forces happening at once:

1) Competing definitions of value

Value means different things depending on your seat:

  • Finance: margin, cash flow, risk reduction

  • Operations: cycle time, throughput, fewer errors

  • CX: response time, consistency, satisfaction

  • IT: stability, security, scalability, supportability

If you don’t reconcile those lenses, your evaluation turns into polite debate or political positioning.

2) AI ambiguity creates fear (and overconfidence)

AI triggers a few predictable reactions:

  • Fear: “If we get this wrong, we’ll create risk or waste money.”

  • Overconfidence: “This tool is amazing; it will solve everything.”

  • Pilot scar tissue: If a previous AI experiment fizzled, leaders get gun-shy and default to safer, smaller bets (or no bet at all).

All of these instincts are normal. None of them produces good prioritization on its own.

3) Leaders skip the “option definition” step

Teams evaluate vague ideas instead of well-formed options:

  • “Use AI in customer service” (not an option)

  • “Implement Copilot” (a tool decision, not a business option)

  • “Automate intake” (closer, but still needs scope and constraints)

A good option is specific enough that you can estimate effort, adoption impact, risk, and success measures.

4) Mid-market constraints punish the wrong pick

Big enterprises can absorb a failed pilot. Mid-market companies feel it immediately:

  • Limited change capacity

  • Siloed systems and messy data

  • Budget scrutiny

  • Small teams wearing multiple hats

So the “next” project needs to be both valuable and doable, with governance proportional to risk.

______________________________________________

A practical method readers can use next week (steps, not theory)

This evaluation is part of the AI Opportunity Assessment. We develop options collaboratively with internal experts and our consulting team, using a customized template so it’s easy to compare and contrast 3–5 creative (and realistic) paths forward.

You can run the leadership evaluation meeting in 90 minutes if you prep properly.

Step 1: Turn ideas into 3–5 concrete options

For each option, force clarity with a one-page Option Card:

  • What stuck metric does it move?

  • Which part of the system/process does it touch?

  • What is the user workflow change? (Who does what differently?)

  • What data does it require? (Where does it live? How good is it?)

  • What’s the smallest viable version? (2–6 weeks, not 6 months)

  • What are the potential risks? (operational, compliance, customer impact, change fatigue)

  • What are the estimated time and/or cost savings? (even a directional range)

We also do a preliminary AI evaluation for each option (feasibility, data readiness, and adoption considerations) so leaders have more signal, not just opinions.

If an idea can’t fit on one page, it’s not ready to prioritize.
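If your team tracks options in a shared doc or script, the Option Card translates naturally into a small data structure. Below is a minimal sketch in Python; the field names mirror the card above, and the example values are hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class OptionCard:
    """One-page summary of a candidate AI project (fields mirror the card above)."""
    name: str
    metric_moved: str             # the stuck metric this option targets
    workflow_change: str          # who does what differently
    data_required: str            # what data, where it lives, how good it is
    smallest_viable_version: str  # a 2-6 week scope, not 6 months
    risks: list[str] = field(default_factory=list)
    estimated_savings: str = ""   # even a directional range is fine

# Hypothetical card, continuing the "automate intake" idea above.
card = OptionCard(
    name="Automate intake triage",
    metric_moved="Intake-to-assignment cycle time",
    workflow_change="Coordinators review AI-suggested routing instead of sorting manually",
    data_required="Intake forms in the CRM; quality not yet verified",
    smallest_viable_version="4-week pilot on a single intake queue",
    risks=["misrouting", "change fatigue"],
    estimated_savings="Roughly 2-4 hours/day across the team (directional)",
)
print(card.name, "->", card.metric_moved)
```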

Step 2: Score options using 6 criteria that mid-market leaders actually care about

Avoid 20-criteria scorecards. Keep it tight and real.

I recommend these six:

  • Metric impact: How directly will this move the stuck metric?

  • Speed to value: How quickly can we prove or disprove value?

  • Feasibility: Can we realistically deliver with our systems/talent?

  • Data readiness: Is the data accessible, usable, and trustworthy enough?

  • Adoption risk: Will people use it, or fight it (quietly or loudly)?

  • Operational risk: What happens if it’s wrong? (Safety, compliance, customer harm)

Use a simple 1–5 scale, but treat it as an input, not the verdict. In Mural, we run multiple rounds of voting on each criterion with facilitated discussion between rounds, so diverse leadership perspectives actually shape the decision. We then transcribe the conversation into clear guidance for design and implementation teams (rationale, constraints, and success intent).
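For leaders who want to see how the votes roll up, here is a hedged sketch of the arithmetic in Python. Equal weighting and simple averaging are my assumptions for illustration; in practice the facilitated discussion between rounds matters more than the math.

```python
# Roll up 1-5 scores from multiple leaders across the six criteria.
# Equal weights are an assumption; adjust to your team's priorities.
CRITERIA = ["metric_impact", "speed_to_value", "feasibility",
            "data_readiness", "adoption_risk", "operational_risk"]

def option_score(votes: list[dict[str, int]]) -> float:
    """Average each criterion across voters, then average across criteria.
    Convention: score the two risk criteria so that 5 = LOW risk, keeping
    'higher is better' consistent across all six."""
    per_criterion = {c: sum(v[c] for v in votes) / len(votes) for c in CRITERIA}
    return sum(per_criterion.values()) / len(CRITERIA)

# Example: three leaders scoring one option after a voting round.
votes = [
    {"metric_impact": 4, "speed_to_value": 5, "feasibility": 4,
     "data_readiness": 3, "adoption_risk": 4, "operational_risk": 5},
    {"metric_impact": 5, "speed_to_value": 4, "feasibility": 3,
     "data_readiness": 3, "adoption_risk": 3, "operational_risk": 4},
    {"metric_impact": 4, "speed_to_value": 4, "feasibility": 4,
     "data_readiness": 2, "adoption_risk": 4, "operational_risk": 4},
]
print(f"Composite score: {option_score(votes):.2f}")  # an input, not the verdict
```

One design choice worth copying even if you never write code: inverting the risk scores so higher is always better avoids the common scorecard bug where a risky option quietly looks strong.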

Step 3: Pick one “Now,” one “Next,” and one “Later”

Mid-market leaders need focus, not a wish list.

  • Now: best combination of impact + speed + feasibility

  • Next: valuable but needs prerequisite work (data, process, change readiness)

  • Later: high potential but high complexity/risk

This keeps momentum while acknowledging reality.
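If you want a mechanical starting point for that triage, here is a rough rule of thumb as code. The thresholds below are placeholders I chose for illustration; the room, not the function, makes the final call.

```python
def bucket(impact: float, speed: float, feasibility: float,
           prereqs_ready: bool, complexity_risk: float) -> str:
    """Rough Now/Next/Later triage on 1-5 scores (thresholds are placeholders)."""
    if complexity_risk >= 4:
        return "Later"   # high potential, but high complexity/risk
    if min(impact, speed, feasibility) >= 4 and prereqs_ready:
        return "Now"     # best combination of impact + speed + feasibility
    return "Next"        # valuable, but prerequisite work comes first

print(bucket(impact=5, speed=4, feasibility=4,
             prereqs_ready=True, complexity_risk=2))   # Now
print(bucket(impact=5, speed=3, feasibility=3,
             prereqs_ready=False, complexity_risk=2))  # Next
```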

Step 4: Define success measures and “kill criteria” upfront

This is the discipline most teams skip, and it’s how you avoid zombie pilots.

For your “Now” option, define:

  • Success measures: what moves, by how much, by when

  • Adoption measures: usage, cycle time reduction, error rates, satisfaction

  • Kill criteria: what would cause you to pause or stop the initiative

This protects budget, trust, and morale.
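Kill criteria only work when they are concrete enough to check at a review. Here is an illustrative sketch of a pilot check-in; every threshold is a made-up placeholder, not a recommendation.

```python
from dataclasses import dataclass

@dataclass
class PilotCheck:
    """Success and kill thresholds defined *before* the pilot starts."""
    metric_target: float    # e.g., cut cycle time (hours) to at most this value
    min_weekly_usage: int   # adoption floor: below this, investigate
    max_error_rate: float   # kill trigger: above this, pause and review

def review(check: PilotCheck, cycle_time: float,
           weekly_usage: int, error_rate: float) -> str:
    if error_rate > check.max_error_rate:
        return "PAUSE: kill criterion hit (error rate)"
    if weekly_usage < check.min_weekly_usage:
        return "INVESTIGATE: adoption below floor"
    if cycle_time <= check.metric_target:
        return "SUCCESS: metric target met"
    return "CONTINUE: within bounds, not yet at target"

# Illustrative 90-day check-in (all numbers are placeholders).
print(review(PilotCheck(metric_target=24.0, min_weekly_usage=20, max_error_rate=0.05),
             cycle_time=30.0, weekly_usage=25, error_rate=0.02))
```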

______________________________________________

Airline Customer Service Example:

An airline company we worked with had a stuck metric: delays in addressing customer complaints. By the time issues were logged and routed, the flight was over and the moment to recover trust had passed.

Their opportunity statement became: How might we address specific types of complaints in near real time when they arise during travel experiences to turn those frowns upside down?

After mapping the system, we realized the signal was already there, just scattered across sources:

  • Customer service cases + agent notes

  • Member history (status, preferences, prior disruptions)

  • Operational events (gate changes, delays, missed connections)

  • Even public social posts when incidents spiked

Instead of debating “which AI is best,” we evaluated options against the six criteria.

What the team realized:

  • The biggest delay wasn’t “lack of effort,” it was slow issue identification and handoff across channels.

  • AI could accelerate triage by classifying complaints by type and severity, using sentiment analysis, then suggesting preferred resolution options for different member segments (see the toy sketch after this list).

  • The real leverage came when those recommendations were routed to the right frontline moment.
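To make the triage idea concrete, here is a toy sketch. The keyword rules and severity bumps stand in for the trained classification and sentiment models a production system would use; nothing here reflects the client’s actual implementation.

```python
def triage(complaint: str, member_status: str) -> tuple[str, int]:
    """Toy complaint triage: classify type, then score severity 1-5.
    Keyword rules are purely illustrative stand-ins for real models."""
    text = complaint.lower()
    if "missed connection" in text or "delay" in text:
        ctype = "disruption"
    elif "rude" in text or "service" in text:
        ctype = "service"
    else:
        ctype = "other"
    severity = 3
    if any(w in text for w in ("furious", "never again", "worst")):
        severity += 1  # crude sentiment proxy
    if member_status in ("gold", "platinum"):
        severity += 1  # prioritize high-value member recovery
    return ctype, min(severity, 5)

print(triage("Missed connection and the gate agent was rude. Worst trip ever.",
             "platinum"))  # ('disruption', 5)
```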

The decision:

  • Now: real-time complaint detection + classification, with prioritized alerts to in-flight and ground teams (handoff acceleration)

  • Next: guided resolution playbooks personalized by member history (value-added support for frontline teams)

  • Later: proactive prediction of high-risk experiences to prevent complaints before they happen (higher complexity + governance needs)

Many recommendations were communicated directly to flight attendants so they could address issues while the customer was still in the air. The result wasn’t just faster response, it was visible recovery moments that built loyalty.

______________________________________________

Simple tool/template: Evaluation Agenda + Option Card

Leadership Evaluation Session (90 minutes)

Prep (before the meeting):

  • Bring 3–5 Option Cards (one page each)

  • Bring your system/process map

  • Pre-assign a facilitator and a timekeeper

Agenda:

  • (10 min) Reconfirm the stuck metric + why it matters

  • (15 min) Review system map + constraints (data, capacity, risk)

  • (30 min) Walk through each Option Card (6–8 minutes each)

  • (20 min) Score collaboratively using the 6 criteria

  • (10 min) Decide Now / Next / Later

  • (5 min) Define success measures + kill criteria for “Now”

Option Card (one page)

  • Option name:

  • Metric moved:

  • Workflow change:

  • Primary opportunity lens: (handoff / multi-step / value-added)

  • Data required + source:

  • Effort estimate: (smallest viable version)

  • Risks + mitigation:

  • Success measures:

What I see in the mid-market…

Most teams don’t need more ideas.

They need a shared method that:

  • respects limited capacity,

  • prioritizes adoption, and

  • keeps governance proportional to risk.

When you evaluate options with the people who own the metric and the people doing the work, the “right” choice tends to become obvious.

______________________________________________

Key takeaways

  • A good AI decision process is a facilitation problem before it’s a technology problem.

  • Define options clearly, then score them against a small set of criteria you truly care about.

  • If you can’t articulate the workflow change, you’re not ready to prioritize the option.

  • Pick Now/Next/Later to keep momentum without ignoring prerequisites.

If you want help getting aligned and choosing the next right project, Book Your AI Leadership Alignment Workshop.

Jeff Richardson

Jeff is a master facilitator with over 30 years of experience leading strategic planning workshops and change initiatives for 100+ teams from executive to project team level.
