Design AI with people in mind

Design for Adoption

January 12, 2026 · 7 min read

Designing the Solution AND the Adoption: How Teams Make AI Actually Stick

Leaders don’t struggle to find AI ideas. They struggle to make one of them stick.

You choose the project. The build starts. The demo looks promising.

Then the real questions show up in the hallway conversations and Slack threads:

  • Where does the data come from—really?

  • Who approves the output?

  • What happens when AI is wrong?

  • Who owns it after launch?

  • Why are people quietly doing it the old way again?

Here’s the mid-market reality check: you don’t have a spare team to figure this out. Your best operators are already at capacity. IT is juggling a backlog. Compliance (if you have it) is stretched thin.

So adoption work tends to get postponed until “after go-live.” And that’s when the initiative drifts.

AI that works but isn’t used = no value.

Why adoption collapses in the mid-market

When adoption collapses, it usually looks quiet. No dramatic failure. Just a slow fade back to old habits.

These are the most common root causes I see in mid-market AI initiatives:

  • Trust gap: people fear being blamed when AI is wrong—so they avoid using it where it matters.

  • Workflow mismatch: AI gets bolted on like an extra step instead of embedded where decisions actually happen.

  • Exception overwhelm: the happy path works, but edge cases weren’t designed—so the tool breaks at the exact moments that matter most.

  • Governance extremes: you either get none (“just ship it”) or a bureaucracy that slows everything down.

  • Training ≠ enablement: people need updated SOPs and decision rules, not a one-time demo.

  • Ownership unclear after go-live: process/product/tech/risk responsibilities blur, and the system stops improving.

What I see in the mid-market: teams don’t reject AI because they’re anti-technology. They reject ambiguity. Design for exceptions + ownership, or adoption collapses quietly.

A practical method you can use next week

The best mid-market AI programs treat technical design and change design as the same workstream.

Below is a practical method you can run next week with a small cross-functional group (60–90 minutes). It’s workflow-first, human-in-the-loop proportional to risk, minimal on governance overhead, and measurable from day one.

Step 1 — Design the workflow first

Before you touch tools or vendors, get specific about the workflow. If you can’t describe how work changes, you’re not ready to build.

  • Where AI enters: What triggers AI? (claim submitted, ticket created, work order opened, email received)

  • What AI produces: Summary, extraction, classification, recommendation, draft, alert

  • Validation checkpoint (before AI acts): What key information must a person confirm before AI executes anything? (recipient/customer, amounts, dates, policy flags, attachments, compliance triggers)

  • What the human does next: Approves, edits, routes, escalates, rejects

  • Exception paths: What happens when AI is uncertain, conflicting, or missing context?

  • Definition of done: What outcome means “the work is complete,” not “the model ran”
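
If it helps to make those six answers concrete, they can be captured as a one-page spec before any tool or vendor is chosen. Here's a minimal sketch in Python, using a hypothetical support-ticket triage workflow; every field name and value is illustrative, not a required format.

```python
from dataclasses import dataclass

@dataclass
class WorkflowSpec:
    """One AI-assisted step, described before any tooling is chosen."""
    trigger: str                        # where AI enters the workflow
    ai_output: str                      # what AI produces
    validation_checkpoint: list[str]    # what a person confirms before AI acts
    human_actions: list[str]            # what the human does next
    exception_path: str                 # what happens when AI is uncertain or missing context
    definition_of_done: str             # the outcome that means the work is complete

# Hypothetical example: support-ticket triage
ticket_triage = WorkflowSpec(
    trigger="ticket created",
    ai_output="draft priority + suggested routing",
    validation_checkpoint=["customer identity", "contract tier", "compliance flags"],
    human_actions=["approve", "edit", "escalate"],
    exception_path="route to senior agent when confidence is low or data conflicts",
    definition_of_done="ticket assigned with correct priority and owner notified",
)
```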

Step 2 — Set human-in-the-loop proportional to risk

Human-in-the-loop isn’t a lack of confidence. It’s responsible design. The goal is to reduce friction where errors are cheap—and add control where errors are expensive.

  • Low risk (internal summaries/drafts): lightweight review; track edits/overrides

  • Medium risk (customer comms, ops prioritization): required human approval before external impact; escalation for uncertainty

  • High risk (clinical/financial/safety/compliance): strict approval, audit trails, clear “AI is advisory only,” incident response plan

Practical rule: if the AI output can change money, safety, compliance status, or a customer relationship, put a validation checkpoint in front of execution.
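
As a rough illustration of that rule, the review level can be decided by a few explicit flags rather than judgment calls in the moment. A minimal sketch, where the flag names are assumptions for illustration only:

```python
from dataclasses import dataclass

@dataclass
class AIOutput:
    """Flags the workflow sets on each AI output; names are illustrative."""
    affects_money: bool = False
    affects_safety: bool = False
    affects_compliance: bool = False
    customer_facing: bool = False

def review_level(output: AIOutput) -> str:
    """Pick the human-in-the-loop level proportional to risk."""
    # High risk: can change money, safety, or compliance status
    if output.affects_money or output.affects_safety or output.affects_compliance:
        return "strict approval + audit trail (AI is advisory only)"
    # Medium risk: external impact, e.g. customer communications
    if output.customer_facing:
        return "required human approval before external impact"
    # Low risk: internal summaries/drafts
    return "lightweight review; track edits and overrides"

# Example: a draft customer email needs approval before it goes out
print(review_level(AIOutput(customer_facing=True)))
```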

Step 3 — Minimal Viable Governance (MVG)

Governance doesn’t need to be heavy. It needs to be clear. MVG is the smallest set of roles and rhythms that keep the system owned, safe, and improving.

Define owners:

  • Business owner (owns the business metric)

  • Process/Product owner (owns the workflow + adoption + SOP updates)

  • Technical owner (owns reliability, security, integrations, monitoring)

  • Risk owner (owns controls, auditability, escalation rules, incident response)

Define operating rhythms:

  • Weekly AI Ops check (30 minutes): exceptions, failures, user friction, top corrections

  • Monthly performance review: business impact + adoption + quality metrics

  • Quarterly model/workflow review: adjust thresholds, update SOPs, refine exception handling

  • Incident pathway: who is paged, who approves rollback, how users are notified, and what gets documented

Suggested 30-minute AI Ops agenda:

  • 5 min - Adoption signal: usage rate + drop-offs in the workflow

  • 10 min - Exceptions: top 3 reasons claims/tasks are blocked or escalated

  • 10 min - Quality: override rate, error themes, time-to-correction

  • 5 min - Decisions: what to tweak this week (rules, prompts, UI, SOP)
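
One way to keep MVG from drifting is to write the owners, rhythms, and incident pathway down in a single place the whole team can see. A minimal sketch, with placeholder names and cadences:

```python
# Minimal Viable Governance, captured in one file (names and cadences are placeholders)
mvg = {
    "owners": {
        "business": "VP Operations",            # owns the business metric
        "process_product": "Claims Ops Lead",   # owns workflow, adoption, SOP updates
        "technical": "Platform Engineering",    # owns reliability, security, monitoring
        "risk": "Compliance Manager",           # owns controls, auditability, incident response
    },
    "rhythms": {
        "weekly_ai_ops_check": "30 min: exceptions, failures, user friction, top corrections",
        "monthly_performance_review": "business impact + adoption + quality metrics",
        "quarterly_review": "adjust thresholds, update SOPs, refine exception handling",
    },
    "incident_pathway": {
        "paged": "technical owner",
        "rollback_approval": "business owner + risk owner",
        "user_notification": "process/product owner",
    },
}
```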

Step 4 — Instrument measurement from day one

If you can’t see it, you can’t manage it. Track a small set of metrics that connect business value to real behavior.

  • Business metric: the stuck metric you’re trying to move (cycle time, rework, delays, throughput)

  • Process metrics: handoff time, queue size, time-to-decision, rework loops

  • Adoption metrics: active users, usage frequency in the target step, drop-off points, time-to-first-use

  • Quality metrics: override rate, escalation rate, error types, time-to-correction, audit findings (if applicable)
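
To make this trackable from day one, each AI-assisted task can emit one small event record that the weekly Ops check and monthly review both read from. A minimal sketch, with illustrative field names:

```python
from datetime import datetime, timezone
from typing import Optional
import json

def log_ai_event(task_id: str, used_ai: bool, overridden: bool, escalated: bool,
                 cycle_time_minutes: float, error_type: Optional[str] = None) -> str:
    """Record one AI-assisted task; field names are illustrative."""
    event = {
        "task_id": task_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "used_ai": used_ai,                        # adoption: did the user actually use the AI step?
        "overridden": overridden,                  # quality: did a human override the output?
        "escalated": escalated,                    # quality: did it hit an exception path?
        "cycle_time_minutes": cycle_time_minutes,  # process/business: time to decision
        "error_type": error_type,                  # quality: theme for the weekly Ops review
    }
    return json.dumps(event)

# Example: an approved claim, AI used, no override, 12-minute cycle time
print(log_ai_event("claim-1042", used_ai=True, overridden=False,
                   escalated=False, cycle_time_minutes=12.0))
```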

Step 5 — Capability building plan

Mid-market companies don’t need a giant AI Center of Excellence to win. They need a small operating model that can run the rhythm and improve the system over time.

  • One internal owner (process/product) who can run weekly Ops and drive SOP updates

  • Targeted partner support for integration/model iteration (as needed)

  • Enablement that changes behavior: updated SOPs, role-based job aids, and a clear “what to do when…” playbook

  • A communications owner who reinforces the why, the guardrails, and the wins

A question to ask your team this week: “If this AI output is wrong, who gets blamed today—and how do we change that?”

If you don’t address that fear directly, adoption will always be fragile.

Vignette: Dental Insurance Claims Processing (a high-value “Now” project)

I worked with a dental insurance company that initially wanted to implement AI-powered X-ray analysis to accelerate claims processing. From a leadership perspective, the opportunity seemed obvious: review images faster, reduce manual effort, and lower costs.

But when we mapped the full system—from dental practices submitting claims through adjudication and payment—we uncovered a different set of levers.

The biggest sources of delay weren’t the X-ray reviews themselves. They were upstream: inconsistent submission formats, incomplete claims that triggered manual rework, and handoffs between systems that required human intervention.

So the “Now” project became simpler—and immediately valuable: validate every claim submission before adjudication so it can be processed without delay.

Workflow-first design

  • Streamline the practice submission interface and clarify requirements (what’s required vs. optional, and why).

  • Run an AI pre-adjudication validation check for completeness and consistency: required fields, attachments, provider/patient details, procedure codes, tooth/surface details, dates of service, eligibility/policy flags, and narrative requirements when needed.

  • Insert a validation checkpoint: before the claim advances, a person at the practice confirms key items when the system flags uncertainty (e.g., mismatched patient data, missing attachments, ambiguous codes).

  • If anything is missing or incorrect, automatically reply to the initiating practice with a concise list of corrections needed—then hold the claim in a “needs info” state.

  • If the claim passes validation, route it to adjudication without additional manual touchpoints.
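
To show the shape of that pre-adjudication check, here's a minimal sketch. The required fields and claim structure are hypothetical stand-ins; the real rules come from the payer's adjudication requirements.

```python
# Minimal sketch of pre-adjudication validation (field names are hypothetical)
REQUIRED_FIELDS = ["provider_id", "patient_id", "procedure_code",
                   "tooth_surface", "date_of_service", "attachments"]

def validate_claim(claim: dict) -> dict:
    """Return either a clean pass or a concise correction list for the practice."""
    corrections = [f"Missing or empty field: {f}"
                   for f in REQUIRED_FIELDS if not claim.get(f)]
    if corrections:
        # Hold the claim in a "needs info" state and reply with specific fixes
        return {"status": "needs_info", "corrections": corrections}
    # Passed validation: route straight to adjudication, no manual touchpoints
    return {"status": "route_to_adjudication", "corrections": []}

# Example: a claim missing attachments gets held with one targeted correction
print(validate_claim({"provider_id": "P-88", "patient_id": "M-310",
                      "procedure_code": "D2740", "tooth_surface": "14-O",
                      "date_of_service": "2026-01-05", "attachments": None}))
```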

Human-in-the-loop choices

  • Auto-clear low-risk validation failures (e.g., missing non-critical fields) via structured practice prompts and resubmission.

  • Route ambiguous, high-dollar, or policy-sensitive claims to a human reviewer before adjudication.

  • Log what the AI flagged, what was corrected, and time-to-correction (for auditability and continuous improvement).

Minimal Viable Governance + measurement

  • Define owners (business, process/product, technical, risk) and run a weekly 30-minute AI Ops check focused on exceptions and adoption friction.

  • Measure first-pass yield, average cycle time, pended/returned claim rate, rework hours, correction turnaround time, escalation rate, and payment delays.

By addressing those friction points first, the organization unlocked immediate cost savings and cycle-time reductions.

That groundwork made it possible to pursue the more complex X-ray AI solution in parallel—with far greater confidence it would deliver value. The system map revealed where AI could help now and where it made sense to invest later, avoiding a costly bet on technology before the foundations were ready.

Tool: AI Solution + Adoption One-Pager (printable)

Use the one-pager below to align stakeholders before build, during pilot, and at go-live.

It’s designed to force clarity on workflow, validation, exception handling, ownership, and measurement—without turning governance into a bureaucracy.

[AI Solution + Adoption One-Pager template (printable)]

Key takeaways

  • Workflow-first beats model-first.

  • Human-in-the-loop is responsible design, not reluctance.

  • Minimal viable governance prevents both chaos and bureaucracy.

  • If you don’t measure adoption and exceptions, you can’t manage the initiative.

Next step

If you want help getting aligned and choosing the next right project…

Book Your AI Leadership Alignment Workshop: https://empoweredalliances.com/ai-leadership-sprint-1591

Jeff Richardson

Jeff is a master facilitator with over 30 years of experience leading strategic planning workshops and change initiatives for 100+ teams from executive to project team level.
