Most companies now have an “AI story”. A slide in the board pack. A Copilot license rollout. A few experiments with ChatGPT that were “interesting” but never made it past the first month.

Meanwhile they read about competitors “rewiring their operating model with AI” and feel like they are watching a different sport.

They are not failing because they picked the wrong tool. They are failing because they skipped the levels.

AI does not arrive as a single transformation. It shows up as a series of operating levels you have to climb, in order. Ignore that, and every initiative feels disconnected, fragile, and underwhelming.

Think of those levels like camps on AI Everest. Each camp is a higher altitude: more exposure, more coordination, more risk. You don’t helicopter to the summit. You acclimatise your way up.

We’ve used this exact framework at Overdose to help teams climb their own version of AI Everest: aligning leaders, automating key workflows, and building real adoption habits that last.

Self-Diagnosis: Where Are You, Really?

Before you climb, be honest about where you’re standing. You don’t need a survey, just a gut check.

  1. If your team still argues about what “AI” means or which tools are allowed, you’re at Base Camp: Align. You’re not behind; you’re just untuned.

  2. If people use ChatGPT or Copilot occasionally but no one shares what works, you’re at Camp 1: Augment. You’ve got sparks, not systems.

  3. If a few individuals have clever automations that only they understand, you’re at Camp 2: Automate. Local hacks, zero leverage.

  4. If one department has an AI-driven process that’s starting to run across a whole function, you’re at Camp 3: Alliance. You’re redesigning how a slice of the business operates, but this alone won’t move the company P&L yet.

  5. And if multiple functions are sharing data and decisions through AI without endless meetings, you’re at Summit: Ascend. That’s rare air.

Most organisations live between Augment and Automate, pretending they’re at Alliance. That’s fine. Clarity is progress. The only failure is not knowing which level you’re on.

Base Camp: Align (Level 0)

Everyone agrees what game they’re playing.

This is where most companies go wrong. They jump straight to talk of “agents running our business” before they’ve even agreed what AI means. Start here:

  • Automation is rules and triggers. It removes effort.

  • AI is reasoning and pattern recognition. It removes guesswork.

  • Agents are goal-driven systems that plan and act across tools with minimal input.

Most organisations want agents and transformation. What they actually need first is fluency. People who know how to work with AI as an assistant, then as an automation engine, and only then as a teammate.

On the mountain, Base Camp is where you test the gear, agree the route, and decide what “success” even means before anyone climbs. Align is that stage for AI.

Without it, you get what most AI roadmaps quietly become: scattered pilots, enthusiastic pockets, confused risk conversations, and a lot of duplicated effort that never compounds.

What this looks like

  • Executives agree on how AI fits into the business.

  • Teams have freedom to test, experiment, and learn: failure is part of the process.

  • A simple measurement framework tracks adoption and usage.

  • Governance exists for AI discussions, tool access, and data risk.

  • Everyone can explain automation, AI, and agents in one sentence.

  • Leaders use AI tools themselves and share what worked.

How to measure Align

  • Policy adoption: every team has a clear AI guideline.

  • Active sponsors: leaders share at least one AI example each quarter.

  • Tool fragmentation: the number of unapproved tools in use drops quarter to quarter.

If you can’t measure it, you haven’t aligned it.

Camp 1: Augment (Level 1)

Tools to help people do their own work better (Individual AI tools).

This is where AI stops being a headline and starts being a habit. On AI Everest, Camp 1 is your first climb above Base Camp. You are still close to safety, but this is where people learn how their body responds at altitude.

Nothing is automated here. Nothing touches core systems. That is deliberate. Augment is controlled exposure. People build comfort, intuition, and evidence that this is not a toy.

Think about individuals using ChatGPT, Claude, or Gemini as an assistant, or embedded copilots inside Office, Google Workspace, IDEs and CRMs. The work still flows through humans. AI is just a very capable assistant sitting beside them.

At this level, you notice a quiet shift:

  • Work that used to take an hour now takes ten minutes.

  • Reports sound more structured. Emails are clearer.

  • People have more options on the table when they make decisions.

Most of the leverage comes from a simple skill: talking to the model properly, through better prompt and context engineering.

In almost every organisation that actually moves forward, we see the same patterns show up. They:

  • Protect time for experimentation, so people can practise without feeling guilty. Level 0 is your building block for creating this time.

  • Teach practical prompting, rooted in real work, not abstract tricks. External training support is critical in this area.

  • Capture good prompts and workflows in a shared library, instead of letting them stay buried in random chats (see the sketch after this list).

  • Start with micro use cases that actually matter to individuals: better briefs, faster research, tighter summaries.
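
One lightweight way to build that shared library is to store prompts as structured entries rather than loose chat snippets. Below is a minimal sketch in Python; the fields, the example entry, and its contents are all hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class PromptTemplate:
    """One entry in a shared prompt library (all fields hypothetical)."""
    name: str
    owner: str      # who maintains this entry
    use_case: str   # the real work it supports
    template: str   # with {placeholders} for the user's context
    tags: list[str] = field(default_factory=list)

# Example entry: a reusable brief-writing prompt rooted in real work.
brief_prompt = PromptTemplate(
    name="client_brief_v2",
    owner="marketing",
    use_case="Turn raw discovery notes into a one-page client brief",
    template=(
        "You are helping draft a client brief.\n"
        "Audience: {audience}\n"
        "Raw notes:\n{notes}\n"
        "Produce: objective, key messages, constraints, open questions."
    ),
    tags=["brief", "marketing", "summarise"],
)

# Usage: fill the placeholders, then paste the result into your assistant.
print(brief_prompt.template.format(audience="retail CFO", notes="..."))
```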

How to measure Augment

You are measuring behaviour and basic efficiency here, not full business impact.

  • Adoption. Percentage of staff who used an AI assistant over a period.

  • Depth of use. How many people used AI on three or more distinct tasks each week.

  • Time saved estimates. Ask teams to log rough time saved per recurring task.

  • Prompt library usage. Not how many prompts you have, but how often they are used.

  • Quality uplift. Occasionally compare outputs before and after AI support.

If you cannot point to visible changes in how individuals work, you are not done with Augment.

Camp 2: Automate (Level 2)

People start automating their own recurring workflows. (Individual processes)

Once people trust AI as an assistant, the next questions arrive on their own: “Why am I still doing this monthly report by hand?” “Why am I copying notes into three different tools?”

On the mountain, Camp 2 is when you start hauling real loads. You are not just walking up and down for practice. That is when weak routines and bad gear start to show.

Two questions suddenly run the show:

  • How integrated is this AI with our systems? Can it read from the CRM, documents, spreadsheets, or ticketing tools?

  • How automated is it? Does every step need a human click, or can parts run without anyone watching?

At Automate, people begin to stitch actions together:

  • Meeting notes flow into a standard summary and then into the CRM.

  • Invoices trigger checks, updates, and notifications without anyone dragging files around.

  • Weekly performance dashboards write themselves on a schedule.

You can have a clever workflow living entirely inside ChatGPT that never touches your data. You can also have deep integration that still requires manual nudges. Both can be useful. The point is knowing which you are building.

Underneath, most GenAI workflows do a small handful of things very well:

  • They generate content.

  • They summarise long material.

  • They extract specific fields.

  • They categorise or tag.

If a task does not lean heavily on at least one of those, it is probably not your best automation candidate.
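
To make those four verbs concrete, here is a minimal sketch of the “extract” and “categorise” steps. The call_llm helper is a hypothetical stand-in for whichever model API you actually use, and the fields and categories are invented for illustration.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: wire up your provider's client here."""
    raise NotImplementedError

def extract_invoice_fields(invoice_text: str) -> dict:
    # Extract: pull specific fields out of unstructured text.
    prompt = (
        "Extract supplier, amount, and due_date from this invoice.\n"
        "Reply with JSON only.\n\n" + invoice_text
    )
    return json.loads(call_llm(prompt))

def categorise_ticket(ticket_text: str) -> str:
    # Categorise: tag incoming work so it routes to the right queue.
    prompt = (
        "Classify this support ticket as one of: billing, technical, other.\n"
        "Reply with the single label only.\n\n" + ticket_text
    )
    return call_llm(prompt).strip().lower()
```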

The organisations that handle this level well usually:

  • Choose accessible tools for non-technical staff, like Relay.app or Autohive for first builds, while letting more technical teams play with n8n or custom code. See here for a rundown of the pilots we ran at Overdose to evaluate AI agent and automation tooling.

  • Use a work product audit to list recurring artefacts (reports, decks, updates, emails) and ask “what here could generate, summarise, extract, or categorise?”

  • Decide early whether to centralise building in a small team, or let departments build locally with light governance.

  • Track hours saved and errors reduced, even before big P&L impact, so the value is visible instead of anecdotal.

Automate is still mostly local. One person or one team owns the workflow. The impact is real, but the blast radius is controlled.

How to measure Automate

This is where you can finally put numbers in front of finance without bluffing.

  • Hours saved per month. Honest estimates for each workflow.

  • Cycle time. Before and after measures for key processes, like days to issue an invoice.

  • Error rate. Where it makes sense, compare mistakes, rework, or missed steps before and after automation.

  • Human touch ratio. How many steps still need manual intervention. If every flow still needs five approvals, you are not automating much (see the sketch after this list).

  • Utilisation. How often each workflow actually runs. A clever flow used once a month is not a strong candidate for scaling.
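
If you log each workflow run, the last two measures fall out of simple arithmetic. A minimal sketch, assuming a hypothetical run log that records total and manual steps per execution:

```python
# Hypothetical run log: one dict per workflow execution.
runs = [
    {"workflow": "invoice_checks", "steps": 7, "manual_steps": 2},
    {"workflow": "invoice_checks", "steps": 7, "manual_steps": 1},
    {"workflow": "weekly_dashboard", "steps": 5, "manual_steps": 0},
]

def human_touch_ratio(runs: list[dict]) -> float:
    # Share of all steps that still needed a person.
    total = sum(r["steps"] for r in runs)
    manual = sum(r["manual_steps"] for r in runs)
    return manual / total if total else 0.0

def utilisation(runs: list[dict]) -> dict[str, int]:
    # How often each workflow actually ran in the period.
    counts: dict[str, int] = {}
    for r in runs:
        counts[r["workflow"]] = counts.get(r["workflow"], 0) + 1
    return counts

print(f"Human touch ratio: {human_touch_ratio(runs):.0%}")  # 16%
print(utilisation(runs))  # {'invoice_checks': 2, 'weekly_dashboard': 1}
```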

If you cannot show improvements in time, errors, or cycle time, you are still playing with prototypes, not operating at Automate.

Camp 3: Alliance (Level 3)

A function connects tools and workflows around shared processes. (Functional systems)

On AI Everest, Camp 3 is when the rope really matters. You move as one group. One person’s move affects everyone else. Coordination is key.

This is the point where it stops feeling like “some neat automations” and starts feeling like “this is how we work now”. You are no longer just improving single tasks. You are connecting entire processes across people and systems, with AI in the middle of the flow.

Typical examples:

  • A sales process where AI enriches leads for BDRs, drafts tailored outreach for SDRs, updates the CRM for sales managers, and nudges account executives when deals stall.

  • A support process where AI categorises incoming tickets for triage specialists, pulls context from multiple databases for agents, drafts responses for supervisors to review, and escalates critical issues to the head of support.

  • A marketing process where AI generates creative variants for designers, pushes campaigns live for channel managers, analyses performance for analysts, and recommends budget reallocations to the marketing lead.

These flows no longer belong to one person. They sit across roles inside a function. Sales, marketing, service, operations, finance. That is the step change.
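
As a rough sketch of what the support example above could look like as an orchestrated flow, here is a skeleton in Python. Every function, routing rule, and field name is hypothetical; a real build would live in your workflow or agent platform, not a script.

```python
# Minimal sketch of an orchestrated support flow (all names hypothetical).
# Each step mirrors a role in the function: triage, agent, supervisor, head.

def categorise(ticket: dict) -> str:
    # AI step: tag the ticket for the triage specialist.
    return "critical" if "outage" in ticket["body"].lower() else "routine"

def gather_context(ticket: dict) -> dict:
    # AI step: pull the history agents would otherwise look up by hand.
    return {"customer_history": f"lookup for {ticket['customer_id']}"}

def draft_response(ticket: dict, context: dict) -> str:
    # AI step: draft for a supervisor to review, never auto-send.
    return f"Draft reply to {ticket['customer_id']} using {context}"

def handle(ticket: dict) -> str:
    if categorise(ticket) == "critical":
        return "escalated to head of support"  # human decision point
    draft = draft_response(ticket, gather_context(ticket))
    return f"queued for supervisor review: {draft}"

print(handle({"customer_id": "C-101", "body": "Checkout outage since 9am"}))
```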

To make Alliance work, a few things have to appear:

  • End to end process owners who understand the whole journey, not just one tool or one step.

  • Cross role squads that treat each orchestrated workflow like a product, with scope, prototypes, tests, and releases.

  • Better data plumbing, because this is where “we will fix our integration later” comes back to bite.

  • Transparent AI behaviour, so people can see what the system did, why it did it, and where humans still decide.

This is usually when “agent platforms” show up. Tools that can hold a goal in mind and figure out the sequence of actions across apps, instead of following one rigid flow.

The label matters less than the mindset shift. You are no longer bolting AI onto old processes. You are redesigning processes with AI baked into them.

How to measure Alliance

Now you measure end to end, not just steps.

  • End to end cycle time. For example, lead created to opportunity closed, ticket opened to resolved, idea briefed to campaign live. This is what executives actually care about.

  • Throughput and capacity. Deals handled, tickets closed, campaigns shipped per period at the same headcount.

  • Exception rate. Percentage of cases that still need manual rescue or heavy intervention. A high exception rate tells you where the system is brittle.

  • Cross team handoff time. How long work sits between roles or teams. AI should reduce the “waiting in someone’s inbox” part.

If your Alliance story sounds exciting but your end to end metrics have not moved, you have decorated the process, not changed it.

Summit: Ascend (Level 4)

AI is woven into how the organisation coordinates, decides, and executes. (Cross functional AI system).

At the top of AI Everest, the air is thin. Every lazy compromise you made on the way up shows up here as fragility, risk, or slow reactions. Very few organisations reach this level on purpose. Some stumble into something that looks like it, realise nothing really joins up, then quietly roll parts of it back.

Ascend is what most board decks hint at when they talk about “AI first operating models”.

Here, workflows stop at department borders only on an org chart. In practice:

  • Signals from support feed into product and pricing.

  • Competitive shifts ripple into budgets, roadmaps, and messaging automatically.

  • Customer behaviour flows back into how teams schedule work and where they put attention.

AI is not a separate thing anymore. It is the coordination layer, the early warning system, and the first pass at many decisions. It is orchestrated across your business and woven into your operating system. That takes more than technology.

  • You need proof from Augment, Automate, and Alliance that AI actually works in your context.

  • You need data governance that lives in reality, not policy slides. Clean, connected data, not slogans.

  • You need org design that accepts some roles will change shape, some will disappear, and new ones will appear around AI operations and governance.

  • You need leaders who are willing to let go of some control and focus more on steering outcomes than approving every click.

Ascend is not a dashboard. It is the result of years of small, cumulative decisions to treat AI as part of your infrastructure instead of a project with a logo.

How to measure Ascend

Here you shift measurement from local efficiency to structural advantage.

  • Core business KPIs. Margin, revenue per FTE, time to market, customer retention, NPS or similar, tracked over time against your own pre AI baseline.

  • AI contribution view. Rough but honest attribution of which AI enabled changes link to which shifts in those KPIs. You are aiming for a cause and effect story, not “AI was somewhere in the mix”.

  • Rework and override rate. How often AI suggested actions or decisions are reversed by humans later. A falling override rate with stable or improved results is a strong maturity signal.

  • Decision latency. Time from signal to meaningful response. For example, competitor move to counter move, new signal to pricing change, insight to product tweak.

  • Portfolio health. A list of AI powered processes with owners, last update date, and measured impact. Stale automations at Ascend are risk, not value.

If you call yourself “AI first” but cannot show how it changed the shape of your business, you are still at Alliance with better marketing.

The Only Progression Path That Actually Holds

When you step back, the climb looks like a staircase up the mountain.

  • Align gives you a shared mental model and permission to experiment.

  • Augment builds personal fluency with AI as a partner using individual tools.

  • Automate builds the habit of turning repeat work into workflows at the individual or role level.

  • Alliance stretches that habit across teams and systems inside a function.

  • Ascend reshapes the organisation so AI sits inside the way you coordinate and decide.

The mistake is trying to teleport.

You jump to Alliance when you have not automated anything yet. You talk about Ascend when you have not even aligned on definitions. You buy “agent platforms” while people are still manually pasting into ChatGPT twice a week.

In theory you can deploy advanced tools on day one. In practice they sit underused because the organisation has not climbed the earlier levels.

A smarter sequence looks like this:

  • Start with low integration and low automation where the learning value is high and the risk is low.

  • Move to medium integration and growing automation on safe, repetitive work.

  • Only then attempt high integration and high automation on shared processes.

  • Treat full Ascend as a destination, not a pilot.

Each step de-risks the next. You can stack measurement on top of this staircase:

  • At Align and Augment, track behaviour and fluency.

  • At Automate and Alliance, track process and throughput.

  • At Ascend, track strategic and financial outcomes.

When your metrics do not match your level, your story will always look better or worse than reality.

Where Most Organisations Really Are

If you strip away the slideware and the vendor decks, the same patterns show up.

  • Many are stuck at Align, arguing about tools and policy with no shared language and no clear owner.

  • Plenty sit in Augment, with a handful of power users, some early Copilot fans, and no real shift in how work actually flows.

  • Some are grinding in Automate, building random automations with no backlog, no benefits tracking, and no bigger plan.

  • A few tried to jump straight to Alliance, spent a lot of money, and now quietly wonder why nothing really feels different.

Almost nobody sustains Ascend without having earned it.

The gap is not a lack of frameworks. It is the refusal to admit which level they are actually on.

Want to See How It Works in Practice?

We’re now running AI Innovation Sessions with other businesses to share what we learned: what worked, what didn’t, and how to get unstuck fast.

If you’d like us to present the journey to your leadership team and show how to apply it to your own business, get in touch and we'll set up a time to talk.

Written by Mike

Passionate about all things AI, emerging tech and start-ups, Mike is the Founder of The AI Corner.

