
Credit: Madrona
Most teams are stuck arguing definitions while the work sits undone. I've fallen into this camp too, trying to articulate exactly where the lines start and stop between these different applications of AI in the workforce.
The practical question is simpler: what jobs can AI take off your plate today, safely, repeatedly, measurably?
The three modalities that matter:
Chatbots (conversational AI)
Copilots (in-artefact assistance, think Gemini in Google Workspace or ChatGPT in Microsoft 365)
Agents (work without you)
Get these right, and the labels take care of themselves.
1) Chatbots: open-ended conversation
If you’re just getting value from AI today, this is usually where it starts. Chatbots are the fastest way to think, explore, and iterate without changing your existing tools.
You go to an app (ChatGPT, Claude, Gemini), ask questions, iterate on drafts, and think out loud with an AI.
It’s live, synchronous, and text-forward.
Best for: research, brainstorming, rewrites, stress-testing arguments.
Mini playbook (chatbots):
Define the task and outcome (“Draft a 1‑page brief with pros/cons and a recommendation”).
Seed context (audience, tone, constraints, examples).
Iterate with meta-prompts (“What’s missing? Where are the risks?”).
Summarise actions and open questions.
Export artefacts to your system of record.
2) Copilots: AI embedded in your artefacts
When you’re past blank-page thinking and into making, copilots are the smoothest way to add speed without changing your workflow. The artefact stays centre stage; AI sits alongside it.
You’re creating something: code, a doc, a deck, and AI lives alongside it (side panel, inline autocomplete).
Gemini in Google Workspace and ChatGPT in Microsoft 365 are the obvious examples, alongside Cursor’s autocomplete; Grammarly, with its suggestions and spellcheck, is another.
You still drive; AI accelerates where you already work.
Mini playbook (copilots):
Open the artefact and set rules (style, voice, structure).
Accept/reject suggestions live; add inline prompts for tricky sections.
Checkpoint versions before big changes.
Run consistency checks (headings, references, tone).
Final human QA for risk and accuracy.
3) Agents: work that happens without you
Once you know what “good” looks like and where the drudgery lives, agents take the wheel. They shift work from “I’m doing it with AI” to “AI is doing it for me”.
Tool reality: most teams start with workflow-first platforms (Relay.app, Zapier), bolt on more technical builder tools (n8n), and explore agent-first systems (Autohive) as confidence grows.
Once configured, AI acts on your behalf behind the scenes.
It can label emails, log contacts in your CRM, draft follow-ups, post updates, and run on schedules or triggers.
Think “delegated teammate”, not “assistant you have to talk to.” It wakes up, does the job, ships an output, and goes back to sleep.
Mini playbook (agents/workflows):
Define the outcome and constraints (what good looks like; lines it must not cross).
Grant tool access and permissions (read/write scopes, audit).
Set triggers, schedules, and inputs (events, inboxes, webhooks, etc.).
Enable logs and traceability (who/what/when/why).
Add approval thresholds (human-in-the-loop for high-risk actions).
Track weekly metrics (cost per task, accuracy, exceptions).
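As a rough illustration, the playbook above could be captured in a single configuration object. This is a hypothetical sketch, not any platform's actual schema; every name, scope, and threshold here is made up:

```python
from dataclasses import dataclass, field

@dataclass
class AgentConfig:
    """Hypothetical agent setup mirroring the playbook steps above."""
    outcome: str                      # what "good" looks like
    hard_limits: list                 # lines it must not cross
    scopes: dict                      # tool access: name -> "read"/"write"
    triggers: list                    # events, inboxes, webhooks
    audit_log: bool = True            # who/what/when/why traceability
    approval_above_risk: float = 0.7  # human-in-the-loop threshold
    metrics: list = field(
        default_factory=lambda: ["cost_per_task", "accuracy", "exceptions"]
    )

    def needs_approval(self, action_risk: float) -> bool:
        # High-risk actions pause for a human; low-risk ones auto-run.
        return action_risk >= self.approval_above_risk

cfg = AgentConfig(
    outcome="New leads enriched and logged in CRM within 1 hour",
    hard_limits=["never email customers directly"],
    scopes={"crm": "write", "email": "read"},
    triggers=["new_lead_webhook"],
)
```

The point of writing it down like this is that every playbook step becomes a field you can review, version, and audit, rather than a setting buried in a UI.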
The bottom line: chatbots and copilots augment how you work in the moment; agents do the work without you being there.
The “workflow vs agent” distinction
Before we get lost in labels, it's best to align on how the work actually gets done. The difference isn’t mystical, it’s about who designs the path and when.
Workflow: the human designs the flowchart upfront. “When a new lead arrives, look up LinkedIn, extract X, score by Y, write to CRM, draft email.” AI applies judgement inside your lanes, but you specify the lanes.
Agent: you set a goal and give it tools, knowledge, and memory. “Add new leads to CRM and enrich them; you can search LinkedIn, the web, and internal data.” AI figures out the flowchart on the fly.
My favourite analogy:
Workflows are trains: fast and reliable on fixed tracks. Relay.app, Zapier, and n8n sit closer to the train end of the spectrum.
Agents are cars: you pick the destination; the route is chosen on the fly. Autohive fits more like a car.
Is the debate relevant? Some think so, some don't. Reasons why it shouldn't matter:
In your operating reality, both paths wake up, do work, ship an output, and go back to sleep.
It’s a spectrum of autonomy, just like hiring a junior vs senior teammate. Juniors get prescriptive checklists; seniors get objectives and boundaries.
In reality, workflows and agents overlap. Workflows often let AI make some decisions, and agents still follow a few “do this, then that” rules. The real difference is who sets the steps. With workflows, a human maps the flow upfront. With agents, a human sets the goal and guardrails, and the AI figures out the steps from there.
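A toy sketch of that difference: in a workflow the step order is hard-coded by a human; in an agent loop a planner picks the next step toward the goal. Here `plan_next_step` stands in for an LLM call and is purely illustrative:

```python
def run_workflow(lead, steps):
    """Human-designed flowchart: the step order is fixed upfront."""
    for step in steps:  # e.g. [lookup_linkedin, score_lead, write_to_crm]
        lead = step(lead)
    return lead

def run_agent(goal, lead, tools, plan_next_step, max_steps=10):
    """Goal + tools: the 'model' (plan_next_step) chooses the route on the fly."""
    for _ in range(max_steps):
        tool_name = plan_next_step(goal, lead, tools)  # stand-in for an LLM call
        if tool_name is None:  # the model decides the goal is met
            break
        lead = tools[tool_name](lead)
    return lead
```

Same inputs, same outputs; the only difference is whether the human or the model chose the sequence of calls, which is exactly the train-vs-car spectrum above.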
Quick chooser for tools:
Non-technical teams, fast wins → Relay.app (workflows with AI steps and Agents mixed in).
Technical teams, custom logic → n8n (API-level control, steeper learning).
Agentic delegation and multi-agent orchestration → Autohive.
Deep enterprise retrieval and assistants-in-flow → Glean (workflows with emerging agentic actions).
Google-native shops exploring agents inside Workspace → AgentSpace (fast-evolving, ecosystem-first).
A 4‑question filter to choose the right modality for your work
Think of this as a traffic light for each task. Answer these before you build.
1. Ambiguity: Is the path to the outcome predictable?
Low ambiguity → workflow.
High ambiguity → agentic.
2. Risk: What happens if it’s wrong?
High risk → more prescription + human-in-loop. Workflows might be more suitable.
Low risk → more autonomy. More agentic.
3. Volume and variance: How often, and how different each time?
High volume + low variance → workflow first.
High variance → agentic with guardrails.
4. Connectivity: Do you have the data/tools/permissions?
If not, fix that first. Agents without the right tools are just slow workflows.
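One way to make the filter concrete is a small decision helper. The labels and ordering are my own reading of the four questions (I fold volume into the variance check for brevity), not a formal rubric:

```python
def choose_modality(ambiguity, risk, variance, connected):
    """Traffic-light filter for a single task.
    ambiguity/risk/variance are 'low' or 'high';
    connected is a bool for data/tool/permission access."""
    if not connected:
        # Agents without the right tools are just slow workflows.
        return "fix connectivity first"
    if risk == "high":
        return "workflow + human-in-the-loop"
    if ambiguity == "low" and variance == "low":
        return "workflow"
    return "agentic with guardrails"
```

Run your weekly tasks through something like this before you build anything; most of them land on "workflow" far more often than the hype suggests.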
Examples in Action
To make this real, map one task you run weekly into the modality that fits best.
Agent-suited (high ambiguity, variable paths). Open-ended research example:
Question: “What’s a good churn rate for SaaS?”
You might need one Google query, or 25 searches across Google, Reddit, YouTube, LinkedIn.
Tools like Perplexity’s Deep Research demonstrate the pattern: goal + tools, autonomous exploration, packaged answers with citations.
Workflow-suited (repeatable, auditable steps). Meeting follow-up example:
After meeting ends, retrieve transcript
Decide if follow-up is needed
Extract action items
Draft email
Human approve
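The meeting follow-up flow above is a classic fixed pipeline. Sketched as code, with the AI calls stubbed out as simple placeholders (the function names are mine, not any vendor's API):

```python
def meeting_followup(transcript, classify, extract, draft, approve):
    """Workflow with scoped AI steps and a human approval gate at the end.
    classify/extract/draft stand in for LLM calls; approve is a human."""
    if not classify(transcript):   # decide if follow-up is needed
        return None
    actions = extract(transcript)  # pull out action items
    email = draft(actions)         # write the follow-up email
    return email if approve(email) else None

# Toy stand-ins, just to show the shape of the pipeline.
email = meeting_followup(
    "Alice to send pricing by Friday.",
    classify=lambda t: "to" in t,
    extract=lambda t: [t],
    draft=lambda acts: "Follow-up: " + "; ".join(acts),
    approve=lambda e: True,        # human clicks 'approve'
)
```

Note the shape: AI judgement is confined to three scoped steps, and nothing leaves the building without a human decision at the end.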
Building confidence: two safety levers
If you’re uneasy handing work to AI, that’s healthy. Confidence comes from design, not optimism.
1. Define more of the flow upfront
Break tasks into discrete steps, inputs/outputs, decision points.
Reserve AI judgement for scoped steps (classification, extraction, drafting).
2. Keep a human in the loop at key moments
Require review/approval before high-impact actions (customer emails, CRM writes, finance changes).
Start with “drafts for review”, then graduate to “auto-run below risk thresholds”.
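Those two levers combine naturally into a single gate. A minimal sketch, assuming the risk score comes from your own rules or a classifier:

```python
def dispatch(action, risk_score, threshold=0.5):
    """Below the threshold the action auto-runs; above it the AI
    only produces a draft and a human decides."""
    if risk_score < threshold:
        return ("auto_run", action)        # e.g. labelling an email
    return ("draft_for_review", action)    # e.g. a customer email or CRM write
```

Graduating from "drafts for review" to "auto-run" is then just raising the threshold as your accuracy metrics earn it.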
As models improve (e.g., GPT‑5 significantly reduced hallucinations in many doc-heavy use cases for me), you can relax both: slightly less prescription, slightly fewer approval gates.
Choosing tools through the modalities lens
Don’t pick tools by logo familiarity. Pick them by the job they need to do in your stack.
Chatbots: pick for reasoning quality, context handling, and your data boundaries (privacy, redaction, retention).
Copilots: pick for ergonomics in your primary artefacts (docs, code, slides), inline control, versioning, and recoverability.
Agents/workflows: evaluate on adoption, adaptability, and depth: Can non-technical users build and maintain it in under two hours? Will it still be relevant in 12 months (short lock-ins, bring-your-own-model options)? Does it connect deeply into your stack (read + write + meaningful actions)?
NZ relevance
For NZ teams (often small, multi-hat crews on tight budgets), the path is clear (I share more detail here):
Master chatbots and conversational AI tools first: not just chatting away, but legitimately building Custom GPTs/Gems, leveraging Projects, and using custom connectors. Lean on copilots more and more within your existing tools too.
Begin building automations and workflows to remove 80% of the drudgery across repetitive processes.
Then layer agents in where ambiguity and payoff justify autonomy (research, outbound, enrichment).
That’s how Tiny Teams get big leverage without scaling headcount. That kind of outsized gain is the edge to build through organisational redesign in the age of AI.
Parting thoughts
Teams that stay stuck in chat and copilot land will feel productive until competitors staff their first AI teammates. That’s the real gap opening now between those that build agents and workflows, and those that sit playing around with ChatGPT.
Written by Mike ✌

Passionate about all things AI, emerging tech and start-ups, Mike is the Founder of The AI Corner.
