Two scenarios for 2028 landed on the same weekend.

The same starting conditions, same technology shock, opposite outcomes. Both are worth reading in full.

Citrini's scenario:

  • unemployment hits 10.2%

  • the S&P drops 38% from its highs

  • white-collar workers flood the gig economy, and

  • the $13 trillion mortgage market starts cracking in San Francisco, Seattle, and Austin.

The mechanism is a self-reinforcing loop with no natural brake - AI capability improves, companies cut headcount, the savings fund more AI, capability improves again. Eventually "Ghost GDP" shows up: output that appears in the national accounts but never circulates through the real economy.

Bloch's scenario:

  • unemployment peaks at 5.8% then declines

  • real median household purchasing power rises 18% in three years, and

  • 7.2 million new businesses form in a single year.

Both pieces are rigorous, internally consistent, and start from the same premise: agentic coding tools take a step-function jump in late 2025, a competent developer can replicate a mid-market SaaS product in weeks, and the CIO reviewing a $500k renewal starts asking "what if we just built this ourselves?"

The divergence comes down to one question: what do displaced workers do next? Citrini assumes they stay displaced. Bloch assumes they adapt. Everything else flows from there.

Where Bloch lands that Citrini misses

Bloch makes four points that the bear case either ignores or gets wrong.

  • The bears confused the repricing of one sector with the collapse of the economy. Software margins compressed, enterprise spending got renegotiated, and the long tail of SaaS got repriced. The bears tracked the dollars leaving software budgets but didn't follow where those dollars went. Bloch's procurement manager story captures it: a Fortune 500 division negotiated 30% off a SaaS contract, then hired three people to enter a market they'd been eyeing for two years. The dollars didn't disappear. They moved.

  • Software spending is an input, not an output. This is the point most commentary misses entirely. Software is a cost that businesses pay in order to generate revenue - when the cost of that input drops, the business has more resources to deploy toward expansion, R&D, new hires, capex in new markets. That's how productivity growth has always translated into economic growth. Citrini models the input cost dropping and stops there. Bloch follows the money to the output.

  • When every purchase is optimised, people buy more, not less. Citrini models agent-led commerce as the destruction of the intermediation layer. Bloch models it as a transfer from rent-extractors to consumers. When Mastercard reported agent-led price optimisation compressing per-transaction value, the market panicked. What actually happened: total transaction volume accelerated. People bought more things, at better prices, through more efficient channels.

  • The same tools that eliminated roles made it dramatically cheaper to start things. The cost of launching a business - software, legal, accounting, marketing, design - fell 70-80% in eighteen months. Bloch's scenario shows 7.2 million new business applications in 2027. The tools that caused the displacement also compressed the creation cycle.

Why this deflation is different from every one before it

Those four points explain where the money goes. The deeper question is why the market can't see it coming.

Every prior technology deflation compressed the cost of a physical input: steel, semiconductors, fuel, transistors. Cars were expensive at first because manufacturing was inefficient, then got cheaper as production scaled. The pattern was always the same: physical input costs dropped, prices followed, affordability unlocked new demand.

This is the first time the cost of the intelligence input itself is collapsing.

For the entire history of services, human thinking was the bottleneck. Legal advice was expensive because lawyers were expensive. Consulting was expensive because consultants were expensive. The cost of every service was anchored to the cost of the human intelligence required to deliver it.

That mental model is being reshaped, and the market is struggling to update its view because the distinction between technology-driven deflation and demand-driven deflation is one most people have never had to make. When prices fall because nobody is buying, it's a death spiral. When prices fall because the cost of production collapsed, it's a 'living standards boom'.

Citrini models the first. Bloch models the second. The historical pattern - cars, televisions, air travel, computing, mobile phones - favours Bloch every single time.

But Citrini makes a point worth noting: every previous cycle still needed humans to perform the new jobs, and AI improves at the very tasks humans would redeploy to. That's genuinely new, which is exactly why the executive response matters more than the macro debate.

What executives actually need to hear

Bloch makes a distinction most people miss because they think in absolutes: white-collar incomes are being transitionally disrupted, not structurally impaired.

"AI will take jobs" reads as permanent, total, and irreversible. Reality is messier. Anyone watching closely knows the mass layoffs of the last two years were driven by financial exuberance: executives over-hired during the cheap-capital era, then corrected when rates rose. AI became a convenient explanation for decisions already in motion, and the relentless pace of change now has executives questioning where, when, and whether to re-hire at all.

The human response to disruption is resourceful, not passive. Many of us have seen someone close to us lose a job in the last 3-5 years, and one of the first reactions is always "I'm going to start something". Whether they succeed is secondary; the instinct is what matters. People don't stand idly by, even in well-resourced welfare states. They build.

The businesses that end up in Bloch's scenario instead of Citrini's treat AI savings as a reinvestment engine, not a cost line. Bottom-up savings create margin, that margin funds top-down strategic bets, those bets create growth, and growth funds further investment. The flywheel compounds.
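The flywheel above is easy to state and easy to underestimate. A minimal sketch makes the compounding concrete - note that every number here (savings rate, reinvestment share, return per reinvested dollar) is a hypothetical chosen for illustration, not a figure from either scenario:

```python
# Hypothetical illustration of the reinvestment flywheel: bottom-up AI
# savings create margin, that margin funds growth bets, growth compounds.
# All parameters are invented for demonstration purposes.

def flywheel(revenue: float, ai_savings_rate: float, reinvest_share: float,
             growth_per_dollar: float, years: int) -> list[float]:
    """Compound revenue when a share of AI savings is reinvested in growth."""
    trajectory = [revenue]
    for _ in range(years):
        savings = revenue * ai_savings_rate        # bottom-up efficiency gains
        reinvested = savings * reinvest_share      # margin funding strategic bets
        revenue += reinvested * growth_per_dollar  # bets converting to growth
        trajectory.append(revenue)
    return trajectory

# Reinvesting 100% of a 5% annual saving vs banking it all as margin.
reinvest = flywheel(100.0, 0.05, 1.0, 1.5, 5)
bank = flywheel(100.0, 0.05, 0.0, 1.5, 5)
print(f"reinvested: {reinvest[-1]:.1f}, banked: {bank[-1]:.1f}")
```

Under these toy assumptions the reinvesting business ends five years roughly 40% larger, while the one banking the margin is exactly where it started. The gap widens every year - which is the sense in which the flywheel compounds.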

The problem is that most businesses aren't even getting to the first turn. Their governance model caps AI at email assistance: Copilot drafts messages faster, ChatGPT summarises meeting notes. That becomes their ceiling, because leadership doesn't understand what the frontier of AI can actually deliver.

Meanwhile, the people inside these organisations have no idea they can now build tools, automate workflows, and create capabilities that didn't exist last quarter. Not prompt an assistant. Build software.

That gap exists because of the governance ceiling companies place on their own people: deploying only Copilot or ChatGPT rather than giving them the tools to reimagine how workflows are executed, and leaving that capability to sit solely with IT.

The goal isn't to open Pandora's box and let anyone in the business build anything, with any data, for any task. It's experimentation: when people start building their own tools based on what the technology enables, their minds open up to what AI can do for their workflow. Soon they see themselves as guardians of the technology, advocates for change, and innovators looking for new solutions to old problems.

Executives need to foster 1) exposure to this, 2) a safe sandbox to build and test, and 3) a clear path to production when something works.

Exposure looks like giving teams access to the right tools, plus a simple mandate: build one small thing that saves you an hour a week. A safe sandbox means tight guardrails: synthetic or low-risk data, approved connectors, logging, human-in-the-loop review, and a kill switch. A path to production means IT and security aren't gatekeepers but enablers, with a lightweight intake, standard patterns, and a "yes, if" mindset.

If this doesn't happen, innovation doesn't materialise. Shadow AI appears, along with brittle hacks and a workforce that never learns what's possible. Done well, leaders get compounding capability: people shipping internal tools - a PO extractor, a meeting-to-brief generator, an ops exception detector - in days, not months.

The governance ceiling is turning Bloch's scenario into Citrini's, one company at a time.

Three moves for executives this quarter

  1. Stand up a frontier team. Not a committee. A small, cross-functional group (3-5 people) with permission to experiment beyond the Copilot ceiling. Give them access to Claude Code, Cursor, Replit Agent - the tools that let non-developers build internal software. Measure what they ship in 90 days, not what they plan.

  2. Follow the savings. Every AI-driven efficiency gain creates a decision: bank the margin or reinvest it. Track where the savings are going, department by department. If 100% flows to the bottom line, the flywheel is stalled. The companies in Bloch's scenario reinvested. The companies in Citrini's didn't.

  3. Kill the email-assistant ceiling. The current governance model at most enterprises reduces AI to communication tools. Expand the mandate. Let teams build workflows, automate reporting, create customer-facing tools. The risk of over-governance is higher than the risk of experimentation right now, because the cost of standing still compounds quarterly.

Both scenarios agree on one thing: the intelligence premium is unwinding.

The only question is whether the organisation captures the upside or absorbs the downside. Not a technology decision. A leadership decision.

Passionate about all things AI, emerging tech and start-ups, Mike is the Founder of The AI Corner.
