
The AI bubble hits the front page
Sam Altman has said the quiet part out loud: we are in an AI bubble. Investors are overexcited, and some will lose extraordinary sums, yet the long-term opportunity remains larger than the froth. That tension (between immediate excess and enduring potential) is what defines this moment.
A bubble, in plain English, is when prices and promises run well ahead of real cash flows. Storylines carry more weight than numbers, and easy money mixed with fear of missing out pulls more buyers into the race. Supply, costs, and limits eventually catch up, and when reality lands, prices fall quickly. The technology itself usually survives; what collapses are the business models with no underlying economics.
The headlines are everywhere because the spending has become impossible to ignore. Goldman Sachs' March report framed the capex wave from Big Tech:
Microsoft has poured billions into OpenAI
Google has reorganised its research groups around AI
Amazon is building out vast new compute estates
Meta is pushing AI into every consumer surface it controls
The bills keep rising while profits lag. Training remains expensive, scaling is proving challenging, and the public is starting to show fatigue as low-value AI content floods every channel.
DeepSeek's claims of far lower model-building costs did not end the AI race, but they changed the story: cost discipline can beat brute force, and that shakes moats built solely on Big Tech's scale thesis.
Comparisons with the late-1990s dot-com crash are hard to miss, according to those who witnessed it. Back then, stories moved faster than revenues; when the crash came, the internet survived but businesses with no viable economics did not.
We can expect the same pattern here. Hype and durable value can coexist because the capex wave and the learning curve move at different speeds, regulation will tighten, and projects with weak data or no process change will be cut. Capital will migrate toward firms with sound unit economics, rights, and distribution.
Capital, costs, constraints
The AI economy turns on three forces: capital, costs, and physical limits. Capital is abundant, so new projects get funded quickly. Costs remain high, both for training large models and for serving every response in production. Physical limits such as power, land, fibre, and cooling expand slowly, creating bottlenecks. Together these forces drive rapid growth in infrastructure, but profits lag behind.
Driver 1: Capex ahead of revenue. AI needs giant data centres full of servers, plus power lines and cooling systems to run them. These cost billions and must be paid for upfront, long before revenue arrives. Smart operators reduce the risk by signing energy deals early, building sites in stages, and launching smaller services that start earning while the bigger projects are still under construction.
Driver 2: Training and serving costs. Training an AI is not a one-time job. Companies rerun models many times to clean data, test safety, and refine accuracy. Even after training, the bills keep coming. Every single answer the AI gives, from a code snippet to a customer reply, burns compute and costs money. The only way margins survive is if those costs per answer keep falling, using tricks like reusing common results, pulling answers from a database instead of recomputing them, or shrinking the models. If the cost per answer stays high, growth becomes a liability.
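The caching trick above is easy to see in miniature. Here is a minimal sketch in Python; the per-answer prices and the repetition rate are hypothetical illustration values, not real vendor figures:

```python
# Sketch: why reusing cached answers drives down cost per answer.
# All prices below are hypothetical, chosen only to illustrate the dynamic.
COST_PER_FRESH_ANSWER = 0.010    # compute cost when the model actually runs
COST_PER_CACHED_ANSWER = 0.0001  # cost of a lookup instead of a recompute

cache = {}

def answer(question: str) -> tuple[str, float]:
    """Return (answer, cost). Reuse a stored result when we have one."""
    if question in cache:
        return cache[question], COST_PER_CACHED_ANSWER
    result = f"answer to: {question}"  # stand-in for a real model call
    cache[question] = result
    return result, COST_PER_FRESH_ANSWER

# 100 requests, but only 10 distinct questions (high repetition).
questions = [f"q{i % 10}" for i in range(100)]
total = sum(answer(q)[1] for q in questions)
print(f"total cost: ${total:.4f}")  # far below the $1.00 of 100 fresh answers
```

With 90 of the 100 requests served from cache, the bill is roughly a tenth of recomputing everything; if every answer had to be freshly generated, growth in traffic would mean growth in cost at the same rate.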
Driver 3: Physical bottlenecks. AI doesn’t just live “in the cloud”, it eats huge amounts of electricity, water, and cooling. Expanding those supplies can take years. That makes location critical. Teams that secure long-term power and find ways to recycle heat keep costs under control. Those that don’t get stuck with delays, higher bills, and land shortages. The best sites are already scarce, and they will decide who can grow fastest.
Why hype and reality can coexist, and why AI rhymes with dot com
Hype and reality often move on different clocks. Markets assign value to what they expect a technology will become, while operations deliver results only once it is embedded into real workflows. That mismatch explains why prices can surge years before profits arrive, and why the gap can persist without the technology itself being in doubt.
The late 1990s dot-com boom is the clearest example. I wasn't following it closely at the time, but the graveyard of logos (Pets.com, Webvan, eToys) is now shorthand for what happens when stories run far ahead of economics. Capital chased anything with ".com" in the name, valuations were set on eyeballs rather than cash flows, and many firms had no path to profit. When rates rose and patience ran out, the Nasdaq lost almost 80 percent of its value and hundreds of companies disappeared. The internet did not die, only the models with weak economics did. Amazon built logistics and cash flow. Google turned search intent into an ad engine.
AI rhymes with that history but is not identical. The hype is loud, the spending enormous, and training and serving costs remain high. Leadership is concentrated and there are pockets of speculation. But the differences matter. Big Tech already generates real profits and can fund long buildouts. Private capital absorbs shocks before public markets feel them. Useful cases exist today in automation, analytics, and content workflows. Efficiency plays like DeepSeek show that engineering can erode moats built on brute scale.
If an AI bubble bursts, it will not kill the technology. It will clear out weak unit economics. Durable cash flows and defensible systems will survive. Story stocks will not.
So what? Markets are cyclical and this one is huge
Every market moves in cycles: booms, corrections, recoveries. AI is following the same pattern, only at larger scale.
The capital commitments are heavier, the narratives are louder, and a handful of dominant names pull most of the attention. That mix produces bigger bubbles, and when they pop, the technology does not vanish. Instead, inflated stories get repriced and weak business models suffocate.
The losers will be familiar. Speculative apps will go first, followed by unprofitable startups chasing vanity daily-active-user metrics, vendors charging per token without proof of value, and enterprises running vanity pilots with no measurable return. Infrastructure bets without foundations will follow: frontier labs with no distribution, speculative data centres without secured power or anchor tenants, and GPU resellers left with excess stock when demand normalises.
The winners, or at least survivors, are clearer. Infrastructure and middleware will hold steady: the “picks and shovels” (chips, memory, optics, power, cooling), efficient model providers that cut serving costs, mandatory layers for retrieval, governance, and evaluation, and integrators who can prove outcomes with audited results. Structural moats will matter most: energy players with secured supply and heat reuse, platforms built around data rights and compliance as regulation tightens, and cash-rich incumbents who keep investing through the downturn while weaker firms disappear.
New Zealand wins by proving use, not building scale
The global cycle of winners and losers will not hit New Zealand in the same way. We are not running frontier labs or building hyperscale data centres, so the direct impact of those bets going wrong (or right) will mostly land offshore. That is a strength. It keeps us out of the capex burn and the speculative overreach that will take out weaker players.
But it does not mean we are insulated. Our exposure is further downstream: we will feel it in the price of cloud services, in the stability of global partners, and in the skills we rely on. And because the giants set the pace, we inherit both their costs and their advances. When their bills rise, ours rise too. When they cut costs through efficiency, we benefit as well.
The opportunity for New Zealand lies in adoption. Microsoft and Mandala estimate that by 2035, AI applications and datacentre infrastructure could add NZ$3.4 billion in value to our economy, with about NZ$2.1 billion of that from applications in healthtech, agritech, and fintech. That is where our strengths already lie, and where we can generate results faster than trying to compete head-to-head with Big Tech.
That means playing it smart when engaging with the AI ecosystem. The principles below set out a clear playbook: how to select the right use cases, how to measure progress, how to structure vendor relationships, and what pitfalls to avoid. New Zealand will not win this cycle by outspending giants, but by proving where AI creates real value, and exporting those lessons to the world.
The New Zealand AI Playbook in 5 Steps
1. Start small, move fast
Run several experiments in parallel, kill the weak ones early, and double resources on the shoots that grow.
Pick three workflows where minutes saved equal dollars saved. Target high-volume, repetitive tasks with measurable error rates and named owners.
Run aggressive 90-day proofs: baseline in weeks 1–2, deploy a scrappy version by week 4, nurture the winners through weeks 5–8, and run a live production test with real users in weeks 9–12.
Set hard stop rules: shut it down if costs don't fall meaningfully or quality fails on your benchmark tests.
2. Build light, scale later
Keep the stack simple. Retrieval on top of source systems, a handful of tools for high-value actions, and small models first.
Log everything, and add a review queue for edge cases.
Scale or distil only when volumes demand it, and keep humans in the loop for risky workflows until you have three clean months of production performance.
3. Measure hard outcomes
Two metrics matter above all: cost per task and gross margin impact.
Enforce budget discipline with weekly caps, per-user limits, and early alerts.
Track quality through accuracy on benchmark tests, error rates, and user acceptance. Publish results weekly.
Keep ROI simple: labour saved plus revenue lift minus serving and licence costs. If the line is negative, stop.
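The ROI line above is simple enough to write down directly. A sketch with hypothetical pilot numbers (NZD per month, for illustration only):

```python
def monthly_roi(labour_saved: float, revenue_lift: float,
                serving_costs: float, licence_costs: float) -> float:
    """ROI as framed above: labour saved plus revenue lift,
    minus serving and licence costs. Negative means stop."""
    return labour_saved + revenue_lift - serving_costs - licence_costs

# Hypothetical pilot: illustrative figures, not a benchmark.
roi = monthly_roi(labour_saved=12_000, revenue_lift=3_000,
                  serving_costs=4_500, licence_costs=2_000)
print(roi)  # 8500.0 -> positive, keep going
```

If the same pilot's serving costs crept up to NZ$16,000 a month, the line would go negative and the stop rule applies.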
4. Buy smart
Procure on outcomes, not tokens. Insist on price per resolved task, with service credits for misses.
Keep ownership of prompts, logs, and outputs, and ban model training without explicit consent.
Bake exit ramps into every deal: shorter-than-usual termination notice, fast data export, and a clear switch path.
Require vendors to prove stability through benchmarking, drift reporting, and weekly test runs.
5. Back the AI Gardeners
Find the employees already hacking value with AI and give them resources to scale.
Redeploy saved hours into backlog tasks with named owners. Train for workflows, not theory: 10-minute guides, runbooks, short embedded videos.
Reward teams that move metrics, not those who polish demos.
The pitfalls are just as clear. Paying for hype in the form of expensive consultants, vanity licences, or pilots without follow-through wastes money and slows adoption. Chasing scale or frontier science with no competitive advantage is a dead end.
Written by Mike ✌

Passionate about all things AI, emerging tech and start-ups, Mike is the Founder of The AI Corner.
Subscribe to The AI Corner
