
Matt Shumer's "Something Big Is Happening" hit 80 million views this week.
Shumer is the CEO of HyperWrite and OthersideAI, and has been building AI products since before ChatGPT existed. He's not a commentator; this guy builds. The essay is a catalyst, and for those working at the frontier, there is a lot to thank him for. It has done the heavy lifting of getting people who had been ignoring the shift to finally take notice.
His core message: the disruption coming for knowledge workers is immense, and preparing means moving well beyond the free tools. Not just using ChatGPT or Copilot to draft an email, but paying for frontier AI tools, building adequate infrastructure, and learning to delegate real work to them. That message could not be more important.
At the same time, the essay implies a lot through what it leaves unsaid, and without more specificity, the message gets distorted. Shumer himself acknowledged as much on CNBC days after the essay went viral: "If I had known how viral this was going to go, I would have thought about certain parts and rewritten some of the parts for sure".
I sat on this for a few days before deciding to join the discourse. My intention here is to add the missing specificity: clarity where the essay leaves gaps, supported by two writers who published commentary on Shumer's essay last week.
Essay recap
For those who haven't read it (link to Shumer's essay), a brief summary follows. For those who have, enjoy the recap. The essay opens with Shumer's own experience building software with AI:
"I am no longer needed for the actual technical work of my job. I describe what I want built, in plain English, and it just... appears. Not a rough draft I need to fix. The finished thing."
He goes further, claiming the latest models demonstrate something most people assumed was years away:
"It had something that felt, for the first time, like judgement. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have."
Anecdotally, I'm not sure I agree with the judgement and taste claim, at least not yet. With code the bar is more binary: does it work or doesn't it (an over-simplification, of course). For knowledge work like writing, strategy, and consulting, I'm still doing the heavy lifting on judgement, even with fine-tuned systems and engineering support.
From there, Shumer compares the moment to February 2020, right before COVID: an enormous disruption most people can sense but haven't experienced directly. On that front, he's right, and the specifics are worth unpacking.
The big thing Shumer gets right
Shumer is right about the big thing: AI is crossing a capability threshold for knowledge work. Most people haven't caught up to what that means (spoiler: Microsoft Copilot, Google Gemini and free ChatGPT don't offer these capabilities out of the box, if at all).
The real shift isn't smarter chat. It's autonomous execution: delegating entire tasks to AI and having finished work come back. Not "help me write this email" but "analyse these 200 support tickets, identify the three most common complaints, and draft a recommendation for the product team".
The tools where these capabilities live are ones the general workforce has never heard of, let alone used: Claude, Cursor, Windsurf, Devin, to name a few. The pace data backs this up:
"About a year ago, the answer was roughly ten minutes. Then it was an hour. Then several hours... that number is doubling approximately every seven months, with recent data suggesting it may be accelerating to as fast as every four months."
I can speak to this directly. I'm not a developer. I'm a semi-technical knowledge worker who consults with businesses on AI adoption, and my complete workflow has been rebuilt three times in four months as new capabilities arrived. The learning curve is steep and genuinely exciting, and Shumer is right that the sooner people start climbing it, the better.
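To make that doubling rate concrete, here's a rough back-of-envelope sketch. The one-hour starting point and the seven-month doubling period are illustrative assumptions, not Shumer's exact figures:

```python
# Back-of-envelope projection of how task length compounds under a fixed
# doubling period. Starting point and doubling period are assumptions.
start_hours = 1.0       # assumed length of tasks AI can complete autonomously today
doubling_months = 7     # doubling period quoted in the essay

for months in range(0, 36, 7):
    task_hours = start_hours * 2 ** (months / doubling_months)
    print(f"Month {months:2d}: ~{task_hours:5.1f} hours of autonomous work per task")
```

On those assumptions, tasks grow from about an hour to roughly a working week of autonomous effort within three years, which is the scale of shift Shumer is pointing at.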
Developers are the canary, not the target
Software engineering has been hit first, and not by accident. AI labs built coding capability first because building AI requires enormous amounts of code. If AI can write that code, it accelerates the next generation of itself.
But the canary isn't dying. Shumer predicts "there will be far fewer programming roles in a few years", yet the number of job postings for software engineers has actually increased in the twelve months since Claude Code launched. As AI makes development faster and cheaper, latent demand for software absorbs the efficiency gain. This is Jevons Paradox: when a resource becomes more efficient to use, total consumption increases. Net net, the pie grows.
The experience developers are having, of watching AI transform the tasks they do daily, is coming for every other knowledge worker: lawyers, financial analysts, marketers, accountants, consultants, writers, designers, customer service. The tasks within these roles will change dramatically.
The climb is harder than social media implies
Shumer's practical advice is to sign up for paid AI tools ($20 a month) and spend an hour a day experimenting. That's solid advice, and it's the right place to start.
But think of individual AI adoption as something like a ten-rung ladder. That advice gets someone from rung zero to rung one or two: signing up for ChatGPT, uploading a document, rewriting an email. What Shumer describes in his own work (AI building software end-to-end with no human input required) sits at rung eight or nine. The gap between those rungs is enormous, and the essay doesn't convey how much technical work is required to bridge it. I've written more specifically in another article about why businesses should think about AI adoption as climbing AI Everest.
Getting from casual use to autonomous execution requires serious infrastructure: context management systems, progressive disclosure, tools talking to one another, constant fine-tuning of outputs through eval frameworks. The list goes on. It's not a matter of spending more hours with ChatGPT. It's a fundamentally different kind of work.
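To give a flavour of what an "eval framework" means in practice, here's a minimal, hypothetical sketch: a handful of automated checks that AI output must pass before anyone trusts it. The function name and the specific checks are illustrative assumptions, not a particular product or library.

```python
def run_evals(draft: str, required_sections: list[str], max_words: int) -> list[str]:
    """Return a list of failed checks for an AI-drafted document."""
    failures = []
    # Check the draft stays within the agreed length budget.
    if len(draft.split()) > max_words:
        failures.append(f"too long: over {max_words} words")
    # Check every required section actually appears in the output.
    for section in required_sections:
        if section.lower() not in draft.lower():
            failures.append(f"missing section: {section}")
    return failures

# Example: gate an AI-drafted recommendation before it reaches the product team.
draft = "Summary: the top complaints are billing errors. Recommendation: prioritise billing fixes."
print(run_evals(draft, ["Summary", "Recommendation", "Next steps"], max_words=500))
# -> ['missing section: Next steps']
```

Real systems run dozens of checks like this, and writing, tuning, and maintaining them is exactly the kind of ongoing engineering work described above.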
Our team is small, lean, built to move fast, and drinking from the AI firehose every day and night. Even then, the effort has been substantial. It's also just plain hard work. Extrapolate that to a mid-size or enterprise business, and the timeline, cost, and human capital required become genuinely daunting. For some leaders, conceiving of how to even begin is the real barrier, which is where credible experts become essential.
Naturally the tools will become increasingly accessible for the average knowledge worker, but we're still a long way from that point. Most people are still grappling with using ChatGPT as anything more than a better Google.
Tasks are changing. Jobs are not disappearing.
The essay's strongest passage is also its most dangerous:
"AI isn't replacing one specific skill. It's a general substitute for cognitive work. It gets better at everything simultaneously... Whatever you retrain for, it's improving at that too."
Nobody disputes that AI is transforming the computational layer of cognitive work: the analytical, pattern-matching, data-processing tasks that make up a portion of every knowledge worker's role. That disruption is accelerating, and Shumer is right to sound the alarm.
But "a general substitute for cognitive work" treats every knowledge worker role as if it's entirely computational. It's not. A financial analyst doesn't just build models and crunch numbers - they build trust with clients, read the room in a pitch, and exercise judgement about which risks to flag and which to absorb. A lawyer doesn't just review contracts - they counsel, negotiate, manage relationships, and make calls that depend on decades of contextual experience.
The computational tasks within those roles will absolutely be transformed by AI. The relational, trust-dependent, judgement-heavy parts of the work won't follow at the same pace, and some may not be disrupted at all, at least not within our working lifetimes.
The distinction between tasks changing and jobs disappearing is critical, and Shumer leaves it out. The implied conclusion most readers will take away from his essay, that particular roles are about to be eliminated, is a different claim to "the computational parts of those roles are being automated", and a highly confronting one for anyone on the receiving end.
Shumer isn't alone in this framing. Anthropic's Dario Amodei warns that AI could eliminate half of entry-level white-collar jobs within one to five years. Microsoft's Mustafa Suleyman expects most white-collar tasks to be automated within 18 months. When industry leaders keep making these claims without specificity, and without distinguishing task evolution from role elimination, the cumulative effect isn't motivation. It's paralysis.
Connor Boyack, writing this week in "AI isn't coming for your future. Fear is", frames this well: a headline saying "AI will replace 50% of jobs" makes the brain imagine 50% of workers sitting idle. It doesn't simultaneously imagine the new roles and industries that will be created, because those don't exist yet. The seen is vivid. The unseen is invisible, but invisibility is not the same as nonexistence.
David Oks, writing on Substack in "Why I'm not worried about AI job loss", provides the economic framework. Labour substitution is about comparative advantage, not absolute advantage: not whether AI can do specific tasks, but whether humans working with AI produce less than AI alone. In software engineering, the human-AI combination is still superior. That changes the conversation from replacement to evolution.
The pressure isn't equal
Shumer writes that the experience tech workers have had "is the experience everyone else is about to have. Law, finance, medicine, accounting, consulting, writing, design, analysis, customer service. Not in ten years. The people building these systems say one to five years."
He's right that the disruption is coming for all of those industries. The question worth asking is whether they'll all experience it at the same pace, with the same intensity, and under the same competitive pressure.
Developers are in the AI labs' own domain. They face intense competitive pressure from peers who've already adopted these tools, and the tools themselves are built for their work first. That combination of pressure and accessibility is why software engineering is moving fastest.
For financial analysts, lawyers, marketers, and the rest of the knowledge economy, that dynamic doesn't exist. These industries aren't the labs' core capability set. The tools reaching them will be powerful, but the accessibility, the specificity to their workflows, and the competitive pressure to adopt are all materially lower. A senior partner at a financial services firm put it plainly: the team understands AI is coming, but they're analysts, trained in analysis, busy doing analysis, enjoying analysis. Learning to manage AI agents feels like a second job on top of the first.
Some industries and geographies will be disrupted by startups and newcomers sooner rather than later; that's inevitable. But the timeline will vary dramatically by industry, by geography, and by the competitive dynamics within each sector. A one-to-five-year window for software engineering doesn't translate directly to a one-to-five-year window for regulated professions, creative industries, or roles where the work is primarily relational rather than computational.
None of this dampens Shumer's core message. It sharpens it: the disruption is coming, the timeline varies more than a single essay can convey, and that variance is worth understanding for anyone trying to plan around it.
The models aren't all that matters
Shumer's essay frames model capability as the binding constraint: as models improve, disruption follows. His pace data is compelling, and the trajectory isn't slowing down. But if model capability were the binding constraint, we'd already be seeing mass displacement.
As Oks points out, GPT-3 has been publicly available for six years and GPT-4 for three, yet even in outsourced customer service, the lowest-hanging fruit on the automation tree, mass layoffs due to AI have not materialised. Companies that went for aggressive headcount reduction quietly began rehiring because it wasn't workable. Klarna is the clearest example: the company froze hiring and said its AI assistant was doing the work of roughly 700 customer service agents. By early 2025, quality had deteriorated enough that it started recruiting humans again. The models weren't the problem. Everything around them was. Raw model intelligence, in other words, is not what's holding back job displacement.
Oks' bottleneck argument is the most compelling counterpoint to any imminent displacement narrative. The real constraints are human: company cultures, tacit knowledge, professional norms, procurement cycles, internal politics, and the sheer inertia of how things have always been done. Businesses don't move at the speed of model releases. They move at the speed of people deciding to do things differently.
I see this in every business I work with. The technology is ready. The people, processes, politics, and data are not. As long as those bottlenecks exist, the human-plus-AI combination will be more productive than AI alone. That's not a temporary state; it's the operating reality for the foreseeable future.
Regulation creates a further buffer. A lawyer recently framed it this way: the moment regulators accept AI as an approved reviewer, the profession contracts overnight. But regulation moves at the speed of politics, not technology. That gap is measured in years, sometimes decades.
The pattern that keeps repeating
Boyack makes the historical case. When ATMs arrived in the 1970s, everyone predicted the end of bank tellers. Between 1985 and 2002, the US went from 60,000 ATMs to 352,000, and teller employment grew from 485,000 to 527,000. Spreadsheet software was supposed to eliminate accountants. The profession expanded.
Every single time, the technology made the work cheaper, demand expanded, and more humans ended up employed than before. Between them, Oks and Boyack named what my gut was telling me: the technology is extraordinary, but the imminent-mass-job-loss narrative a reader without context might take away is not supported by the economics, the history, or the evidence on the ground.
Where I land
Maybe I'm reading too much into the implied takeaways that people further from the AI conversation might latch onto. Anecdotal conversations tell me I'm not. And when an essay hits 80 million views, the message that lands isn't always the message that was intended. Shumer's core call to action, that people should move toward these technologies and build capability now, is exactly right. For many readers, though, the takeaway wasn't motivation. It was fear.
Shumer himself acknowledged on CNBC that he would have rewritten parts had he known the reach. The intent behind everything above isn't to dissuade anyone from his message. It's to build on it with the specificity and context that a viral essay can't carry on its own.
The capability is advancing fast, and it will keep advancing. The human bottlenecks, the uneven competitive pressure, and the historical pattern don't change that trajectory. But they do change the timeline, and Shumer's essay doesn't give sufficient weight to the forces that slow adoption and technology diffusion in practice.
"Don't worry" is still bad advice at an individual level. The risk isn't that AI storms the castle. It's that demand quietly reroutes around it, the work getting done somewhere else by someone who figured out the tooling six months earlier. By the time the impact is visible, the window has closed. That's the real competitive threat, and it doesn't look like disruption from the inside. It looks like a slow quarter, then a missed H2 target. Customer churn for more cost effective rates elsewhere, or "we're taking the service in-house" as tool accessibility improves.
One more thing worth noting: AI doesn't reduce work; it intensifies it. A recent HBR study found exactly this: employees worked faster, took on broader scope, and stretched into more hours. Feeling more productive and feeling less busy are not the same thing.
Shumer gets this exactly right:
"This might be the most important year of your career. Work accordingly... The person who walks into a meeting and says 'I used AI to do this analysis in an hour instead of three days' is going to be the most valuable person in the room."
He also gets this right:
"Teach your kids to be builders and learners, not to optimize for a career path that might not exist by the time they graduate."
Curiosity, adaptability, building capability with these tools early. That advice will age well regardless of how the next few years unfold. I wrote about this previously.
The real risk isn't the technology
The biggest risk right now isn't the technology. It's the fear.
Boyack warns that telling ordinary people an avalanche is coming doesn't end with them subscribing to AI tools. It ends with panic and a massive populist backlash: banning data centre construction, choking off development, and guaranteeing jobs for life at the expense of progress.
If AI can genuinely accelerate medical research, scientific discovery, and material abundance, then choking it off out of fear is the real catastrophe.
The technology is extraordinary. The timeline is longer than Shumer says. The pie will grow. The people who move now will be positioned to grow with it.
Read Oks and Boyack alongside Shumer. Their counterarguments provide the balance needed to hold two ideas at once and develop a well-rounded view of the role AI will play in the working lives of everyone paying attention.

Passionate about all things AI, emerging tech and start-ups, Mike is the Founder of The AI Corner.
