
If you want to know where AI is actually heading, ignore the keynotes. Look at the job boards (thank you to whoever recommended this on X/Twitter).
With Claude's help, I spent time analysing the current hiring patterns across the major US and Chinese AI labs: OpenAI, Anthropic, Google DeepMind, xAI, plus DeepSeek, ByteDance, Baidu, Alibaba, Tencent, and Huawei.
What emerges is not a story about competing chatbots. It is a story about fundamentally different visions for what AI becomes next.
Note: I've ignored Amazon and Meta in this instance. Not because I do or don't believe they'll be contenders moving forward (Amazon is fast becoming a powerhouse in the AI space, with a fleet of more than one million robots deployed globally). It's purely because their job boards had too many pages to analyse (AI didn't help here), and the big three frontier labs plus xAI are of most interest for the capabilities used by 95% of AI users.
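For the curious, the tallying itself is simple once the listings are scraped. Below is a minimal sketch in Python of the kind of keyword bucketing you could use to group roles into infrastructure, sales, research and so on. The job titles, category names, and keywords here are illustrative placeholders, not the actual data or taxonomy from my analysis.

```python
from collections import Counter

# Illustrative only: a handful of job titles standing in for a scraped job board export.
titles = [
    "Data Centre Hardware Operations Technician",
    "Forward Deployed Engineer, Enterprise",
    "Research Scientist, Interpretability",
    "Solutions Architect, EMEA",
    "Robotics Software Engineer",
    "Member of Technical Staff, Pretraining Scaling",
]

# Rough keyword buckets; a real pass would use many more terms (or an LLM classifier).
CATEGORIES = {
    "infrastructure": ["data centre", "network", "hardware", "facilities"],
    "sales_gtm": ["solutions architect", "partner", "customer", "gtm", "account"],
    "research": ["research", "member of technical staff", "scientist"],
    "robotics": ["robotics", "simulation", "field engineer"],
}

def categorise(title: str) -> str:
    """Return the first category whose keywords appear in the title, else 'other'."""
    t = title.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in t for keyword in keywords):
            return category
    return "other"

counts = Counter(categorise(t) for t in titles)
for category, n in counts.most_common():
    print(f"{category}: {n} roles ({n / len(titles):.0%})")
```

In practice the interesting work is in the scraping and in sanity-checking the buckets; the counting itself is a few lines.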
The US Labs: Four Divergent Strategies
OpenAI is betting on atoms. Global scale, including robotics.
With over 450 open positions, OpenAI's hiring tells a clear story: they are becoming an infrastructure company. Data centre hardware operations technicians. Network engineers. Stargate deployment specialists. These are not software roles. They are building physical things at unprecedented scale.
Their Stargate project represents a $500 billion commitment to own the compute layer, but infrastructure is only part of the picture. OpenAI is building out a massive Forward Deployed Engineering team with roles spanning Dublin, London, Munich, Paris, Singapore, Tokyo, and across the United States. We're seeing this in our region too, with their Australian office opening. This is the enterprise deployment army that will embed OpenAI into the world's largest organisations.
The push into robotics is significant. They are hiring robotics software engineers, simulation realism engineers, and field engineers. They are also building custom silicon with a dedicated hardware team including design verification engineers, firmware engineers, and physical design engineers.
People will call OpenAI out for their continuous operating losses. That is startups 101. Amazon, Uber, Facebook, Tesla: almost every major tech story spent years burning cash to buy time, talent and infrastructure before the business model really clicked.
But it is important to understand what OpenAI is burning money on.
Facebook and Uber were primarily attention and marketplace platforms.
OpenAI sits much closer to the Amazon camp: it is spending billions to build infrastructure for the next decade. Models, tooling, and an ecosystem that other products and companies build on top of.
The real question is not "are they losing money?" but "does this infrastructure become the AWS of intelligence, or a commodity API that anyone can swap out?".
The caveat to the above is that Amazon built physical and cloud infrastructure around brutally simple business models (sell goods, rent compute) and controlled its own stack. OpenAI still relies heavily on external compute providers like Microsoft Azure for training and serving its models, and is betting on an AI ecosystem that looks incredibly powerful today but still sits in an unsettled market. Chinese players and open source models are proving there are cheaper, different architectures that might suit some use cases better.
If OpenAI's stack becomes the default intelligence layer, today's losses will look cheap. If the market fragments around open source and alternative providers, they start to look a lot more like subsidised experimentation.
Anthropic is playing the trust game. Building for business.
Anthropic has around 300 open positions, but the distribution tells you where they're focused (which is well known in the ecosystem): nearly 30% of all hiring is in sales and B2B-centric roles. They want a Global Partner Lead for Accenture to architect their most critical systems integrator partnership. They are hiring Customer Activation Managers to guide enterprises through their first 90 days of AI transformation. They need Solutions Architects across San Francisco, New York, London, Tokyo, and Seoul.
The geographic expansion is aggressive. Sydney now has a founding GTM recruiter for Australia and New Zealand. Dublin, Paris, Singapore, Bangalore, and Seoul all have dedicated roles.
Meanwhile, their interpretability team continues to grow with research engineers and research scientists working to reverse engineer how neural networks actually work. When a regulated industry like finance, healthcare, or government needs to explain why an AI made a decision, Anthropic is positioning Claude as the answer.
Their bet is that enterprises will pay a premium for AI they can trust and explain.
This is why I see Anthropic as the second big winner of 2026. While others chase scale, Anthropic is chasing something very difficult to replicate: institutional trust. When boards ask "can we explain this to regulators?" and "what happens when it goes wrong?", Anthropic has built their entire company around having good answers. The sales heavy hiring is not a sign of desperation. It is a sign they are ready to convert years of safety research into commercial relationships that will be very hard to displace once embedded.
xAI is more serious than headlines suggest. Vertical focus.
Forget the clickbait about anime companions. With around 270 open positions, xAI reveals a company pursuing multiple strategic bets simultaneously.
Their Foundation Model team represents roughly 17% of all hiring. These are serious research roles: Member of Technical Staff for Reasoning, Pretraining Scaling, Multimodal, and RL Infrastructure. They have eliminated the distinction between "researcher" and "engineer" in favour of unified technical staff, but the work itself is frontier research.
The AI Tutor programme is industrial scale, with over 40 specialised roles spanning quantitative finance, corporate banking, medicine, chemical engineering, physics, and pure mathematics. This is not casual crowdsourcing of random talent; it looks more like structured RLHF at a level that rivals any lab.
What surprised me most: xAI is building a government practice. They have dedicated roles for Government Engineering, a Mission Manager for Government, and infrastructure engineers specifically for US Government work. Combined with their X Money payments team and enterprise engineering roles, xAI is pursuing consumer, enterprise, and government simultaneously.
The Memphis data centre operation (Colossus) accounts for 25 dedicated roles across construction, electrical engineering, mechanical systems, and facilities operations. They are building their own compute infrastructure from the ground up.
One word: Elon. Don't bet against him. xAI's Colossus reached 100,000 GPUs in 122 days, versus the typical four years for comparable data centres from rivals like Microsoft or Meta. That's no fluke; it's the pure brute force of motivating a workforce and partners to achieve unreal outcomes. Nvidia's Jensen Huang said it himself, calling the 19-day hardware-to-training ramp-up "superhuman", outpacing industry norms by orders of magnitude.
The question for xAI is focus. They are pursuing consumer, enterprise, government, and infrastructure simultaneously while also building foundation models and training data operations. That is an enormous surface area for a company of their size. Musk has pulled off this kind of multi-front execution before at Tesla and SpaceX. But AI moves faster than rockets, and the competitive set is deeper. xAI has the capital and the talent to matter, but spreading across this many bets means they need to win several races at once.
Google DeepMind is thinking longer term than anyone.
DeepMind has around 100 open positions, making it the smallest of the four US labs by headcount. But the roles they are hiring for reveal ambitions the others are not publicly pursuing.
They have a role titled Research Scientist, Post AGI Research. Read that again. They are hiring someone to think about what happens after artificial general intelligence arrives.
Their CBRN team (Chemical, Biological, Radiological, Nuclear) has multiple research engineer positions. They have a Nuclear Engineer on a two year contract. Combined with Research Scientist roles in Fusion and Medical AI, DeepMind is positioning itself at the intersection of AI and existential risk.
The robotics investment includes hardware engineers in Cambridge, Massachusetts building the physical systems for Gemini Robotics. But notably, DeepMind is not trying to manufacture robots themselves. They are building the brains that will power other companies' bodies through partnerships with Apptronik, Boston Dynamics, and Agility Robotics.
The Senior Psychologist or Sociologist for AI Psychology and Safety role is telling. DeepMind is thinking about how humans will actually interact with embodied AI in the real world.
Google is my big pick for 2026 alongside Anthropic. Not because they will ship the most products (although they have evidently outshipped everyone else this year and will continue to do so in 2026), but because they are building capabilities no one else is even attempting.
When the conversation shifts from "which chatbot is best" to "how do we deploy AI safely in hospitals, power grids, and robotics", DeepMind will have spent years preparing. Google's distribution through Android, Chrome, and Search gives them deployment paths the startups cannot match. Unlike OpenAI, DeepMind does not need to raise another funding round or prove unit economics next quarter. They can play the long game because Alphabet's balance sheet lets them.
China: A Different Kind of Race
The Chinese AI talent market operates on a completely different scale and with different constraints.
ByteDance has over 10,000 job openings, with more than 2,300 focused specifically on AI. Baidu has increased AI hiring by 60% through its AIDU programme. Tencent is adding 28,000 roles over three years, with more than 60% focused on AI. Alibaba's 2026 campus hiring programme is 80% AI related positions. Huawei's Brave New World initiative aims to recruit 10,000 new employees.
These numbers dwarf what US labs are doing. The scale reflects both opportunity and desperation. China faces a talent shortage exceeding five million AI professionals, with demand growing 26% annually.
Chinese labs are heavily recruiting overseas, targeting researchers who obtained PhDs in the US or Europe and might return. DeepSeek's highest offers reach approximately NZ$250,000 to $350,000 annually. The focus areas differ from US labs too. While American companies split attention between consumer products, enterprise, infrastructure, and robotics, Chinese labs emphasise LLM development, AI infrastructure, and reducing dependence on foreign chips.
This is AI development shaped by geopolitical reality. The US export restrictions on advanced chips have forced a strategic pivot toward talent as the scarce resource, not compute.
The section on China in the AI space deserves an article in itself, and much more than the few points I've provided. The barrier to analysis is language. Naturally AI could help with this, but I have no way of verifying the output or figuring out where to start. Excited to see what comes out of China in 2026. I have zero doubts we'll see another DeepSeek moment, and it will be 10x more disruptive.
There are folks far better versed in where the Chinese AI market is at than me.
What This Means
The hiring patterns reveal a fragmented future. B2B markets are not winner-takes-all the way consumer markets are, so there will not be one winner in AI. There will be different winners in different domains.
Infrastructure and compute will likely consolidate around whoever builds the data centres. OpenAI's Stargate project and xAI's Colossus facility represent massive bets that physical infrastructure is the moat.
Enterprise AI is a trust battle. Anthropic's sales heavy hiring and interpretability focus make them the likely choice for regulated industries. Their aggressive geographic expansion suggests they see the next two years as the window to establish this position.
Consumer AI is a distribution battle. xAI's advantage is not purely technical. It is access to X's user base and a willingness to move fast across consumer, enterprise, and government simultaneously.
Robotics and embodied AI is still in the early innings. OpenAI, DeepMind, and xAI are all making serious bets, but manufacturing at scale remains unproven. Watch the simulation engineers and field engineers: those roles suggest who is furthest along the path from prototype to production. Expect Amazon's scale, first-mover advantage, and capabilities to dominate the US market, yet be absolutely dwarfed by the momentum and development happening in China. The race between the US and China in this domain isn't even close.
China, meanwhile, is building a parallel AI ecosystem. The talent war there is not about which product wins. It is about whether China can achieve AI self-sufficiency before the talent gap becomes insurmountable.
For anyone building a business around AI, the implication is that the technology layer is fracturing. Picking a single AI provider is increasingly a strategic choice about which future you are betting on. Each lab is optimising for something different:
Infrastructure ownership
Enterprise trust
Consumer engagement
Embodied intelligence
Scientific moonshots
The job boards just told us where the money is going. Now we get to watch whether they are right.
Written by Mike ✌

Passionate about all things AI, emerging tech and start-ups, Mike is the Founder of The AI Corner.
