Let's talk about the big three AI platforms: ChatGPT (OpenAI), Claude (Anthropic) and Gemini (Google).

Over the past 18 months it has become clear that sticking with one of them isn't a viable strategy for covering all your productivity bases.

The capability differentiation between platforms is widening, not converging. ChatGPT, Claude, and Gemini are each developing defensible advantages in specific domains, and those differences directly impact how actual work gets done. This isn't about brand preference; it's about advantages that drive operational leverage.

Personal productivity advantage in 2025 comes from portfolio diversification, intelligent tool selection, and workflow optimisation. If you believe one platform will eventually do everything well, you're misreading the market dynamics. Google's Gemini might be on that trajectory in the medium term, and as features become hygiene standards the toolsets should commoditise, but the real differentiators will remain integration and trade-offs in model quality. Today there are gaping holes in each of the major AI toolsets that another should pick up for your workflow.

For those skeptical about why they should look at the other tools: it's healthy to test the same capabilities across various platforms for the same task. It develops your AI instinct (one of the most important skills you can build), avoids tool lock-in, and shows you where the frontier is actually moving. Different platforms expose different strengths, and if you aren't comparing them side-by-side as new capabilities are released (and as your own abilities develop), you're blind to the gaps in your own stack.

For clarity, I personally pay for ChatGPT and Claude, and we have a paid plan through work. These aren't the only AI toolsets, but they're the main three I flit between as I update my go-to workhorse stack. Here's a list of the wider tools I dip in and out of when I need them.

I do think my ChatGPT paid subscription is under threat, depending on how the next 3–6 months of releases go. It's becoming harder to justify given the pace of development at Claude and Gemini and the difference in approach: one is doubling down on a brilliant assistant, while the others are building the operating system for how modern work actually gets done.

The Claude Skills Migration: Why I Moved 20 Workflows

Two years ago, ChatGPT handled everything for me. Fast, accessible, and good enough for most tasks.

18 months ago, I started bifurcating: ChatGPT for speed, Claude for depth, accuracy and creativity.

Then Claude Skills came out in October, and I quickly realised the continuous frustrations of Custom GPTs could be resolved by switching to Claude Skills. GPTs got me 90% of the way but always needed rework. Context felt limited. Task chaining was arduously clunky.

The catch is that you max out at 20 Claude Skills. I migrated 20 of my GPTs immediately. Why?

  • Progressive disclosure architecture: Skills load only what's needed when it's needed. You get deep context capacity without sacrificing speed. This solves the core tension in AI workflows: comprehensiveness vs responsiveness.

  • Composable by design: Skills automatically stack together. Claude coordinates which ones to invoke and when, so you're not manually orchestrating workflows; the system does it intelligently. This is a structural advantage rather than a feature improvement. Add in Skills' access to the web, tools and data sources, and they're a true weapon.

  • Deterministic execution layer: Critical operations run as code, not token generation. This matters for anything requiring reliability: data processing, format conversions, API calls. You get both the intelligence of LLMs and the determinism of traditional code.

  • True portability: Same skills work across all conversations. This changes the economics of capability development.
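For a sense of what sits behind these properties, here's a minimal sketch of a skill's SKILL.md, based on Anthropic's published folder-plus-frontmatter format; the skill itself (name, steps, file paths) is a made-up example, not one of my actual 20.

```markdown
---
name: weekly-report-builder
description: Builds the weekly status report from meeting notes. Use when the user asks for a weekly report or status summary.
---

# Weekly Report Builder

1. Ask for (or locate) this week's meeting notes.
2. Extract decisions, blockers, and owners.
3. Render the report using templates/report.md.
4. For data tables, run scripts/format_tables.py rather than generating them token by token.
```

The YAML frontmatter is all Claude loads up front, which is the progressive disclosure point: the body, templates and scripts are only pulled into context when the skill is actually invoked.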

Current allocation:

  • ChatGPT handles 20% (quick reasoning, thought partner on the go, fast validation).

  • Claude Skills owns 80% (everything structured, everything repeating).

Unless OpenAI materially upgrades Custom GPTs, this split will be permanent, and it will only deepen if Anthropic lifts the 20-skill cap (and stops throttling with unnecessarily low usage limits!).

Deep Research: Where Gemini Created Category Separation

This morning I ran the same competitive analysis across three platforms. Same assignment, same scope.

ChatGPT Deep Research:

  • Time: ~23 minutes

  • Sources: 10-15

  • Report Quality: Pretty good; solid narrative

Claude Deep Research:

  • Time: ~11 minutes

  • Sources: 450

  • Report Quality: Really good; well thought through and detailed, with strong synthesis

Gemini 3 Pro Deep Research:

  • Time: ~2 minutes

  • Sources: 150+

  • Report Quality: On par with the others, equal or better

  • Bonus: Google Drive integration, a real-time thinking panel, plus one-click conversion into an infographic, audio, a webpage and more.

Gemini's speed means I can run a deep research query, review the results, think about it, and run another query, multiple times over, before the other platforms have even finished their first run. It is phenomenal.

Sure, Claude reviews many times more sources, which is great and an excellent way to validate. But half the time I just need the data, which I can then reconsider, or tighten my research brief and run again for targeted outputs. Gemini simply gets me to the answers I need faster. ChatGPT's slowness forces me into a one-shot prompt approach, because I don't want to wait 45 minutes for the right information, let alone when it only searches 20 sources.

Result: I completely migrated deep research workflows away from ChatGPT earlier this year, and Gemini's speed updates mean I no longer even run ChatGPT as a third option for comparison.

Browser Stack: Still on Comet, Still Evaluating Atlas

I haven't moved off Perplexity's Comet browser despite OpenAI launching Atlas in October.

The logic is that Comet works really well: a sidecar AI assistant, tab management, voice mode, cross-tab context, and a background Assistant that handles complex async tasks. The workflow is frictionless.

Atlas launched with great features, and the experience has been superior to Comet in some ways and lacking in others. But today's feature set doesn't justify the migration effort.

I'll reassess opportunistically. Right now, if it's not broken, why fix it?

Google AI Studio: Early Stages Of Becoming My Productivity Workspace

This is where my workflow shifted most dramatically.

I never really used OpenAI's Codex. Claude Code has been impressive for building my confidence working in an IDE, and Google AI Studio was my workspace for vibe coding, spinning up micro-apps for the micro-tasks in my day-to-day. With the Gemini 3 developments, AI Studio has become my application generation workbench: the vibe-coding speed, accuracy and seamless deployment have become difficult not to select as the default.

Pre-built composable elements available in Google AI Studio to build apps.

Current production: hovering between 5-15 micro-applications performing very specific tasks for me. Automated spreadsheet formatting. Meeting artifact extraction. Slide deck generation from docs. Image processing with Nano Banana. Data transformation pipelines.

This is why AI Studio is a game changer:

  • "Vibe coding" interface: Describe what you want, get a functional web app. The barrier to building AI-powered tools has collapsed. People will shout about Lovable (and I agree it's good), but Google AI Studio has seamless access to and integration with all of my Google capabilities and models. It's far superior for spinning up a tool that I can easily use and discard soon after.

  • One-click Cloud Run deployment: Immediate production availability. No infrastructure setup, no DevOps complexity.

  • Secure API proxy: Your API key stays protected, and proxies are easy to spin up and manage to contain cost.

  • Zero approval overhead: Build, test, deploy without procurement or IT involvement. This eliminates the traditional gatekeeping that slows capability development.
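To make the secure-proxy point concrete, here's a minimal stdlib-only sketch of the pattern: the browser calls your proxy, and the proxy injects the key from a server-side environment variable before forwarding, so the key never ships to the client. The function names and the fake upstream are mine for illustration, not AI Studio's actual implementation (the `x-goog-api-key` header is the one the Gemini API accepts).

```python
import os

def proxy_generate(client_body: dict, upstream_call) -> dict:
    """Forward a client's generation request upstream, injecting the
    API key server-side so it never ships to the browser."""
    headers = {
        "Content-Type": "application/json",
        # The key lives only in the server's environment.
        "x-goog-api-key": os.environ["GEMINI_API_KEY"],
    }
    upstream_response = upstream_call(headers=headers, body=client_body)
    # Return only what the client needs; nothing sensitive leaks back.
    return {"text": upstream_response.get("text", "")}

# Demo with a fake upstream (no network): echoes the prompt in upper case.
os.environ["GEMINI_API_KEY"] = "server-side-secret"
def fake_upstream(headers, body):
    return {"text": body["prompt"].upper()}

print(proxy_generate({"prompt": "hello"}, fake_upstream))  # → {'text': 'HELLO'}
```

Because cost is attached to the key, metering and rate limits live in this one chokepoint too, which is what makes the proxies "easy to manage to contain cost".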

Google's targeting 1M applications built on AI Studio by year-end. That ambition maps directly to observed reality as the barrier to functional AI tooling has collapsed.

This is the micro-SaaS future: individual contributors building specialised tools without developer resources, budgets, or approval workflows. That changes how quickly organisations can adapt.

Where OpenAI Continues to Underperform

The sad thing is that ChatGPT was my best mate in AI. We've grown distant as its released features have been underwhelming: focused on creating a better assistant, but not significantly improving the workbench in the way Claude and Gemini let me level up productivity.

Three structural disadvantages in my preferred workflow:

  • Data source connectivity: Inconsistent file discovery; the system fails to reliably locate or surface historical information, which is critical for workflows that reference previous work. Reliability seemed to improve when Company Knowledge was released, but it didn't feel like a significant step up in accuracy. Claude and Gemini consistently outperform ChatGPT for me when retrieving relevant information, Claude Skills in particular, with its ability to pull information on-demand as required.

  • Image generation: ChatGPT's image capabilities are junk. Character consistency is non-existent across iterations, text rendering is unreliable, and output quality sits below both Midjourney and Nano Banana. My production image workflow: Nano Banana (Gemini 3) for image editing and finessing, Reve for image generation (Midjourney model).

  • Structured data analysis: Beyond model capability, this is about data preparation discipline. Clean headers, simple tables, no merged cells, consistent typing. With proper structure, Claude and Gemini consistently outperform ChatGPT on complex spreadsheet operations.
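The data-preparation discipline above can be sketched in a few lines. This is a stdlib-only illustration of the kind of cleanup I mean before handing a table to any of the models (the function names are mine, not any platform's API): normalise headers to snake_case and give every cell a consistent type.

```python
import re

def clean_header(h: str) -> str:
    """'  Total Sales ($) ' -> 'total_sales'."""
    h = re.sub(r"[^0-9a-zA-Z]+", "_", h.strip().lower())
    return h.strip("_")

def coerce(value: str):
    """Give every cell a consistent type: int, float, or stripped string."""
    v = value.strip()
    try:
        return int(v)
    except ValueError:
        pass
    try:
        return float(v)
    except ValueError:
        return v

def clean_table(rows):
    """rows[0] is the header row; returns a list of typed dicts."""
    headers = [clean_header(h) for h in rows[0]]
    return [dict(zip(headers, map(coerce, r))) for r in rows[1:]]

raw = [["  Total Sales ($) ", "Region"], ["1200", " North "], ["980.5", "South"]]
print(clean_table(raw))
# → [{'total_sales': 1200, 'region': 'North'}, {'total_sales': 980.5, 'region': 'South'}]
```

Merged cells don't survive this representation at all, which is the point: if your table can't round-trip through a structure this simple, the model will struggle with it too.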

What I'm Executing Over the Next Fortnight

  • Ramping up micro-application production. Nothing specific in mind, but leveraging Google AI Studio more and more to create micro-apps for accelerating through day-to-day tasks.

  • Gemini's RAG-as-a-Service for knowledge management and retrieval: it's a dead-simple way to drop a full retrieval pipeline into any production app without building chunking, storage, or embeddings yourself. You plug your PDFs, docs, and knowledge bases straight into your product with a few lines of code and get grounded, citation-backed answers instantly. Not a NotebookLM, ChatGPT connector, or Copilot-style search wrapper, but a proper retrieval engine built into the API itself.

  • AI-native brand design at scale: instead of fiddling with prompts, you lock your brand voice and visual rules into a reusable skill that produces consistent, on-brand assets every time. Claude Skills already nails near one-shot accuracy and Gemini’s catching up fast, giving you campaign-ready design output in minutes, not a Canva-with-AI gimmick but a real brand-conditioned engine in your workflow.

  • AI-powered creative ops at scale: Airtable AI Plays lets you turn messy creative production into repeatable, automated workflows that generate, version, and adapt assets without manual lift. You lock your brand rules into a play once, then spin out dozens of variations instantly. Not a spreadsheet hack, but a proper creative automation layer that cuts turnaround time from hours to minutes.
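To ground what "RAG-as-a-Service" abstracts away, here's a toy, stdlib-only sketch of the chunk-and-retrieve loop Gemini's API handles for you. Real pipelines use embedding similarity and smarter splitting, so treat this as an illustration of the shape of the work, not the implementation.

```python
def chunk(text: str, size: int = 40) -> list[str]:
    """Split a document into fixed-size word chunks (real pipelines use
    smarter, overlap-aware splitting)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Rank chunks by word overlap with the query (a crude stand-in for
    embedding similarity) and return the top k."""
    q = set(query.lower().split())
    return sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)[:k]

kb = [
    "Our refund policy lasts 30 days from the date of purchase.",
    "Shipping to rural addresses takes two to three weeks.",
    "Support is available Monday to Friday, 9am to 5pm NZT.",
]
print(retrieve("what is the refund policy?", kb, k=1))
# → ['Our refund policy lasts 30 days from the date of purchase.']
```

A grounded answer is then just the model prompted with the retrieved chunks stitched into its context; the managed version also tracks which chunk backed which sentence, which is where the citations come from.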

The Era of AI Monogamy Should Be Over for You

No single AI platform can do everything you need. The real productivity edge now comes from picking the right tool for each job and moving between them without friction.

This isn’t needless complexity. Each tool has genuine strengths, and the small overhead of running multiple platforms is nothing compared to the performance gains.

The era of platform monogamy is over. Teams sticking to one provider out of habit, sunk costs, or comfort aren’t making strategic decisions, they’re defending inertia.

The leaders treat AI like a portfolio: they rebalance often, optimise for outcomes, and switch fast when a better tool arrives.

The only question is whether an organisation is willing to continuously optimise or stay locked into old decisions. Most teams are still anchored to their first AI platform, worried about complexity.

That hesitation is now a competitive gap, and it widens fast.

Written by Mike

Passionate about all things AI, emerging tech and start-ups, Mike is the Founder of The AI Corner.

Subscribe to The AI Corner

The fastest way to keep up with AI in New Zealand, in just 5 minutes a week. Join thousands of readers who rely on us every Monday for the latest AI news.
