We've spent the past three years typing into AI chat boxes. Claude, ChatGPT, Gemini. All brilliant, all constrained by the same fundamental limitation: they give us text when what we actually need is a tool.

That constraint is evaporating. More and more people are learning what's possible with fluid interfaces in AI toolsets, as the likes of Google and Anthropic (in particular) let users build interactive, app-like interfaces with natural language.

The most underrated shift in AI right now isn't bigger models or better reasoning. It's that generative AI can now build any UI you want, on demand. This fundamentally changes how we should think about completing jobs and tasks.

Not static applications like Word or Docs, Slides or PowerPoint. Not tools you wait weeks for IT to provision. Not rigid software with predetermined workflows. Actual, functional micro-applications that materialise when you describe what you need in your own language, and disappear when you're done.

Micro-apps on the front. Agents in the back. This is where AI gets genuinely useful beyond helping everyday users write better emails, analyse data, and draft a Board report.

Here's what's actually changing

For decades, work has followed the same pattern: you have a task, you identify which tool does that task, then you adapt your work to fit the tool's constraints.

You want to analyse market data? Fire up Excel and figure out which formulas to use. Need to create a presentation? PowerPoint dictates your structure. Writing a proposal? Word determines your editing workflow. Creating a graphic? Spin up a mock-up in Figma. The tool comes first, then you mould your thinking to fit it.

The entire enterprise software industry has been built on this premise: create a general-purpose tool, add enough features to satisfy the broadest possible market, then force everyone to adapt their work to fit the tool's constraints. This model has shaped entire careers. We hired for "Excel skills" and "PowerPoint proficiency" as hygiene factors. We measured productivity by how well someone navigated predetermined interfaces to reach an outcome.

What happens when the interface can be generated to match the work instead?

You stop thinking "I need to do competitive analysis, so I'll open this specific tool and adapt my analysis to its structure". You start thinking "I need to understand how our competitors are positioning against us in APAC markets. Build me an interface that shows pricing trends, messaging shifts, and product launch timelines across the region".

AI flips this completely.

Instead of moulding your work to fit the tool, you describe your work and the tool moulds itself to fit you. Those vibe coding in the world of Lovable, Cursor and Bolt will be more than familiar with this concept.

The recent developments take this a step further: the ability to generate fluid interfaces to our liking is now available in Claude (via Artifacts) for consuming and interacting with information, and is quickly becoming a mainstay in Gemini. This means development is no longer isolated in Lovable or Cursor (cut off from the data sources of our work), and it's no longer accessible only to techies or those curious enough to play. It now lives in an interface that isn't intimidating, hooked into our data sources, and understands the context of our business.

Gemini is already moving in this direction. They're rolling out the ability to have a conversation and then turn that conversation into an artefact - a document, spreadsheet, or presentation. That's genuinely useful progress. But it's still constrained by traditional output formats. You're still thinking "I need to produce a slide deck" rather than "I need an interface that lets me explore pricing scenarios". The artefact is predetermined. The real shift happens when you stop thinking in documents and start thinking in purpose-built interfaces. Not "turn this conversation into slides". But "build me a tool that does what I actually need".

This goes far beyond documents and spreadsheets. Think any modality of editing.

Think about creating a video from research. Today's reality: you do research in ChatGPT, generate images in Midjourney, create video in Veo or Runway, then edit in Premiere Pro because you need to adjust pacing. Four different tools. Four different interfaces to learn. And here's the real friction: you don't even know what buttons you need. What's a J-cut? How do you adjust colour grading? What's the difference between a transition and a dissolve? You're Googling terminology just to use the software.

Now imagine: "Build me an interface that takes this research summary, generates supporting visuals, creates a 90-second explainer video, and gives me controls to adjust pacing and emphasis". You get exactly that. One interface. With only the buttons that matter for this specific task. Not Premiere Pro's 400 features. Just the six controls you actually need right now. The agent handles research-to-visuals-to-video in the background. You're not jumping between tools. You're not learning terminology. The interface presents exactly what you need to refine the output, labelled in plain language you already understand.

You don't need access to everything these tools can do. You need access to exactly what this task requires. Nothing more. Nothing less.

Here's an example of me reviewing insights from a survey, using Claude's Artifacts to view the information dynamically. No pivot tables, spreadsheets, or formulas. With natural language I can change the dashboard to fit my needs: more charts, more insights, less colour.

See below for a video of Google's Dynamic View, where I perform property market research, with a supporting post here.

This isn't a subtle shift. It changes what skills are important (articulating problems clearly vs memorising software features), what becomes possible within time constraints (anything you can describe vs anything that has pre-built tooling), and who can build solutions (anyone who can articulate needs vs people who can code or wait for developers).

The work comes first. The tool materialises to support it. And over time, this will reshape how organisations think about capability. The question won't be "do we have the right software?". It'll be "do our people know how to describe what they need to accomplish?". Which comes back to what humans are great at: creativity, asking good questions, and problem solving: not just getting an answer, but understanding human intuition and how our audience needs and wants information served to them.

This isn't about replacing Word or Excel today or in the next five years. It's about recognising that sometimes the best interface for your task doesn't exist yet, and now it can exist in about thirty seconds if we have the right tools in place.

What this looks like in practice

Consider how an executive prepares for a board meeting today. You're juggling:

  • Financial data in Excel

  • Competitive intelligence in various PDFs

  • Strategic initiatives tracked in slides

  • Market trends from analyst reports

The traditional approach: spend hours copying data between applications, reformatting charts, ensuring everything's consistent. The output is static the moment you finish it.

If a board member asks "what if we increased pricing by 5%", you're back to Excel, recalculating, updating slides, hoping you don't break something. Conversational analytics across these data sources is getting better and more widely available.

Now imagine describing to an AI: "Build me a board dashboard that shows Q4 financials, competitive positioning, and three scenarios for next year's pricing strategy. Let me toggle assumptions and see real-time impact on margin and market share".

You get exactly that. Not a chat response with suggestions. An actual interactive dashboard (per video above). Custom-built for this specific board conversation. With the exact scenarios you care about, connected to your actual data, using proper financial terminology and best practices you didn't have to specify because the AI inferred them from context.

When the meeting's over, it evaporates. Or you save it. Or you iterate it for the next board meeting. The interface is fluid, not fixed.
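Under the hood, a scenario toggle like "increase pricing by 5%" is just a small calculation the agent wires up behind the dashboard. A minimal sketch of that logic (all figures, function names, and the elasticity assumption are hypothetical, not real data or any product's actual implementation):

```python
# Hypothetical sketch of the logic behind a "toggle pricing" control.
# All numbers are illustrative; a real dashboard would read live data.

def margin_impact(revenue, cost, price_change, volume_elasticity=-0.5):
    """Estimate revenue and margin after a price change.

    volume_elasticity: assumed % volume change per 1% price change
    (a simplifying assumption, not a market fact).
    """
    new_revenue = revenue * (1 + price_change) * (1 + volume_elasticity * price_change)
    margin = (new_revenue - cost) / new_revenue
    return new_revenue, margin

# "What if we increased pricing by 5%?"
rev, margin = margin_impact(revenue=10_000_000, cost=7_000_000, price_change=0.05)
```

The point isn't the formula: it's that the interface exposes `price_change` as a slider while the agent owns the calculation, so a board member's "what if" becomes a toggle rather than a trip back to Excel.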

We can take this even further.

Consider how you create a document today. You ask ChatGPT or Claude to draft a report. It generates the content. You can edit it in ChatGPT's Canvas with back-and-forth conversation, or paste it into Word and use familiar editing tools. That's useful. But it's still fundamentally a text interface.

Here's what that approach misses: you don't just want the text. You want control over the research that created it.

What if instead of getting a finished report, you got an interface that showed you every section with its source citations, and each section had a checkbox? Include this paragraph. Regenerate that one with different framing. Swap this stat for a more recent one. All visible, all controllable, all without touching the prose directly.

Or better yet. What if the interface looked like a Bill of Materials for your report? Each claim is a component. Each component shows its source, confidence level, and alternative framings. You're not editing text anymore. You're curating an argument from inspectable parts. When you're done, it renders into whatever format you need. Word doc, slide deck, exec summary.
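As a toy illustration of that "Bill of Materials" idea, here's one hypothetical shape the underlying data could take. Every name and field here is an assumption for illustration, not any real product's schema:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    """One inspectable component of the argument."""
    text: str                                      # the statement being made
    source: str                                    # the citation backing it
    confidence: float                              # e.g. a 0-1 confidence score
    framings: list = field(default_factory=list)   # alternative wordings
    included: bool = True                          # the checkbox: in or out

@dataclass
class ReportBOM:
    """A report as a bill of materials: curated claims, rendered on demand."""
    title: str
    claims: list

    def render(self):
        # Render only the included claims, each with its citation.
        lines = [self.title, ""]
        for c in self.claims:
            if c.included:
                lines.append(f"- {c.text} [{c.source}]")
        return "\n".join(lines)
```

Unticking a checkbox is just flipping `included`; the final Word doc or slide deck is whatever `render` produces from the parts you kept.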

Or consider briefing an agency or colleague on a project.

The traditional approach: write a text document explaining what you need. Except you're often communicating about something outside your wheelhouse. You don't actually understand the technical requirements. You use the wrong terminology. You leave out context that seems obvious to you but isn't. The brief lands, gets misinterpreted, and three weeks later you get back work that's technically what you asked for but completely wrong.

This is one of the most common failure modes in business. We're poor communicators about things we don't fully understand. And text briefs force us to be precise about things we can't be precise about yet.

Now imagine instead: "Build me a project brief interface for a website redesign. Include sections for target audience, key user journeys, technical constraints, and success metrics. Give me dropdown options for common patterns and a place to upload reference examples".

You get a structured interface that guides you through what actually needs to be communicated. The agency or colleague receives something they can interact with, ask questions against, and validate their understanding before starting work. You're not trying to be precise in prose anymore. You're filling in a purpose-built structure that surfaces what matters.

The interface itself becomes the communication tool. Not a document about the work, but a shared artifact that clarifies the work as you build it together.
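To make that concrete, a generated brief interface is essentially a structured form the AI assembles from your description. A toy sketch (field names and options are illustrative assumptions, not a real generator's output):

```python
# Hypothetical shape of a generated project-brief interface.
# A real generator would infer these fields from your description.
website_redesign_brief = {
    "target_audience": {"type": "text", "value": ""},
    "key_user_journeys": {"type": "list", "value": []},
    "technical_constraints": {
        "type": "dropdown",
        "options": ["CMS migration", "Accessibility (WCAG AA)", "Mobile-first"],
        "value": None,
    },
    "success_metrics": {"type": "list", "value": []},
    "reference_examples": {"type": "upload", "files": []},
}

def missing_fields(brief):
    """Surface what still needs filling in before the brief is shared."""
    return [
        name for name, f in brief.items()
        if f.get("value") in (None, "", []) and not f.get("files")
    ]
```

The structure does the precision for you: instead of hoping your prose covered everything, the interface can simply tell you what's still missing.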

If the point hasn't landed yet, this distinction is worth sitting with: current AI gives you output that you then edit using traditional tools. Micro-apps give you purpose-built interfaces for the actual task you're doing. Which in this case isn't "editing a document". It's "validating research and constructing an argument from evidence".

That's a completely different interface. And now you can just describe it and have it built in seconds to your liking.

Here are just a few things that change as a result

  • Time to tool drops to zero: You no longer think "I need to build an application for this". You think "I need to do this task" and describe it. The application manifests as a byproduct of articulating your need. This fundamentally changes what work is possible within time constraints.

  • Curation becomes the new skill: Instead of learning fifty different software packages, you learn how to describe what you need with precision. How to recognise when the generated interface is actually useful versus when you need to refine your request. How to curate and improve tools over time. This is a genuine skill shift. From tool mastery to tool creation.

  • Personalisation goes infinite: Two people doing "competitive analysis" might need completely different interfaces. One wants timeline views and market signals. Another wants financial comparisons and strategic positioning matrices. Both can have exactly the interface that matches their thinking style. Not through endless configuration menus, but through describing how they actually work.

  • What's possible explodes: Right now, what you can accomplish is limited by what tools exist and which ones you know how to use. When tools can be generated on demand, the limiting factor becomes imagination and articulation. That's a wildly different constraint.

"But Microsoft and Google's Docs / Sheets / Slides interfaces work just fine?"

Some will argue this is over-engineered nonsense. That most people just want their familiar tools. That Word or Docs is fine, actually.

They're not wrong. Plenty of people prefer the certainty of fixed interfaces. They know where every button lives in PowerPoint. They don't want that changing. And that's completely legitimate.

But to be clear, I'm not talking about forcing everyone onto fluid interfaces. I'm talking about offering the ability to curate your own tools to the people who want it.

The real question isn't "should everyone have this?". It's "should anyone be prevented from having this?".

Because right now, if you need a specific interface for a specific task, you have exactly three options:

  1. Find existing software that sort of does it (and adapt your work to fit).

  2. Wait months for IT to maybe build it.

  3. Or do without.

All three options are costly in different ways.

The fourth option, describing what you need and getting it immediately, hasn't existed until now.

What about agents?

This is where it gets properly interesting. Because these micro-apps aren't just static interfaces. They're powered by agents working in the background.

You create a competitive intelligence dashboard. Behind that interface, an agent is continuously pulling new competitor announcements, market signals, regulatory filings. You don't micromanage it. You told it once, in natural language, what to track. It handles the rest.

You build a scenario planning tool. The agent doesn't just show you static calculations. It can run simulations, stress-test assumptions, flag risks you didn't consider. The interface is what you interact with. The agent is what makes it actually intelligent.

This is the architecture that I've been thinking through: micro-apps for human interaction, agents for continuous work. You curate the front-end to match how you think. The agents curate the back-end to match what you need.
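That split can be pictured with a toy sketch. Everything here is hypothetical (class names, the fake source, the filtering), just to show the shape: the agent does continuous gathering, the micro-app is the thin, human-facing view over it:

```python
# Toy sketch of the micro-app / agent split. Not a real framework.

class Agent:
    """Background worker: pulls signals from its sources, keeps state fresh."""
    def __init__(self, sources):
        self.sources = sources   # callables that fetch new signals
        self.signals = []

    def refresh(self):
        # In reality this would poll APIs, filings, feeds, etc.
        for fetch in self.sources:
            self.signals.extend(fetch())

class MicroApp:
    """Front-end the human interacts with; reads what the agent has gathered."""
    def __init__(self, agent):
        self.agent = agent

    def view(self, keyword=None):
        items = self.agent.signals
        if keyword:
            items = [s for s in items if keyword in s]
        return items
```

You describe the front-end once; the agent keeps working whether or not you're looking at it. Swapping the view (timeline, matrix, dashboard) never touches the gathering logic.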

The implications are massive

Over time, this changes the entire relationship between people and software. We move from a world where work adapts to tools into a world where tools adapt to work.

This has genuine consequences:

  • IT will shift from provisioning applications to governing data access and agent behaviour

  • Learning curves compress dramatically because there's nothing to learn. Just articulate what you need

  • Software will become contextual rather than categorical

  • The distinction between "user" and "builder" will start to blur

For executives specifically, this means you're no longer constrained by what your analysts can produce in the time available. You can spin up custom analysis tools for any strategic question, see multiple scenarios simultaneously, and iterate in real-time during decision-making conversations.

The opportunity right now isn't just adopting this for yourself. It's recognising that your organisation's ability to curate its own tools will become a genuine competitive advantage. Because while your competitors are still waiting for IT to provision software or people are constrained to the limitations of the software they can access, your team can build what they need and move to the next question very quickly.

This is a very big deal.

Written by Mike

Passionate about all things AI, emerging tech and start-ups, Mike is the Founder of The AI Corner.

Subscribe to The AI Corner

The fastest way to keep up with AI in New Zealand, in just 5 minutes a week. Join thousands of readers who rely on us every Monday for the latest AI news.
