
After the Interface: A Three-Part Series

Part 1: The Collapse of the Interface

The interface isn't going away. It's just stopped being the default assumption. That shift is already changing what design teams are asked to do — and most haven't noticed yet.


Most of the conversation about AI and UX is about tools: how to use them faster, how to integrate them better, and how to stay relevant. That's the wrong frame. What's happening is bigger than workflow. This series is my attempt to name it clearly: what's actually changing, where it leads, and what UX should do with the window it has.


We used to ask: Does this screen communicate clearly? Does this flow make sense? Is this component reusable? The questions most design teams are moving toward look different: When should the system ask for confirmation? What happens when the AI gets it wrong? Who owns the edge case when the agent decides on its own?

That's not a UX trend. That's a different job.

As AI takes over more of the execution layer, the interface stops being the default assumption and becomes one surface among many. UX either evolves into the function that governs how systems behave on behalf of people, or it gets reduced to finishing work on whatever screen is left. The window to claim the governance role is open now. It won't be for long.

That shift hasn't fully arrived yet. But it's close enough that the teams not preparing for it are already behind.

The interface isn't gone. It's becoming optional.

Here's what most teams are missing. The interface hasn't disappeared, but it's no longer the obvious deliverable.

Three years ago, when someone said "design this feature," the answer was always a screen. A flow. A component. That assumption is cracking. Some features will be better served by an agent that just does the thing. Some experiences will work better through voice that never renders a pixel. Some workflows will belong end-to-end in automation, with a UI only for exceptions.

Most design teams haven't updated their defaults to reflect this. They're still designing screens by instinct, adding the AI layer as a module on top. That's the wrong architecture, technically and organizationally.

What's actually shifting

These aren't changes that have fully landed. They're changes you can feel coming, and they're worth naming clearly. From where I sit, four things are materially changing in how UX work gets done.

First, the review conversations are starting to be about behavior as much as appearance. What should the system do proactively? Where should it hold back? When it generates something wrong, how does the user correct it, and how quickly? Those questions are harder than "should this button be 16 or 18 pixels." They require a different kind of judgment, and most design reviews aren't structured for them yet.

Second, designers are beginning to specify behavior, not just appearance. The artifacts are starting to include prompt scaffolding, confidence states, fallback interactions, and correction paths alongside the flows and components. A designer who can only work in visual space will be half a designer in a few years. That's not a criticism of where anyone is today. It's a read on where the job is going.

Third, the design-to-engineering loop is compressing. AI coding tools mean more ideas get prototyped faster. That's good. It also means weak design thinking gets built and shipped faster. Vague specs used to survive because building took time. Now they don't. That buffer is shrinking, and the bar for clarity has gone up.

Fourth, discovery is compressing too, and faster than most people expect. Research initiatives that used to take ten weeks are getting done in four or five days. Synthesis that required a dedicated research sprint can happen in an afternoon. That changes what's possible at the front of a project and raises the bar for how much teams are expected to know before they start building. Designers and researchers who can move quickly and rely on credible discovery will have more influence earlier. Those still quoting ten-week timelines will get bypassed.

The pattern most teams are stuck in

The most common mistake isn't laziness or lack of talent. It's a reasonable response to an unreasonable pace of change: treating AI as a feature rather than a shift in the underlying paradigm.

When that happens, the approach tends to look familiar. The product team adds an AI copilot to the existing product. UX designs it like any other feature: a panel on the right, a chat input, some suggestion states. The team ships it. Adoption is lower than expected. The conclusion is that AI isn't ready yet. Nobody asks whether the design frame was wrong from the start.

The problem isn't the AI. It's that the product was designed around the wrong paradigm. Adding an AI layer on top of an existing interface is like rearranging the map after the territory has already changed. The paradigm has to change, not just the component.

A paradigm-aware approach looks different. Start with the user's goal, not the interface. Ask what the system should handle entirely. Ask where human judgment is genuinely irreplaceable. Design the handoff between human and machine first, then build the interface around the trust moments and the exceptions. That's a different process from what most teams are running.

What AI literacy actually means for designers

There's a version of AI literacy that's about tools: learning to prompt, using AI to generate variants faster, moving through component work more efficiently. Useful. Also the least interesting part of the shift.

The real literacy is understanding AI failure modes well enough to design around them. Unpredictability. Capability confusion. Confidence miscalibration. When a system acts more certain than it is, users get hurt. When users can't read what a system is capable of, they over-trust it or abandon it. Neither shows up in traditional usability heuristics.

What I'm working toward with my team goes beyond technical fluency. The designers who will matter most in this next phase aren't just the ones who understand AI systems. They're the ones who can translate that understanding into language that moves decisions.

That means business acumen: understanding how design choices connect to product bets, revenue, retention, and risk. Not superficially, but deeply enough to speak credibly in rooms where those trade-offs are made. Designers who can frame a technical limitation or capability gap as a competitive exposure get heard differently than designers who frame everything as a UX issue.

It means communication and storytelling: the ability to make the invisible visible. AI interactions produce a lot of behavior that never shows up on a screen. Explaining what a system is doing, why it made a choice, and where it's uncertain requires designers who can build narratives around systems, not just interfaces. The skill isn't presentation polish. It's translating complexity into something a skeptical stakeholder can act on.

It means facilitation: the ability to run the room when the hardest questions don't have obvious answers. Who decides when the agent can act autonomously? What's the threshold for human review? What does "good enough" mean when the output is based on probability? Those decisions don't belong to any one function. They need someone who can structure the conversation, surface the real disagreements, and move the group toward a decision. That's a design skill, even if it's never appeared in a job description.

These capabilities aren't soft extras. They're what separates designers who get invited to problem-framing conversations from designers who get handed specs. And in the next few years, as AI compresses timelines and raises the bar for early clarity, that distinction will matter more than it ever has.

Why the next 18 months matter

The teams that come out ahead won't have an advantage because they adopted AI faster. They'll have it because they redesigned their practice for the right paradigm.

The interface isn't dying. For a lot of experiences, it's still the right answer. But treating it as the only answer is already a constraint, and in three years it will be a real liability.

The designers I'm watching aren't waiting for permission. They're asking different questions, building different skills, and letting go of defaults they've operated from for years. The ones standing still are betting the ground stays firm under them.

It hasn't, for a while now.

The interface becoming optional isn't the crisis. The crisis is that when it recedes, someone has to govern what replaces it. The question isn't whether UX is ready for that role. It's whether UX realizes it's already been doing it.
