AI Training for a UX Organization
The resistance wasn't about the tools. It was about identity. And peer modeling beats curriculum every time.

The teams that have been most transparent about running AI training for designers are converging on some surprising findings. Not about the tools. About what happens when you try to change a professional culture.
I'm in the middle of building and running one of these programs right now. It's not done. Here's what I'm seeing.
The best designers can be the hardest to convince
The hardest pushback doesn't come from designers who are struggling. It comes from the ones who are excellent.
Five years in, a designer has built real craft. Real opinions about typography, about how research should be structured, about what a good prototype feels like. That expertise isn't incidental. It's the thing that makes them good, and the thing that tells them who they are professionally.
When AI shows up in that context, those designers don't see opportunity first. They see threat. Not to employment, necessarily. To the thing they've spent years building.
The reframe that tends to work is simple. These tools don't replace what you know. They let you skip the parts that don't require your judgment, so you can spend more time on the parts that do.
That framing doesn't land in a kickoff slide. It has to happen through use, through watching the work get better, and through recognizing your own taste in the result.
I'm still figuring out how to accelerate that realization. So far, you mostly can't. It happens on its own timeline.
Who moves fastest
The people who move fastest aren't who you'd expect.
Most experienced designers are cautious. Most junior designers are enthusiastic but scattered. The ones who move quickest tend to be two to four years in. They have enough context to understand what the tools are actually doing, and not so much invested identity that change feels like loss.
They find their own uses before the formal program ends. They show the rest of the team what it looks like in practice before anyone else is ready to look. Every team seems to have one or two of them. Worth identifying early and getting out of their way.
Curriculum fades. Normal sticks.
Here's the thing about curriculum: skills fade.
Prompting strategies, synthesis techniques, workflow integrations: the specific methods tend to be half-remembered a few weeks after a program ends. I've watched it happen. You run a solid session, people leave energized, and a month later the habit hasn't formed.
What doesn't fade is the shift in what feels normal.
So the program I'm building isn't organized around tools. It's organized around three distinct chapters, each one targeting a different kind of change.
Chapter 1: AI fluency
This is the foundation layer. Tools, usage patterns, prompt engineering, CLI basics. The goal isn't to make designers into engineers. It's to remove the intimidation and replace it with enough fluency that AI stops feeling like something happening to them and starts feeling like something they can direct.
Prompt engineering gets more attention here than most design training programs give it, because it turns out to be a genuine leverage point. A designer who can write a precise prompt, one that encodes constraints, context, user needs, and the right level of ambiguity, gets fundamentally different output than one who doesn't. That gap compounds quickly. It's the difference between AI as a vending machine and AI as a thinking partner.
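To make that concrete, here is one illustrative sketch of what "encoding constraints, context, user needs, and the right level of ambiguity" can look like in practice. The function name, field names, and template are assumptions for the example, not a prescribed format; the point is the structure, not the wording.

```python
# Illustrative sketch only: one way a designer might structure a
# "precise prompt" rather than a one-line ask. All names here are
# invented for the example.

def build_prompt(task: str,
                 context: str,
                 constraints: list[str],
                 user_need: str,
                 open_questions: list[str]) -> str:
    """Assemble a structured prompt from its component parts."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    question_lines = "\n".join(f"- {q}" for q in open_questions)
    return (
        f"Task: {task}\n\n"
        f"Context: {context}\n\n"
        f"User need: {user_need}\n\n"
        f"Hard constraints:\n{constraint_lines}\n\n"
        # Naming what is deliberately left open tells the model where
        # its judgment is wanted, instead of leaving ambiguity implicit.
        f"Open questions (use your judgment):\n{question_lines}\n"
    )

prompt = build_prompt(
    task="Draft three onboarding flow concepts",
    context="B2B analytics product; most users arrive from a sales demo",
    constraints=["Must work without email verification",
                 "First value moment within two minutes"],
    user_need="See their own data, not sample data, as fast as possible",
    open_questions=["How much personalization to ask for up front"],
)
print(prompt)
```

The vending-machine version of this is the first line alone. The thinking-partner version is everything after it.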
This chapter also covers the CLI and agentic workflows. Not because every designer needs to live in a terminal, but because understanding how these systems actually work changes how you design for them. You can't design trust, recoverability, or appropriate AI confidence if you don't have a mental model of what the system is doing underneath.
Chapter 2: Designing for AI
Fluency gets you in the door. This chapter is about what you do once you're inside.
The skills here are the upstream ones: systems thinking, facilitation, and designing for trust. How do you design an experience where the AI makes a recommendation and the user actually knows what to do when it's wrong? How do you hold the user's perspective in a product meeting where engineering is talking about model accuracy and product is talking about conversion? How do you see a decision made in one corner of a product and understand what it's going to do two surfaces downstream?
These aren't new skills for experienced designers. They're the skills that have always mattered. What's new is that AI-augmented products put them under pressure in ways that conventional digital products didn't. A button either works or it doesn't. An AI recommendation exists on a spectrum of confidence and correctness that users have to calibrate to in real time. Designing for that requires a different frame than designing for deterministic systems.
This chapter is where the earlier articles in this series live in practice. Trust as a deliverable. The upstream problem. What happens when the system is wrong and nobody designed for it.
Chapter 3: Becoming an AI futurist
This is the hardest chapter to teach and the most important one to get right.
The first two chapters make designers better at what exists now. This one is about what comes next, and more specifically, about building the habit of thinking proactively about that question rather than waiting for the answer to arrive and scrambling to catch up.
The designers who are going to lead through this transition aren't just the ones who learned the current tools well. They're the ones who developed a practice of watching where the industry is heading, forming opinions about it, and adjusting their work before the shift is obvious. That's a different skill than execution. It's closer to what analysts and strategists do: pattern recognition across signals that aren't fully formed yet.
In practice this chapter looks like this: how to read research and model releases as design signals, how to stress-test your current practice against where the field is moving, and how to develop a point of view on what AI-augmented experience design looks like two years from now and work backward from it.
Not prediction. Orientation.
The goal is designers who can walk into a product conversation about an AI feature that doesn't exist yet and have something real to contribute. Not just about the interface, but about what users will need, where trust will break, and what the experience has to account for before the first line gets written.
The industry is not going to slow down and wait for teams to catch up.
The outcome that matters
Here's the thing about curriculum. The skills in chapter one will fade without reinforcement. The framing in chapter two will sharpen with use. Chapter three doesn't really end. It's a practice, not a module.
Which is why the most important outcome of any of this isn't what designers can do when the program ends. It's what feels normal to them six months later.
The moment that changes a team usually isn't a training exercise. It's when someone uses a tool naturally, in context, for real work, and does it where others can see it. When a colleague mentions in standup that they synthesized 40 user interviews before the meeting, the room changes. Not because they taught anyone anything. Because they made it ordinary.
Peer modeling does something curriculum can't. I've stopped trying to engineer it and started trying to create conditions where it can happen.
Start with the skeptics
Start with the skeptics. Not to convert them. To understand them.
The most resistant designers tend to have the most to teach about what's actually at stake. Their reluctance is telling you something. Usually it's not a skills gap. It's an identity question they haven't resolved yet.
The enthusiasts will figure it out regardless. They're already experimenting. They don't need much from you.
The skeptics set the cultural temperature. If they come around, the team tends to come around. If they don't, you end up with a fractured team, half building in a new way, half waiting for the whole thing to blow over. That fracture is harder to close than it looks, and I'd rather not find out how hard firsthand.
The program will end. The shift it's trying to create won't be done.
The designers who are leaning in are already moving differently. Faster, thinking at a higher level of abstraction, and spending less time on execution that doesn't require their judgment. That's the proof point that matters. Not completion rates.
The ones who aren't there yet will get there eventually. Not because AI will replace them. Because the people around them are going to raise the bar for what the work looks like, and that's hard to ignore for long.
That's the culture I'm building. We're still in it.