
From Hype to Practice: Thoughts on AI and UX After Upper Bound 2025
I recently attended Upper Bound 2025 in Edmonton—a conference that’s become a hub for serious conversations about the evolving role of artificial intelligence in work, creativity, and systems design. It was energizing, overwhelming, and at times a little unsettling—but mostly, it gave me the language and frameworks I needed to reflect on how we’re thinking about AI within our teams and organizations.
Working in UX within a large organization means I often sit at the intersection of strategy, design, and tech culture. AI conversations can be especially tricky—not because people aren’t curious, but because the terrain is still undefined. Upper Bound gave me a few mental models that have helped me reframe what responsible, realistic AI integration could look like for digital and creative teams, especially those managing at scale.
The problem with policy as a starting point
One of my biggest takeaways is that AI policy—while necessary—isn’t enough. Most organizations start with broad, cautionary language. These policies tend to be written reactively: trying to protect data and reputation without necessarily enabling innovation. For UX teams, enabling innovation might mean allowing safe experimentation with AI tools to improve workflows—for example, testing an AI writing assistant to speed up microcopy production or using AI to explore multiple design variations. In contrast, overly restrictive policies might prohibit tool use outright without evaluating real risks, which can stall creativity and prevent teams from developing informed practices. At Upper Bound, it became clear to me that good governance doesn’t start with control—it starts with clarity, literacy, and use-case awareness.
The framework that resonated with me most was one shared by Stephanie Enders: a progression from literacy (shared language) to skills (guided use) to competency (confident, contextual application). That’s the arc we need to aim for—not just rules about what not to do, but structures that help teams explore what could be done, safely and purposefully. It’s hard to design or lead when AI is treated only as a threat, or as a vague directive teams are told to embrace.
Workflow is where the real conversations live
Another insight: AI’s value isn’t always in the big, transformative use cases. Sometimes it’s just in making a clunky part of your day a little smoother. I’ve seen firsthand how tools like Copilot or Figma AI can reduce friction for developers and designers. At Upper Bound, I saw even more examples—tools that help with research, drafting content variants, auditing for bias, or developing code quickly.
But none of that means much unless teams are actually talking about how they work. For creative and UX teams, AI workflows don’t appear out of thin air. They need to be discussed, experimented with, and shared. The culture of sharing—“how AI saved my day”—might be just as important as the tools themselves.
Designers as translators, not just users
One model I found useful was the segmentation of AI roles into creators, augmentors, and translators.
Creators are the people building the models, training the systems, and developing the core infrastructure. They tend to be engineers, data scientists, or researchers who shape the underlying logic and architecture of AI tools.
Augmentors are professionals who use AI to enhance or streamline their own workflows—without fundamentally changing what they do. Designers, writers, and developers often fall into this group, using AI to generate drafts, explore ideas, or automate tedious tasks.
Translators sit between creators and end users. They interpret what AI tools can do, explain their limitations, and help teams or clients understand where and how these tools can fit. This is a natural space for UX professionals, who already guide decision-making through research, systems thinking, and communication.
That translation role is deeply tied to UX practice. It requires judgment, clarity, and a healthy skepticism of magic solutions. We already have experience guiding teams through new systems and unfamiliar interfaces. We can do the same for AI by helping contextualize its value without overstating its capabilities.
Culture beats tooling
The last big takeaway? Culture is what determines whether AI helps or hinders your team. You can have access to every tool in the world, but if the environment is fearful, unclear, or siloed, none of it matters. On the other hand, a team that’s encouraged to share small wins, ask critical questions, and learn together can go much further—even with limited access.
For me, this means normalizing the conversation. We don’t need to require AI use. We just need to say, “It’s okay to try.” From there, we can build the scaffolding—spaces to experiment, like low-risk pilot projects or internal hack days; channels to share, such as dedicated Teams threads or regular team demos; and time to learn, through workshops or collaborative working sessions. Governance can follow, but culture has to lead.
AI is not a trend that’s coming to disrupt us; it’s already part of how we think, work, and solve problems. For teams and roles like mine, the goal isn’t to become AI experts overnight—it’s to start making thoughtful, local decisions: decisions about how AI can fit our workflows, reflect our values, and strengthen our craft.
That’s the work now.