Writings

Insights from the front lines of scaling design in high-growth SaaS companies

AI, User Experience — Richard Baker

Trust is the user experience of the future

As AI becomes more deeply embedded in our work and daily lives, the question of what is real is getting harder to answer.

When I review a candidate’s portfolio, I sometimes catch myself wondering: am I looking at the work of a great storyteller, or at the carefully tuned outputs of a well-crafted prompt? When I read an article that inspires me, is it the voice of a skilled writer, or the product of a clever prompt and some human edits?

And then the harder question: does it even matter?

In some situations, the answer is yes. If I’m hiring someone for a role where public speaking and live Q&A are key, authenticity matters a great deal. No AI tool can yet replicate the ability to think on your feet, respond with empathy, or earn trust in real time.

But in many cases, it may not matter much at all. If AI helps someone brainstorm, refine their arguments, or present their ideas more clearly, isn’t the outcome the thing that matters? Clearer communication, sharper insights, and ultimately better work. Creativity has always been entangled with tools, from typewriters to Photoshop. Perhaps AI is simply the next extension.

Still, the pace of change makes one thing clear: trust is becoming a scarce resource. And trust depends on transparency.

Which is why I believe individuals and organizations alike should start declaring how they’re using AI. An AI Disclosure, if you will. Not because regulation demands it, but because the user experience does.

Think about it: every interaction is a user experience, whether you are reading an article, applying for a job, or contacting customer support. If people don’t know where AI fits into that experience, they’re left to guess. Guessing breeds doubt, and doubt erodes trust. Clear disclosure, on the other hand, reduces friction. It sets expectations, prevents disappointment, and creates confidence in the interaction itself.

For individuals, this might mean acknowledging how AI shows up in their creative process: drafting an outline, synthesizing research, or sharpening language. It does not need to be a disclaimer or apology. It can simply be a candid note that says: this idea is mine, but I used AI as a collaborator.

For companies, the stakes are higher. If AI is routing customer requests, approving transactions, or rejecting claims, customers deserve to know: is AI making the final call, or is a human still in the loop? Disclosure here is not a matter of etiquette; it is a matter of accountability and a core part of the user journey. A customer’s trust in the system is part of their experience with the brand.

Is this necessary today? Probably not. But over time, openness about how AI is used will become a differentiator. People will gravitate toward organizations that design for trust, just as they already gravitate toward those that design for simplicity and delight. Transparency is becoming a design choice, and one that shapes how every interaction feels.

Because in the end, the problem is not AI itself. The problem is what happens when we stop being sure what is genuine and what is not. And if trust is the glue that holds our relationships, businesses, and societies together, then disclosure is not just a courtesy. It is a crucial part of the experience.


My AI disclosure: The idea and first draft were mine, but AI helped me polish it, then remove em-dashes from the final draft 😉.

Leadership, Management, Career — Richard Baker

Read, Adapt, but Don’t Copy: Avoiding the Pitfalls of Blind Implementation

Don’t blindly implement what you read in a book

Books, frameworks, and case studies are valuable—they distill years of experience into actionable insights. But context is everything—what worked for one company, team, or industry may completely fail in yours.

🚨 The pitfalls of blindly applying a book’s advice:

Ignoring your unique challenges – A startup can’t implement the same processes as a 10,000-person company. A UX team in fintech faces different constraints than one in gaming.

Forcing a framework that doesn’t fit – Not every team thrives with Agile. Not every company should “Move Fast and Break Things.” Context dictates success.

Overlooking culture and team dynamics – Leadership strategies that work in one environment may backfire in another. A process that fosters collaboration in one team might create bottlenecks in yours.

Wasting time and resources – Implementing a system just because it worked for someone else can lead to overcomplicated workflows, disengaged teams, and solutions that don’t solve your problems.

How to assess if a book’s advice will work for your team:

Is it a one-way or two-way door decision? An irreversible or costly-to-reverse decision can be thought of as a one-way door. These decisions require deeper scrutiny—restructuring a team or shifting core strategy isn’t easy to undo. Two-way doors (reversible decisions) are safer to experiment with—if a new design critique format or sprint cycle doesn’t work, you can revert.

Does it align with your team’s size, stage, and constraints? A process that works for a company of 10 designers might break when scaled to 100. Instead of adopting a framework wholesale, look for pieces that can plug into your existing processes or ways of working.

Have you pressure-tested it against your company culture? Does the advice assume decision-making power you don’t actually have? Or will it build a culture that doesn’t align with your business values?

Can you run a small, low-risk experiment? Before overhauling a workflow, try a pilot version with a small team. Gather feedback, iterate, and only then consider scaling. When things go well, shine a spotlight on that team as a bright spot in the company—it will help with change management over time.

☠️ But what if you’ve already implemented something, and it didn’t work?

Reversing a bad decision isn’t easy, especially if it’s hurt morale or trust. But there are a few ways to walk back a bad decision that can minimize the damage without losing your team’s confidence:

1️⃣ Own the mistake—transparently. Acknowledge that the change didn’t have the intended impact. Your team will respect honesty more than defensiveness.

2️⃣ Share the “why” behind the reversal. Explain what you learned. Was it a misalignment with team needs? An unforeseen bottleneck? A cultural mismatch?

3️⃣ Involve your team in the next steps. Instead of dictating the fix, ask for input. What would they keep? What should change? This shifts ownership back to the team and builds a more collaborative, iterative culture.

4️⃣ Rebuild trust through action. Demonstrate that you’re listening. If you say you’ll iterate, follow through. If you promise fewer top-down changes, commit to it.

5️⃣ Make “experimentation” part of your culture. If your team sees decisions as learning opportunities rather than rigid mandates, they’ll be more open to future changes.

The takeaway:

The best leaders and designers don’t just follow advice; they adapt it for their context.

Read widely. Learn deeply. But always test before you implement—and be willing to course-correct when needed.

What’s a book or framework you’ve had to walk back after realizing it didn’t fit your team?
