Trust is the user experience of the future
As AI becomes more deeply embedded in our work and daily lives, the question of what is real is getting harder to answer.
When I review a candidate’s portfolio, I sometimes catch myself wondering: am I looking at the work of a great storyteller, or at the carefully tuned outputs of a well-crafted prompt? When I read an article that inspires me, is it the voice of a skilled writer, or a machine’s draft with some human edits?
And then the harder question: does it even matter?
In some situations, the answer is yes. If I’m hiring someone for a role where public speaking and live Q&A are key, authenticity matters a great deal. No AI tool can yet replicate the ability to think on your feet, respond with empathy, or earn trust in real time.
But in many cases, it may not matter much at all. If AI helps someone brainstorm, refine their arguments, or present their ideas more clearly, isn’t the outcome the thing that matters: clearer communication, sharper insights, and ultimately better work? Creativity has always been entangled with tools, from typewriters to Photoshop. Perhaps AI is simply the next extension.
Still, the pace of change makes one thing clear: trust is becoming a scarce resource. And trust depends on transparency.
Which is why I believe individuals and organizations alike should start declaring how they’re using AI. An AI Disclosure, if you will. Not because regulation demands it, but because the user experience does.
Think about it: every interaction is a user experience, whether you are reading an article, applying for a job, or contacting customer support. If people don’t know where AI fits into that experience, they’re left to guess. Guessing breeds doubt, and doubt erodes trust. Clear disclosure, on the other hand, reduces friction. It sets expectations, prevents disappointment, and creates confidence in the interaction itself.
For individuals, this might mean acknowledging how AI shows up in their creative process: drafting an outline, synthesizing research, or sharpening language. It does not need to be a disclaimer or apology. It can simply be a candid note that says: this idea is mine, but I used AI as a collaborator.
For companies, the stakes are higher. If AI is routing customer requests, approving transactions, or rejecting claims, customers deserve to know: is AI making the final call, or is a human still in the loop? Disclosure here is not a matter of etiquette; it is a matter of accountability and a core part of the user journey. A customer’s trust in the system is part of their experience with the brand.
Is this necessary today? Probably not. But over time, openness about how AI is used will become a differentiator. People will gravitate toward organizations that design for trust, just as they already gravitate toward those that design for simplicity and delight. Transparency is becoming a design choice, and one that shapes how every interaction feels.
Because in the end, the problem is not AI itself. The problem is what happens when we stop being sure what is genuine and what is not. And if trust is the glue that holds our relationships, businesses, and societies together, then disclosure is not just a courtesy. It is a crucial part of the experience.
My AI disclosure: The idea and first draft were mine, but AI helped me polish it, then remove em-dashes from the final draft 😉.