Published: August 20, 2025
Contributed by: Daniel van Vorsselen
Tagged with: Behaviour change, Brand & creative, Customer experience, Cultural insight, Innovation, Communication, TRA

Who gave this bot a lab coat? The risk of AI eroding brand trust

In a world increasingly shaped by artificial intelligence, we’ve entered a new era of belief, one not built on evidence, but on emotional performance.

I recently conducted a series of interviews with friends, family and colleagues as part of a thought experiment we ran around AI. The conversations demonstrated that LLMs such as GPT and Claude have become our modern-day life advisors. They help us rewrite angry emails, diagnose rashes, explain our existential dread, and settle once and for all whether oat milk is good for us. And somehow... we trust them. Not because they are qualified or can offer proof, but because it feels right.

We do this despite knowing that LLMs aren't built purely on facts but on prediction. Their strength lies in their ability to identify patterns, not truths. Even though we know the algorithms hallucinate, we are guilty of lowering our discernment. We don't interrogate or ask for sources. Instead, we make assumptions. When answers are delivered confidently and concisely, in full sentences with perfect grammar, why would we question them?

For many, LLMs are playing the role of a modern-day oracle. A digital entity that responds with confidence and polish, offering answers that feel certain, even when they shouldn’t be. And, like oracles of the past, the answers are often ambiguous, sometimes inaccurate, and rarely challenged.

A growing body of evidence is uncovering our over-reliance on AI and its negative impacts

Tone creates trust. If something looks sleek, speaks fluently, and delivers quick answers, it triggers the same response we might have to someone with a clipboard, a lab coat, a confident tone, and a name badge. So we don't check the work and we don't push back; instead, we mistake tone for truth and fluency for fact. But this has consequences.

When humans give us advice, we instantly evaluate their credibility, but a conversation with an LLM isn’t as complex as a social exchange. Because it’s a tool and not a person, we skip the scrutiny.  

If interactions with LLMs don't trigger our social defences in the same way interactions with humans do, brands need to consider how to mitigate the possible impact on trust, belief, and the emotional shortcuts customers take.

AI-driven chatbots and customer service initiatives let brands speak with polish, speed, and confidence. They remove friction and feel smart, helpful, and reliable. That means, in theory, faster decisions, smoother paths, and fewer drop-offs. But if people aren't questioning the results in the same way they question people, mistakes will go unchecked. Trust can quickly turn into backlash if the answer is wrong, misleading, or tone-deaf.

The accuracy of information provided by LLMs isn’t the only risk of AI adoption; there’s also the impact on experience.  

LLMs don’t just answer questions; they shape how we ask them, and that’s changing our expectations and interpretations of the truth. The more we use these models, the more we internalise their style, certainty, and framing of what’s ‘normal’. Over time, there’s a risk that similar brand-developed models will do the same. Instead of being useful fact-finding tools, they could become a lens that quietly shapes how customers see the brand and the world.

These tools are known to reflect our worst ideas back to us, and to do so approvingly. We’re already seeing numerous examples where these models have encouraged people to make dangerous or illegal decisions. Some models are built only to reinforce.

It’s up to brands to determine the boundary between affirmation and actuality. If your AI tool is confidently wrong, your brand will still bear the blame. Customers don’t care if it was technically the algorithm’s fault or a human’s; when trust is broken, it’s hard to build back.  

The key takeaway? AI can’t just sound good; it has to do good.

If your AI speaks with fluency, it also needs to act with care. It should reflect your values, stay true to your brand, and understand the emotional role it now plays in your customer experience. Brands must not just sound right; they must be right.


Daniel van Vorsselen
Business Director
Daniel is an experienced CX researcher and strategist, helping organisations collaborate and engage better to drive customer outcomes. He has extensive experience in Financial Services, Retail, Automotive and Tech across NZ, Canada and Australia.