1. AI’s evolution is outpacing human adaptation, creating tension between efficiency and meaning.
2. People reject being sidelined, not technology itself.
3. Designing for growth and learning keeps AI human-centred.
4. Trust, fairness, and transparency drive acceptance.
There’s a lot of talk about the failure of AI projects, with many not delivering the efficiency gains that companies expect. AI’s evolution is outpacing human adaptation, and that gap is where anxiety, resistance, and misunderstanding live – creating tension across every industry.
The blame is often placed on the limitations of the technology (and yes, it’s still in its infancy), or the ongoing problem of AI hallucinations – when an AI generates false or misleading information that sounds plausible. Nobody wants AI prescribing medicine. Others argue that businesses are simply not adopting AI correctly – enter the AI consultants.
Insights from our ‘The future we want’ research highlight what might be driving some of these challenges. We asked people to imagine a future where learning and growth are constantly tracked – a real-time score showing how quickly your skills are becoming outdated and what you need to learn next. It’s a world where progress is measurable but exhausting.
While some liked the idea of visible improvement, many found it robotic and competitive, anticipating a system that measures progress and subtly steers people toward outcomes, but forgets meaning.
All of the problems with AI adoption are real and need attention, but at the heart of these challenges lies a more fundamental issue: people. This challenge is one technology has faced time and time again. It’s why the field of human-centred design emerged, and why it grew rapidly through the wave of digital transformation in recent decades.
Human-centred design has always been about understanding needs – but what about designing for how people learn, grow, and evolve alongside technology?
Arguably, our understanding of how people think, feel, and respond to technology has brought us to today’s world – where we have an abundance of user-centric tools that have, for the most part, improved how we live and what we create.
But while people often view AI as just a tool, it’s more than that – it threatens human agency. Historically, no technology has directly confronted the core of human decision-making. AI does. It can make choices on our behalf if we let it.
Take recruitment, for example, where companies increasingly use AI to screen, rank, and, in some cases, select candidates. Beyond hiring, AI is now monitoring performance, predicting potential, and nudging learning paths. It promises objectivity but risks turning people into data points.
In these moments, a machine is making decisions that can shape a person’s trajectory. It’s no surprise that there’s resistance, especially in creative industries, where human agency is crucial to producing meaningful, effective work.
AI is further straining social cohesion. Cohesion depends on trust, and trust depends on authenticity. Yet in a world where the ‘real’ is increasingly indistinguishable from the machine-made, what does authenticity even mean?
Recent studies show that Australia and New Zealand have some of the lowest levels of trust in AI. This isn’t surprising given these nations’ values of fairness and scepticism toward concentrated power. But the distrust isn’t unique to these countries – it’s global, with doubts about AI’s safety, security, and impact on people echoed across markets.
Fairness isn’t only about outcomes; it’s also about transparency and respect. People want to understand how decisions are made and know they still have influence.
In today’s context, AI is often seen as a threat – a one-way value exchange that benefits companies more than people. For consumers and employees to see value, they need to gain something themselves.
In TRA’s recent study, nearly half of respondents agreed that “AI helps the company make progress, but it doesn’t generally help me make progress.” Only 13% disagreed.
This finding reinforces a wider pattern we’re seeing across markets – people don’t reject technology itself; they reject being sidelined by it.
This matters because people are more likely to embrace technology when it feels personally relevant, useful, and aligned with their values. This was true at the advent of the internet, and it’s true now. Technology acceptance hinges on whether people perceive it as both useful and easy to use. If AI is seen to enhance what people already value, resistance softens.
For AI adoption to land well, companies need strategies that go beyond buzzwords and hype. This means less focus on the technology itself and more energy on designing its use around people’s values, concerns, and realities.
AI can remove friction from daily life, enhance creativity, and support our work. But if it’s only designed to measure performance or drive people toward one-sided outcomes, it risks being rejected or not used as intended. Conversely, if it is adopted in a way that values the things that make us human – curiosity, reflection, and learning – it becomes a useful partner in growth, not a monitor of it.