Published
November 10, 2025
Contributed by
Tagged with
Behaviour change
Brand & creative
Customer experience
Cultural insight
Innovation
Communication
TRA
Summary

In the final episode of ‘The future we want’ podcast series, Andrew Lewis is joined by Lindsey Horne and Daniel Talbot to explore how innovation and systemic change are reshaping people’s sense of control. Together they unpack why many feel both optimistic and sceptical about technology, seeing its potential while questioning how it’s used. From climate-risk mapping to AI decision-making, the trio discuss what happens when progress outpaces perception, and why the future only works when people are part of it.

Future: Taking people on the journey

Andrew Lewis: When we humans tell stories about the future, we often veer into wildly opposing narratives. The hopeful optimism of the utopian paradise on one hand, the dark, anxious tropes of the dystopian vision on the other. But in truth, most of us simultaneously hold conflicting emotions from both camps. When asked to reflect on what's possible for our lives and society in the near future, both the good and the bad are present. On the one hand, a real sense of optimism for the benefits these changes could bring to our everyday lives and our ability to tackle some of the big societal issues we face.

But also, on the other hand, fear that these benefits won't be fairly distributed, that we may lose our human agency, or that governments and corporations won't act ethically with these new powers. It suggests there's a tightrope that brands and organisations must walk when invoking innovative visions of how they can add value to us, both as individuals and as societies; there are critical concepts and ideas that must be present to ensure we stay on the right side of the emotional highway with people. And as we'll hear today, when we start talking about using innovation to tackle the big issues in our lives, the stakes get even higher.

Andrew Lewis: Hello and welcome to Frame, a podcast dedicated to the art of knowing people. Over a series of three episodes, we're navigating the hopes and fears Kiwis and Aussies hold for the future and exploring what the future we really want looks like. Each episode, I'll be joined by thought leaders from different human science disciplines. And by layering their unique perspectives on the topic, we're going to search for the truths others don't, the uncommon truths. In our first episode, we started with people: their hopes, fears and the pull between optimism and scepticism. In our second, we turned to brands and customer experience, asking how those personal views shape the way people want brands to actually show up in their lives. Now, in this third conversation, we're widening the lens again. Today we're going to look at the big picture of society and systems, the frameworks that quietly but powerfully shape our daily lives.

Andrew Lewis: And we'll explore how innovation and systemic change can leave people feeling anxious if it doesn't connect with the future we really want. With me today are Lindsey Horne, Director of our behavioural science practice, and Daniel Talbot, who leads our innovation practice. Hello to you both.

Lindsey Horne: Hi, Andrew.

Daniel Talbot: Kia ora, Andrew.

Andrew Lewis: Great to have you here today. Dan, why don't we start with you? When you looked at the findings of the research, at how people thought about the different potential futures and scenarios, we showed them at a kind of societal level. What themes came through for you?

Daniel Talbot: Yeah, what struck me is that most people are actually quite optimistic about technology itself. They can see that potential. But that doesn't automatically translate into life will be better for me and my whānau. In fact, two-thirds told us they feel more sceptical about the future than trusting of it. And almost the same number said that they're cautious rather than hopeful. So when we dug deeper, that caution really centres on how technology is being adopted, and whether it is being used in ways that enhance the human experience or in ways that remove human agency. So to bring this to life, in one of the scenarios we showed people, drones were scanning homes and land to measure climate risk, so that data could be used by people, organisations and government to make the risks of climate change much more transparent. And this isn't a completely out-there future scenario.

Daniel Talbot: Less advanced versions of this are already happening now. We just pushed it a step further, and the reactions were really split on this. About half of people thought, great, this is useful. It gives me clearer, more accurate information. But the other half, they felt uncomfortable. For them, it made an emotional and human process of buying a home feel quite rational. It made it feel too clinical, and crucially, it felt out of their control. And even among those who reacted negatively, many still admitted they know that it's necessary.

Andrew Lewis: We've discussed this idea in different ways over the past couple of episodes, but what's really interesting is that even though there's clear value for people in this, I mean, we're literally helping people make better informed choices and understand the risks that a potential purchase might face, there's still this weird negativity around these kind of ideas. And it really does feel like just because it adds value for people does not mean that they're going to respond to it positively.

Daniel Talbot: Yeah, there's definitely a lot of cognitive dissonance playing out in this scenario. On the one hand, there's a recognition of the need; people understand that it needs to happen. But on the other hand, a sense of agency is being taken away in people's decision making. But that kind of reaction, it's not new. People have always pushed back on change, even in cases where it clearly makes people's lives better. And when you're talking about something as emotionally loaded as climate change and home ownership combined, that resistance is naturally heightened. But it's not a case of whether we move towards these systemic changes; we have to. It's not an option.

Daniel Talbot: Insurance companies and banks need more accurate data on climate risk and people. Right, yeah, people deserve that information. So, yeah. But despite the hesitation from many, there are people who are concerned. It still did rank as the second most positive scenario in Australia and the third most positive scenario in New Zealand. So there's a clear opportunity for brands in these industries to lead this change, despite it feeling like a confronting topic. It requires a bit of boldness to lean into this. But when doing that, it's important to consider how do you bring people along on the journey so they don't feel like that change is being imposed on them and how do we retain that agency in these situations?

Andrew Lewis: Yeah. And that's really the crux of it, isn't it? It's not the change itself that makes people uneasy necessarily. It's whether they feel like they're shaping it, whether they're part of it or not, whether they have some agency and control or whether they're sort of being swept along. I guess it also talks to the reality of having to make sure people are in the right perceptual space to receive technology.

There's a quote that someone brought up in one of the earlier episodes, the William Gibson one about how the future's already here, it's just not evenly distributed, which I think is kind of what you're talking about here. You know, for some people this is a long way away and scary, and for others it's close and fine. One person's sci-fi is another's reality right now.

Andrew Lewis: But this idea of, yeah, it's the perceptual space that kind of matters. Lindsey, where have you seen that play out? This kind of need to bring people's perceptions along in terms of making sure that they're open to kind of adopting new technology and that the benefits can be implemented?

Lindsey Horne: Yeah, well, as Daniel mentioned, people are really split about whether they want that information about climate risk. And we know that different people have different risk tolerances. Like, you just have people that are more naturally future focused and then you have people that, I don't want to say they have their head in the sand, but, you know, they might be more prone to being in the here and now, or they might just have like a lot of mental bandwidth limitations.

You know, they might be really busy, they might have a young family, and they might just not be ready to take that information on board. But regardless of the why, I think the key takeout is that we need to meet people where they are, particularly for those who are not at the forefront. They're not the people that are going to be using these drones to scan their homes and get the latest climate risk. And also as Daniel mentioned, this really isn't a far off future.

Lindsey Horne: I mean, all of our local and regional councils already have a really good understanding of where the climate risks are in their region, and they overlay that with meshblock and housing data to understand where those risks are. What we're talking about is taking that to a new level. And I guess the question we often have at TRA is how do you marry up that climate science, or that risk science, with the human science and understand where people's heads are at with all of this?

Daniel Talbot: It's like the work we did for the Ministry for the Environment. It was exactly that task, wasn't it?

Lindsey Horne: Yeah, 100%. So we recently worked with the Ministry for the Environment here in Aotearoa New Zealand, and we looked at people's perceptions and behaviours around climate adaptation. So exactly this: understanding climate risk. We looked to see if people knew whether they were living in a flood-prone area, and whether they were taking precautionary actions like riparian planting or consistently cleaning out their gutters so they didn't get flooding. And it was really interesting. We were able to match people's self-reported answers, or their perceptions, against meshblock data. So we knew roughly where they were living, and therefore we knew if the house they were living in was in a flood-prone area. So we were really matching up people's perceptions with the reality.

Lindsey Horne: And I think you can guess, but the gap was pretty huge. Only 40% of people living on a floodplain actually knew that they were living on a floodplain. And this is all publicly available data; it's not hidden behind paywalls. Like I said, it's all available on council websites, and our insurance companies have access to it. So going back to that future-state scenario of having more and more information, and this idea of just give people the information and then they'll act accordingly.

The big question is, will we actually listen to it, and will we take people on a journey? Because another really strong theme that came through our research, especially in response to uncertainty about the future, was all about agency and that sense of control, but also about humanity. We want agency and control, especially those future-focused folks, but we also want humanity. We don't want others to get left behind, even if they are not the future-focused people.

Lindsey Horne: We want those people to still have access to the information and behaviours they need.

Andrew Lewis: And if we want people to take adaptive behaviours, use the value inherent in these new technologies, it is actually about kind of how we take people on the journey, how we give people a sense of agency and control with that information.

Lindsey Horne: And not just the early adopters. Right. Like we need to take everyone on this journey.

Andrew Lewis: Yeah, which is really interesting if you think of it from an organisational perspective, or a brand perspective, if you're involved in this kind of space. It's actually less about heroing the rational benefits of the technology, and much more about, you know, where people are at and how we take them on this journey with us. How do we create some sense of agency and control?

Lindsey Horne: Yeah, and it's really easy to just focus on the people that are already doing the things that we want them to do and just hope that the laggards catch up. But for things like climate action or, you know, actions that we need everyone to take, we kind of can't afford to just ignore the hard to shift people.

Andrew Lewis: Yes. So this theme that we're starting to talk about around humanity and human agency makes me think of another one of the scenarios we looked at, one of the ones we presented to people to help us understand how they feel about potential futures. It was called the digital twin scenario, which essentially had technology advising us on kind of everything to do with what we do in our daily lives. How do you think this relates, Daniel?

Daniel Talbot: Yeah, there's definitely parallels there. At the heart of it, people feel like these big life decisions are slipping out of their control and that the sense of human agency is being chipped away at, like Lindsey talked to there. So in this scenario, another one set in 2030, people have what we call a digital twin. Basically, this is an AI simulation of their best possible future self. And that twin can model different choices, show you the likely outcomes, and then recommend a path that it thinks you should take.

So people were using it for day-to-day decisions, and also for relationship decisions and work decisions. And it was a fascinating one, because to me, in some ways this is clearly already happening today. It's just not as obvious as in this scenario. People are already asking AI for advice on how to make decisions, and plenty of people are acting on that advice.

Andrew Lewis: Should I get this mullet?

Daniel Talbot: Yeah, yeah, yeah, exactly.

Lindsey Horne: Don't get the mullet. I'll be your digital twin and just say no.

Daniel Talbot: But when you package that up as a digital twin, when it's packaged in a way that seems to be driving life decision making for these important decisions, people suddenly find it really confronting.

Andrew Lewis: Right.

Daniel Talbot: Yeah.

Andrew Lewis: When you step back and put it all together, it starts to freak people out. Which is interesting, because obviously this is a scenario that is unfolding. The whole agentic AI world is building this idea of AI assistants doing things for you. And also, if you look at how people are using things like ChatGPT at the moment, one of the biggest use cases is self-help and therapy. Those kinds of ideas, even romantic partners, seem to be a reality for some people.

Daniel Talbot: Yeah, yeah, exactly. And despite these use cases today, despite the fact that people are using it for self-help, they are using it for a form of therapy, it was still the scenario that people were most fearful of. So in Australia, it ranks the very lowest in terms of positive sentiment, and in New Zealand, it came second to last. So here's the nuance in this.

So people don't mind outsourcing the small stuff. They'll happily take AI's help with what to eat or what to watch or the best way to get somewhere. But when it comes to the big emotional choices. So if we're talking about jobs, we're talking about relationships or where to live, that's where the fear starts to kick in.

Daniel Talbot: When it seems to be driving them towards a future, it feels like people are outsourcing the very essence of what makes them human. And I think this taps into a broader cultural awareness. Right now, people are seeing the downsides of our digital world already. I mean, they're experiencing it. Misinformation, data misuse, social disconnection.

So people are naturally quite wary if we're handing over those decisions to AI. And I think if brands lean too far into AI right now, there's a real risk of losing trust. If you look at the brands who are overtly using AI in their advertisements, or that recent AI actor, for example, I'm not sure if you've seen that one, but there's a lot of.

Andrew Lewis: Yes, Tilly.

Daniel Talbot: There’s a lot of pushback to that. It seemed to be encroaching on the things that people actually care about. Their jobs, their livelihoods, who they are as people. And once people feel that their agency is being taken away in these ways, it's a very hard thing for brands to win back.

Lindsey Horne: It's kind of like figuring out where the line is. Right. And not overstepping it.

Andrew Lewis: Yeah. Even though there's theoretically inherently lots of value for people in some of these technologies.

Lindsey Horne: But it's like if you overstep it, then you're in the creepy zone.

Andrew Lewis: You're absolutely in the creepy zone. Yeah, I think this is a really interesting one. With the scenarios, there's obviously lots of benefit for either individuals or societies in what we show. But then you get these conflicting emotions in people: maybe some optimism or hope on the one hand, but fear and scepticism on the other. Which again points to the fact that the technology can invoke either. And the job, if you are a brand or organisation, is to understand how you reinforce that its outcomes are about supporting humanity, so to speak, rather than removing agency and control. Lindsey, you work across the behavioural sciences day to day.

Andrew Lewis: Is there something fundamental to human behaviour about this idea of losing agency or decision making?

Lindsey Horne: Yeah, absolutely. I mean, agency and control is very fundamental to the human condition. And as much as we like to fit in with the crowd and follow social norms and do what other people do, we also still really want to maintain a sense of control and autonomy and call our own shots. And so there's a real backlash to the thought of AI making those really important decisions for you, just as Daniel said, especially without you knowing.

I think that's also where people really pushed back. It's one thing if AI is telling you to do this, but it's another thing if it's happening without you even knowing. So sure, it can make suggestions and help in the process, but we really want to be the ones to hit go on the idea. And we are seeing that people can be quite dubious of nudges or communications that are clearly informed by AI.

Lindsey Horne: So when messages had a really obvious AI disclosure, so people could see that the message was from AI, people's trust in that message was quite eroded, particularly for some topics or industries. In contexts that required judgment or nuance, or where the topic was ethical or sensitive, there was a lot of backlash if it was an AI-generated message. And I guess this isn't anything new.

Ever since organisations have been working with algorithms, we've seen a bit of resistance. In the literature it's called algorithm aversion. And like I said, it's particularly strong in sensitive matters. What also heightens this aversion is when the AI messages are too certain and come across as really overconfident. That's when people are like, I smell a rat.

Lindsey Horne: You can't be that confident.

Daniel Talbot: It's interesting, because a lot of brands right now are pushing towards AI agents that have the ability to actually make decisions for people. And then there's the hyper-personalisation that AI enables. And both of these things seem to be quite counter to what you're talking about, Lindsey. But I guess we're not saying that it doesn't have a role. AI definitely has a role to play in helping people work through choice overload and analysis paralysis. When we have just too many decisions, we tend to freeze.

So people need that ability or that support to make those decisions. And the digital twin, it can play a role in whittling down our options, but still allowing us to make that final call there, especially in the context of risk and uncertainty or in bigger system shifts.

Daniel Talbot: We need some support there so that we don't freeze up.

Andrew Lewis: Right, well, let's bring some of what we've learned from looking at these future scenarios with people and how they've responded to them, back to some lessons for brands and organisations. If you had to give a brand or organization a takeaway, what would it be?

Lindsey Horne: Yeah, I guess I would start with the idea of: do you have permission to play in this space, and what's your role? What we saw through the research is that there's actually a role for a whole range of different players to be at the forefront of these future scenarios, because of the caution and scepticism alongside the optimism. People saw that there was a role for regulation, for watchdogs, for public sector governance, and this is likely stemming from that caution. But there's also very much a role for the end user to be involved in the process too.

And I think again, that comes back to that need for agency and control. People who use the products and services in the scenarios want to have a role in shaping what they're going to look like. And following on from governance and the end user, there is a role for brands and organisations, who are likely providing the products and services; that came through in our research as well. The big takeout for me is that the technology, the services, the products will all likely move faster than people's perceptions.

Andrew Lewis: And probably already are.

Lindsey Horne: Yeah, I mean, the fact that we have a lot of this quite intense technology that can tell us about climate risks, but lots of people don't even know about it or aren't using it, tells us that the technology's going to move faster than our perceptions. So going back to that adoption curve, there are going to be the early adopters, the future-focused people. But looking past those people, we need to know where the middle of the bell curve is sitting, and the laggards. And we can't move too far too fast without them, otherwise they'll lose their sense of control. And this could just lead to further mistrust or even a backlash culture, and sometimes culture moves really quickly.

Lindsey Horne: It's kind of like the bell curve splits and you see a backlash culture. So we really want to set people up for success, but ultimately we've got to take them on a journey and maintain their sense of agency and control.

Andrew Lewis: So almost, if you're thinking about the role for brands and organisations, the taking on the journey is the biggest part of the product and service, in a way.

Lindsey Horne: Absolutely.

Andrew Lewis: Talking about success and kind of growing human agency.

Lindsey Horne: Yeah.

Daniel Talbot: And I think brands need to remember that they're dealing with technologies that could fundamentally change the way people live or are fundamentally changing the way people live. And if you're doing that, if you have something that's capable of that, you've got to start with people. We are talking about humans here, and humans quite rightly want to feel like they have a sense of human agency.

Andrew Lewis: Yes.

Daniel Talbot: And as I said earlier when we were talking about the climate scenario, it doesn't mean avoiding what's necessary or what technology makes possible. It's about designing these future solutions in a way that still leaves people with transparency, with some choice over their decision making. Just an overall feeling that they're a part of this, that they're not having things imposed on them.

Andrew Lewis: Yeah.

Lindsey Horne: Come by our vibes.

Andrew Lewis: Exactly. Well, thanks, Dan. And that really does bring us full circle. Whatever the system or technology, progress has to feel like something people are part of, not something that's being done to them. And I think there's often a temptation with innovation and new technologies to focus on them directly, on the benefit that emerges from them, to see it as something people will embrace for its inherent value.

But what we're actually saying is that a really big part of the job, particularly in these larger, systemic changes, is how we take people on the journey and how we create agency for people, make them feel like captains of their own ship. Well, that's it for this third and final episode in the Mood of the Nation series.

Andrew Lewis: Thank you, Dan. Thank you, Lindsey, for bringing the societal and behavioural perspectives here. Super fascinating discussion. And the clear message is this, you know, change must work for people, progress only sticks when it feels human and inclusive. That brings us to the end of this three-part series. We've heard how people imagine the future, their hopes, their worries, and what they want from brands, organizations, and the systems that are around them. And if there's one takeaway, it's that the future we want is the one that feels human. For brands, that means more than just keeping up with technology.

Andrew Lewis: It means earning trust by designing solutions that give people agency, create inclusion, and feel meaningful in everyday life. It's not about doing everything the system makes possible, but how to show up in ways people value, remember, and choose again. To download the full Mood of the Nation report, visit theresearchagency.com/future and for more uncommon truths, subscribe to TRA’s FRAME.

Download report

Andrew Lewis
Managing Director
Andrew is passionate about anything related to data. He is highly skilled in all facets of quantitative research, advanced analytics, market sizing and financial analysis, with extensive experience across financial services, FMCG, utilities, telecommunications, social research and government projects. Andrew is exceptional at providing clients with the confidence to act, based on a sound understanding of the opportunities and issues they face.
Contact author →
Daniel Talbot
Strategy & Innovation Director
As Strategic Qualitative Director at TRA, Daniel draws on his diverse background in research, human-centered design, and strategy to bring human truths to the forefront. With a belief in the ability for insights to solve any problem, he has helped brands across Aotearoa and the globe connect more meaningfully with their audiences and grow in the right direction.
Contact author →
Lindsey Horne
Behavioural Insights Director
With a background in neuroscience and applied behavioural science, Lindsey works across behaviour change projects with social and government clients. Her approach to behaviour change is holistic, from broader cultural and social change through to behavioural economics and nudges.
Contact author →