
How much can we trust AI? Podcast insights


We like to believe our decisions are our own – shaped by our values, interests and lived experience. But artificial intelligence is beginning to influence many of the choices we think we make independently.

In The Future Of Human–AI Decision-Making, we were joined by Professor Billy Sung to explore how this shift to AI decision-making is unfolding in practice – how much we should trust it and why being human still matters.

Below is just a selection of insights from the discussion. You can listen to the full episode, The Future Of Human–AI Decision-Making, on Apple Podcasts, Spotify and more. 

🎧 Insight one: Predictive AI already shapes decisions – often without people realising  

When people talk about artificial intelligence today, they’re often referring to tools like ChatGPT. But that framing misses a much bigger picture. 

Q. Billy, what do we mean when we’re talking about AI? 

Billy: “Artificial intelligence, or AI, is actually everywhere. But the rise of tools like ChatGPT – currently the largest consumer-facing generative AI platform – has led to a widespread generalisation of what AI is and how it works. 

“For many people, AI has become shorthand for generative AI. In reality, predictive AI has been embedded across society for years, well before we used generative tools like ChatGPT, Claude, Bard or Gemini. 

“So, for instance, Google Maps uses AI. When you enter a destination, the system draws on traffic data, real-time conditions and historical patterns to predict the fastest route. That process – using data to predict an outcome – is artificial intelligence at work. 

“At its core, AI is not ‘intelligence’; it’s a system designed to use data to better predict a particular outcome or goal. 

“There are different types of AI that serve different purposes. Predictive AI forecasts outcomes, such as routes, recommendations or demand. Generative AI produces new content, whether that’s text, images or audio. 

“Beyond these visible tools, much of AI actually operates behind the scenes. Recommendation systems, such as those used by streaming platforms, are another long-standing example of artificial intelligence shaping everyday experiences. 

“In general, AI is influencing decisions everywhere. It’s really about prediction: anticipating outcomes and guiding decisions toward a goal.” 
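
To make the idea of prediction from data concrete, here is a toy sketch in Python – the routes and numbers are invented, and a real navigation system folds in live traffic and far richer models than a simple average:

```python
# Toy illustration of predictive AI: "predict" the faster of two routes
# from historical travel times, grouped by hour of day. Purely
# illustrative -- not how Google Maps actually works.
from collections import defaultdict
from statistics import mean

history = defaultdict(list)  # (route, hour_of_day) -> observed minutes

def record_trip(route, hour, minutes):
    """Store one observed trip so later predictions can draw on it."""
    history[(route, hour)].append(minutes)

def predict_minutes(route, hour):
    """Predict travel time as the average of past trips at that hour."""
    past = history[(route, hour)]
    return mean(past) if past else float("inf")

record_trip("via highway", 8, 42)
record_trip("via highway", 8, 47)
record_trip("via backroads", 8, 38)

best = min(["via highway", "via backroads"], key=lambda r: predict_minutes(r, 8))
print(best)  # via backroads -- data-driven prediction, not 'intelligence'
```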

This shift in visibility matters – because once AI moves from the background to centre stage, expectations around trust and responsibility change. 

Google Maps is an example of AI at work, drawing on traffic data, real-time conditions and historical patterns to predict the fastest route. Image: Adobe Stock

🎧 Insight two: We can (sometimes) trust (some of) AI’s decisions 

Trust isn’t simple. With AI it depends on what the system is being asked to do, where it’s deployed and how much data it has to learn from. 

Q. Can we trust AI? 

Billy: “Whether we can really trust AI is a multi-billion-dollar question. And the answer depends on what kind of task the system is being asked to perform.  

“Many of today’s AI systems – particularly recommendation engines – are highly developed. Platforms like Netflix, search engines like Google, and e-commerce sites like Amazon rely on models trained on vast amounts of behavioural data to predict what users are most likely to watch, click or buy next. 

“In marketing and consumer psychology, it’s well established that people’s choices can be predicted to a certain extent – not 100 per cent. AI systems can identify patterns that suggest which product, brand or option a person is more likely to choose based on past behaviour and the behaviour of similar users. 

“In these consumer contexts, AI is doing what it does best: using existing data to predict a likely outcome. 
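
A stripped-down sketch of that “behaviour of similar users” idea looks something like the following – the users, items and scoring are invented for illustration, and real recommendation engines are vastly more sophisticated:

```python
# Minimal user-similarity recommender: suggest an item a user hasn't
# bought, weighted by how much their history overlaps with other users'.
purchases = {
    "ana":   {"kettle", "teapot", "mugs"},
    "ben":   {"kettle", "teapot", "toaster"},
    "carol": {"blender", "toaster"},
}

def recommend(user):
    mine = purchases[user]
    scores = {}
    for other, theirs in purchases.items():
        if other == user:
            continue
        # Jaccard similarity: shared items / all items across the pair.
        overlap = len(mine & theirs) / len(mine | theirs)
        for item in theirs - mine:
            scores[item] = scores.get(item, 0.0) + overlap
    return max(scores, key=scores.get) if scores else None

print(recommend("ana"))  # toaster: ben is the most similar user and owns one
```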

“Problems arise when AI is asked to predict outcomes that are fundamentally unpredictable. 

“A lottery is a useful example. Even if an AI system were allowed to generate lottery numbers, the output would still be meaningless – because the numbers are random. In those cases, trust is misplaced because prediction is impossible. 
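
A quick simulation shows why. Assuming, for simplicity, a fair draw of one number from 1 to 45, a “model” trained on past draws performs no better than blind guessing:

```python
# Demonstration that history can't predict randomness: guess the
# historically most frequent number, then test it against fresh draws.
import random

random.seed(0)
past_draws = [random.randint(1, 45) for _ in range(10_000)]

# The "AI": always predict the most common number seen so far.
prediction = max(set(past_draws), key=past_draws.count)

future_draws = [random.randint(1, 45) for _ in range(10_000)]
hit_rate = sum(d == prediction for d in future_draws) / len(future_draws)
print(hit_rate)  # ~0.022, i.e. about 1-in-45 -- exactly chance level
```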

“So, whether we can trust AI’s decisions and predictions comes down to the model, the data it has access to, and the environment it operates in. 

“Without sufficient context, AI doesn’t fail dramatically. It fails quietly – by making plausible but suboptimal recommendations. 

“From a practical standpoint, current AI systems are best understood as partial contributors, not decision-makers. They can often deliver 50 to 60 per cent of what a person is looking for – surfacing options, narrowing choices, and processing information at scale. 

“But a human still needs to remain in the loop, crafting prompts, interpreting outputs and applying judgement.” 
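
That division of labour can be sketched in a few lines – the scoring dictionary below stands in for any predictive model, and the interactive prompt stands in for human judgement:

```python
# Sketch of the "partial contributor" framing: the AI ranks and narrows
# a large option set; the human makes the final call.
def ai_shortlist(options, score, k=3):
    """AI contribution: rank options by a model's score, keep the top k."""
    return sorted(options, key=score, reverse=True)[:k]

def human_decide(shortlist):
    """Human contribution: judgement applied to the narrowed set."""
    for i, option in enumerate(shortlist, 1):
        print(f"{i}. {option}")
    return shortlist[int(input("Your choice: ")) - 1]

# Invented relevance scores standing in for a trained model's output.
relevance = {"plan A": 0.4, "plan B": 0.9, "plan C": 0.7, "plan D": 0.2, "plan E": 0.8}
final = human_decide(ai_shortlist(list(relevance), relevance.get))
print("Decided:", final)
```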

In the full episode, we explore the skills needed to thrive in this new decision-making environment and the emerging context–privacy paradox. Don’t miss out on the insights. 

🎧 Insight three: Trust in AI will grow where platforms are reliable, transparent and fair  

As AI systems become more embedded in everyday decision-making, will users, industries and institutions come to trust AI more or less? 

Q. What do you think will determine whether trust in generative AI rises or falls in the future? 

Billy: “This is now a rapidly growing field of study – academic research into AI trust has expanded significantly in recent years.

“Across the literature, the same framework appears again and again: reliability, transparency and fairness. 

“Reliability is the most basic requirement for trust. At a technical level, this refers to the accuracy and precision of an AI system’s predictions. Can it consistently produce outcomes that align with real-world behaviour? 

“People also want to understand how an AI system arrived at a particular recommendation or output. 

“This is where transparency – often called explainability in the technical world – becomes critical. Explainability refers to whether an AI system can communicate the reasoning behind its outputs in a way that humans can understand. 

“Research published in leading academic journals actually shows that when systems provide clear explanations for why a recommendation was made, user acceptance can increase by 40 to 50 per cent. In other words, people are far more willing to trust AI when they can see the logic behind it. 
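
In code terms, explainability can be as simple as returning a reason alongside the recommendation rather than a bare answer. A minimal sketch, with invented titles and tags:

```python
# Minimal sketch of an explainable recommendation: pick the unwatched
# title sharing the most tags with the viewing history, and say why.
catalogue = {
    "Chef's Table": {"food", "documentary"},
    "Planet Earth": {"nature", "documentary"},
    "Street Food":  {"food", "documentary", "travel"},
}

def recommend_with_reason(watched):
    user_tags = set().union(*(catalogue[t] for t in watched))
    overlaps = {t: catalogue[t] & user_tags
                for t in catalogue if t not in watched}
    best = max(overlaps, key=lambda t: len(overlaps[t]))
    reason = f"because you watch {' and '.join(sorted(overlaps[best]))} titles"
    return best, reason

item, reason = recommend_with_reason({"Chef's Table"})
print(item, "-", reason)  # Street Food - because you watch documentary and food titles
```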

“The third pillar of trust is fairness – not just in terms of access to AI, but in how decisions are shaped behind the scenes. 

“Fairness raises ethical questions about whose interests an AI system ultimately serves. This becomes particularly important as advertising and commercial incentives increasingly intersect with generative AI platforms.” 

Is it possible to still trust conversational AI to be fair when responses contain advertising? In the full conversation, we explore this in detail. 

🎧 Insight four: The coming shift is “shared agency” – co-created human–AI decision-making 

As AI systems move beyond isolated tools and into everyday workflows, the future of decision-making is less about automation and more about how humans and machines work together. 

Q. What do you think is the most likely future for human-machine decision-making? 

Billy: “The most likely future of human–AI decision-making isn’t full automation – and it isn’t humans handing over control. Instead, it’s what researchers describe as shared agency: a co-created decision-making process where humans and AI each play distinct roles. 

“We already share decisions with AI – through search engines, recommendation platforms, navigation tools and conversational AI. What’s changing is not whether AI is involved, but how deeply it becomes embedded across the decision journey. 

“Rather than acting as a decision-maker, AI increasingly functions as a decision assistant – narrowing options, surfacing patterns, and reducing cognitive load – while humans retain responsibility for the final choice. 

“Consider a near-future version of a familiar decision: buying a car. Before visiting a dealership, a buyer might consult AI to clarify their needs – price range, vehicle type or key features – and quickly narrow the field. AI doesn’t make the decision, but it shapes the consideration set by filtering options, comparing models and summarising large volumes of review data.  
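
As a sketch, that “consideration set” step is just filtering and ranking. The inventory and fields below are invented, and the final decision stays with the buyer:

```python
# Sketch of AI shaping a consideration set: filter an inventory by the
# buyer's stated constraints, then rank what survives by review rating.
cars = [
    {"model": "hatch X", "price": 24_000, "type": "hatchback", "rating": 4.1},
    {"model": "suv Y",   "price": 41_000, "type": "suv",       "rating": 4.6},
    {"model": "suv Z",   "price": 36_000, "type": "suv",       "rating": 4.4},
    {"model": "sedan W", "price": 29_000, "type": "sedan",     "rating": 3.9},
]

needs = {"max_price": 40_000, "type": "suv"}

consideration_set = sorted(
    (c for c in cars if c["price"] <= needs["max_price"] and c["type"] == needs["type"]),
    key=lambda c: c["rating"],
    reverse=True,
)
for car in consideration_set:
    print(car["model"], car["price"], car["rating"])
# Only "suv Z" survives the filters; the buyer still test-drives and decides.
```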

“The appeal of shared agency is efficiency. AI excels at processing scale: hundreds of documents, thousands of reviews, years of behavioural data.  

“Over the next five to six years, this pattern is expected to expand across everyday decisions. 

“The critical distinction is that shared agency preserves human accountability.

“For the immediate future, important decisions will remain human – even when informed by machines.”

Billy doesn’t see AI taking over decisions in the near future, but it will influence them by doing what it does best: filtering options, comparing versions and summarising large volumes of data. Image: Adobe Stock

🎧 AI-generated content in practice: the AI podcast case study 

One of the clearest ways to understand both the potential and the limits of AI is to look at how it’s being used in practice. Billy’s Professor Insight Podcast is a fully AI-generated production – and a useful case study in what AI can do well, where it falls short, and why human insight and input still matter. 

Q. Can you tell us about the Professor Insight Podcast? 

Billy: “So the podcast I’ve been doing is actually a side project, and it started in a very unexpected way. 

“I was overseas on extended carer’s leave and driving between hospitals every day. As an academic, I was still supervising students and reading a lot of material, but I didn’t have time to sit down and read hundreds of pages. 

“At the time, Google had just released NotebookLM, and I started feeding documents into it and getting summaries back in a broadcast-style format. I could listen while driving, and suddenly I’d covered 500 pages of material without sitting at a desk. 

“That’s when I realised I could generate podcast-style content focused on AI, neuroscience and decision-making – and make complex research more accessible. 

“The podcast itself is fully AI-generated, but we disclose that clearly at the start of every episode. 

“In practice, generating an episode still takes two to three hours. I read the source material, decide what’s interesting, prompt the AI carefully, listen to the output and then edit it. 

“If everything is prompted well, AI can probably do about 70 per cent of the work. The remaining 30 per cent still needs human judgement. 

“If you don’t prompt it properly and just let it run, it probably does about 30 per cent of the job. 

“So, I don’t think AI will replace human-to-human podcasts any time soon. You still need a human in the loop to shape the content and make it meaningful.” 
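
The workflow Billy describes can be sketched as a pipeline with human checkpoints on either side of the AI step. Note that generate_draft below is a hypothetical stand-in, not a real NotebookLM API:

```python
# Sketch of the prompt -> generate -> review loop behind the podcast.
# generate_draft is a hypothetical placeholder for a generative AI call;
# the point is the human steps wrapped around it.
def generate_draft(source_text, prompt):
    """Hypothetical AI step: would return a draft episode script."""
    return f"[draft covering {len(source_text.split())} words of source]"

def human_review(draft):
    """The human 30 per cent: listen, fact-check, cut and reshape."""
    return draft  # in practice, the edits happen here

def produce_episode(source_text):
    # Human step 1: read the material and decide what's interesting.
    prompt = "Summarise the key findings for a general audience."
    # AI step: roughly 70 per cent of the work -- if prompted well.
    draft = generate_draft(source_text, prompt)
    # Human step 2: judgement is still the last word.
    return human_review(draft)

print(produce_episode("five hundred pages of research " * 100))
```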

Get the full story

Discover how AI is reshaping human-machine decision-making – from an expert in the field.  

Listen now