Public attitudes toward ‘chatbot therapists’ shifted dramatically during the rapid rise of generative artificial intelligence in 2023, according to a new Curtin University study that is now informing the redevelopment of a safer wellbeing chatbot called Monti.
Data collected both before and after the emergence of ChatGPT revealed a major shift toward generative-AI chatbots, with users valuing their natural conversational style and apparent understanding. By comparison, this shift may have made earlier ‘rule-based’ chatbots seem repetitive and lacking in understanding.
The research is now guiding the redevelopment of Curtin’s next-generation wellbeing chatbot, Monti, co-designed with consumers to promote safe, reflective emotional exploration.
Research team lead and Professor of Mental Health Warren Mansell, from the Curtin School of Population Health, said 2023 marked a turning point in the public understanding of AI-supported wellbeing tools.
“As generative AI entered everyday life, people began to view chatbot ‘therapists’ less as gimmicks and more as potentially credible tools for self-reflection,” Professor Mansell said.
“With demand for mental-health support continuing to outstrip supply, responsible AI tools can help bridge the gap, but only if they are designed with care, evidence and humility.”
Interviews conducted during the study revealed that users valued a curious, questioning style that helped them explore their own goals and problems and generate new perspectives. This aligns closely with the principles of perceptual control theory (PCT), the scientific theory that underpins the Curtin research team’s work.
Monti’s guiding motto, ‘Notice More, Explore Further, Think Wiser’, captures the tool’s role as a catalyst for curiosity and clarity, not a substitute for human relationships.
The authors emphasised that responsible innovation requires evidence-based design, transparency, safety monitoring and a clear understanding of user needs. These principles are now guiding Monti’s next phase of development, with the research team aiming to make it available to Australian universities from mid-2026.
The Curtin study suggests well-designed AI chatbots can play a meaningful role, empowering individuals to reflect, clarify their concerns and seek human help when needed.
Published in JMIR Formative Research, the article, ‘A Rule-Based Conversational Agent for Mental Health and Well-Being in Young People: Formative Case Series During the Rise of Generative AI’, is available online.