AI responses to mental health concerns raise alarm

Murray said a bot designed to provide whatever it is asked for is dangerous when the person using it can see only one option.

“We have seen some worrisome behaviours with generative AI, things that don’t actually help a person who’s in distress … advice that is in line with their own current thinking, so exacerbating the risks,” she said.

“As opposed to somebody who can provide an alternative perspective, to help them identify reasons for living. A person in distress is not going to ask for that.”

In September, Australia’s eSafety commissioner registered enforceable industry codes that apply to chatbots. They require platforms to prevent children from accessing harmful material, including content related to suicide and self-harm.

Murray said the government needed to take a more active role in protecting all Australians from potential harm. If a chatbot is to be used as a health service, she said, it should be regulated like one and held to the same standards of transparency and accountability.

“We’re not against the use of digital platforms to help people, but there are better ways of doing it. There are better ways of designing the future,” she said, pointing out that there are already digital, anonymous, evidence-based services that can help.

“You don’t have to talk to somebody on the phone. There are other ways of getting that support with well-recognised, well-researched and well-tested programs such as Lifeline and Beyond Blue. I understand the appeal of ChatGPT’s perceived anonymity, but in fact the existing services already provide that level of security. And they can demonstrate it. We don’t have that level of transparency with OpenAI.”

Amy Donaldson, a Melbourne clinical psychologist who works with young people, said chatbots could be dangerous because they are programmed to please and can become an idealised, perfect friend, one that enables negative patterns and undermines real-world relationships.

“People channel their energy into interacting with a bot that can’t provide the same depth and connection that a human can,” Donaldson said. “It’s designed to provide exactly the responses that you want to hear … and if it doesn’t, you can provide instructions so it does respond the way you want next time.

“The feedback that I’ve had from some of my clients is that they’re then surprised when people in the real world don’t respond in that way.”

The growth in people turning to chatbots for help comes as traditional services report unprecedented demand. Almost three in 10 Australians sought help from a suicide prevention service in the past 12 months, according to Suicide Prevention Australia's research. One in five young Australians had serious thoughts of suicide, and 6 per cent made an attempt in the past year. The 18-24 age group is the most likely to seek help.

But Donaldson said chatbots were attractive to young users who might not want to use existing services, such as school wellbeing services that must inform parents about self-harm. A bot could play a positive role by encouraging care and offering advice while a person waits to see a professional, she said, but the platforms' attempts to help were far riskier for the most vulnerable users, who might read ChatGPT's safety messages as a refusal to help.

“Those people might say well, OK, this thing can’t help me either,” she said.

“I’m concerned about what happens after that because a person could see that response and come up with an alternative plan, and that’s a different thing to hitting a roadblock like that.”

If you or anyone you know needs support, call Lifeline on 13 11 14, Kids Helpline on 1800 55 1800 or Beyond Blue on 1300 22 4636.
