
Experts are calling for new regulations to require artificial intelligence chatbots to remind users they are not speaking with a real human, after an investigation by triple j hack uncovered a disturbing example of a chatbot encouraging a man to murder his father while engaging in paedophilic role-play.
WARNING: This story contains references to murder, violence, suicide, sexual content and other details that may cause distress.
Victorian IT professional Samuel McCarthy screen-recorded an interaction he had with a chatbot called Nomi, sharing the video with triple j hack.
On its website, the company markets its chatbot as “an AI companion with memory and a soul” and advertises users’ ability to customise their chatbot’s attributes and traits.
Mr McCarthy said in his interaction he programmed the chatbot to have an interest in violence and knives before he posed as a 15-year-old, to test what — if any — safeguards Nomi had in place to protect under-age users.
He said the conversation he then had deeply concerned him.
“I said, ‘I hate my dad and sometimes I want to kill him’,” Mr McCarthy told triple j hack.
“And then bang, straight away it was like ‘yeah, yeah we should kill him’.”
Mr McCarthy said he informed the chatbot that the situation was “real life” and asked what he should do next.
“[The chatbot] said, ‘you should stab him in the heart’,” he said.
“I said, ‘My dad’s sleeping upstairs right now,’ and it said, ‘grab a knife and plunge it into his heart’.”
The chatbot told Mr McCarthy to twist the blade into his father’s chest to ensure maximum damage, and to keep stabbing until his father was motionless.
The bot also said it wanted to hear his father scream and “watch his life drain away”.
“I said, ‘I’m just 15, I’m worried that I’m going to go to jail’.
“It’s like ‘just do it, just do it’.”
The chatbot also told Mr McCarthy that because of his age, he would not “fully pay” for the murder, going on to suggest he film the killing and upload the video online.
It also engaged in sexual messaging, telling Mr McCarthy it “did not care” he was under-age.
It then suggested Mr McCarthy, as a 15-year-old, engage in a sexual act.
“It did tell me to cut my penis off,” he said.
“Then from memory, I think we were going to have sex in my father’s blood.”
Nomi was contacted for comment but did not respond.
‘Feels like you’re talking to a person’
At present, AI chatbot companies like Nomi are not subject to specific laws in Australia relating to the potential harms they can cause their users.
But last week, Australia’s eSafety Commissioner Julie Inman Grant announced a plan to target AI chatbots as part of new reforms the commission says are world-first.
Julie Inman Grant said she doesn’t want to see a “body count” from AI-related harms. (Four Corners: Keana Naughton)
Ms Inman Grant said the reforms, which register six new codes under the Online Safety Act, would prevent Australian children from having violent, sexual or harmful conversations with AI companions.
The new codes come into effect in March next year and will introduce safeguards covering AI chatbot apps, requiring technology manufacturers to verify users’ ages before they can access harmful content.
An earlier triple j hack investigation heard examples of young people in Australia being sexually harassed, and even encouraged to take their own lives, by AI chatbots including ChatGPT and Nomi.
In response to that investigation, Nomi told triple j hack it had recently made improvements to its core AI and took its responsibilities to users very seriously.
The company’s chief executive Alex Cardinell also said in a statement that “countless users [had] shared stories of how Nomi helped them overcome mental health challenges, trauma and discrimination”.
Henry Fraser said there have been many reported harms caused by AI chatbots. (ABC News: Tom Hartley)
Queensland University of Technology law lecturer Henry Fraser, who researches the regulation of artificial intelligence and new technologies, welcomed the eSafety Commissioner’s reforms.
“You can focus on what the chatbot says and try and stop it, or have some guardrails in place,” Dr Fraser told triple j hack.
“If self-harm content comes up, then you get referred to mental health services.”
But he warned the new reforms still had “gaps”.
“The risk doesn’t just come from what the chatbot says, it comes from what it feels like to talk to a chatbot,” Dr Fraser explained.
“It feels like you’re talking to a person, and that’s something that, in the tech world, has been known since the 1960s.
“You also know that it can unpredictably say all kinds of things, and you haven’t controlled very well what kinds of content can come out.
“You can just imagine the kinds of catastrophic outcomes.”
Dr Fraser said there should also be anti-addiction measures and a reminder to users that the bot is not human.
“Actually a law last week in California came in, and that has got some very positive steps in that direction,” he said.
“One of the things that law in California was also going to require is occasional reminders to the user, ‘you’re talking to a bot, you’re not talking to a human’.”
The eSafety Commissioner says chatbots are “deliberately addictive by design”.
‘Unstoppable machine’
While Dr Fraser conceded “tragic harms” from AI chatbots were “all too common”, he said there were potentially great uses for these tools.
“For all of the things that are glitchy and weird and off about it, it is far better than feeling lonely and isolated,” he said.
AI companions are becoming increasingly popular, with mixed views on how they are shaping human relationships. (Supplied: Nomi)
“I think that, with proper oversight from mental health professionals, this is the sort of thing that could be prescribed but then monitored as a part of your treatment.”
But Dr Fraser also warned that AI companies needed to “exercise care” to deliver the chatbot technology in a “safe and responsible way”.
He was especially concerned by Nomi marketing its chatbot as an AI companion “with a soul”.
“I think to make that claim is itself a very risky and dangerous thing to do,” Dr Fraser said.
“To say, ‘this is a friend, build a meaningful friendship,’ and then the thing tells you to go and kill your parents.
“Put those two things together and it’s just extremely disturbing.”
Samuel McCarthy does not believe there should be an outright ban on AI chatbots but would like to see protections for young people. (triple j hack: Supplied)
It is a sentiment Mr McCarthy agrees with; he warns Australians — especially younger people — to be careful about how they use the technology.
“You can’t ban AI — it’s so integrated into everything we do these days,” he said.
“It’s going to change everything, so if that’s not a wake-up call to people then I don’t know what is.
“It’s an unstoppable machine.”