Government announces new restrictions on deepfake AI abuse



This article contains references to tech-facilitated child sexual abuse.

The federal government will strengthen Australia’s online safety laws to restrict access to deepfake tools, targeting the growing use of artificial intelligence to create non-consensual sexual material.

Communications Minister Anika Wells said in a statement on Tuesday that the problem of deepfake abuse is “too important for us not to act”.

“There is a place for AI and legitimate tracking technology in Australia, but there is no place for apps and technologies that are used solely to abuse, humiliate, and harm people, especially our children,” she said.

The announcement follows an AI roundtable at Parliament House, which heard law enforcement agencies are “overwhelmed” by the volume of child sex abuse material online, with AI now driving a significant share of the problem.

Independent MP and roundtable co-convener Kate Chaney said: “AI is changing child sexual abuse in Australia, it’s making it easier to generate this material, but it can also be used to help law enforcement.”


Wells said the Albanese government “will use every lever at their disposal to restrict access to nudification and undetectable online stalking apps and keep Australians safer from the serious harms they cause”.

The government did not specify a timeframe for when the restrictions will take effect, but has committed to working with the industry to enforce them.

The rise of ‘nudify’ apps

“With just one photo, these apps can nudify the image with the power of AI in seconds. Alarmingly, we have seen these apps used to humiliate, bully, and sexually extort children in the school yard and beyond,” eSafety Commissioner Julie Inman Grant said earlier this year.

There are laws at both state and federal level that prohibit the distribution of non-consensual and sexually explicit material.

In July, Chaney introduced an amendment to the Criminal Code to include deepfake sexual material, with breaches carrying up to 15 years in jail. She proposed making it illegal to download tools designed to generate deepfake sexual material, and illegal to download an image for training “nudify” apps. The bill is currently before parliament.

“What happens is offenders are downloading these tools and training them using images of children and then deleting the images so they then don’t possess those images, which is the current offence, and then they can be re-generated at any time on demand, even using children’s names if they’ve been trained like that,” she said.

“Not only does it normalise and desensitise this behaviour, but also makes it hard for law enforcement to identify actual victims; they’re spending a lot of time trying to distinguish between synthetic material and actual children.”

Working alongside tech platforms

Wells, however, acknowledged that Tuesday’s announcement alone would not be enough to solve the problem of deepfake abuse.

“While this move won’t eliminate the problem of abusive technology in one fell swoop, alongside existing laws and our world-leading online safety reforms, it will make a real difference in protecting Australians,” she said.

DIGI, the association advocating for the digital industry in Australia, said it welcomes strong action from the government against nudification apps.

Jennifer Duxbury, director of regulatory affairs, policy and research at DIGI, said: “We support the ecosystem approaches to tackling harm, and look forward to working constructively with the government on the details of the proposal.”

Meta, which owns Facebook and Instagram, said deepfake abuse material violates its rules and that it responds proactively to reports of harm.

The problem isn’t just local. In June, Meta filed a lawsuit against Hong Kong company CrushAI, a platform capable of creating sexually explicit deepfakes, after the company ran more than 87,000 ads across Meta’s platforms, according to the complaint filed in the Hong Kong district court.

Meta alleged the ads violated its rules against non-consensual intimate imagery and said it actively removes ads, blocks links to websites hosting the apps, and restricts search terms such as “nudify”, “undress” and “delete clothing” on Facebook and Instagram. It also said it would share signals about ‘nudify’ apps with other tech companies, allowing them to address the issue on their own platforms.

Readers seeking support can contact Lifeline crisis support on 13 11 14 or text 0477 13 11 14, the Suicide Call Back Service on 1300 659 467, or Kids Helpline on 1800 55 1800 (for young people aged 5 to 25). More information is available at beyondblue.org.au and lifeline.org.au.

Anyone seeking information or support relating to sexual abuse can contact Bravehearts on 1800 272 831 or Blue Knot on 1300 657 380.
