We’ve let ‘AI innovation’ become synonymous with theft


Welcome back to Neural Notes, a weekly column where I look at how AI is affecting Australia. In this edition: theft dressed up as ‘innovation’.

Tech giants are pushing for relaxed copyright rules in Australia to advance their AI ambitions, with no plan for how local creators will be paid, or even consulted, when their work is scraped into commercial models.

As business and policy leaders debate “economic productivity,” copyright remains the unresolved battleground for generative AI’s future.

The issue kicked off last week when major industry figures, including Atlassian co-founder and Tech Council of Australia (TCA) chair Scott Farquhar, publicly urged Australia to model its policy on the US "fair use" approach, which would let AI train freely on all creative content.

Farquhar argued that blocking AI from mining data under the current law “hurts a lot of investment of these companies in Australia,” and claimed that using creative work is only a problem if AI output directly copies an artist.

The issue continued bubbling this week during the economic reform roundtable in Canberra.

Changes were proposed that would give big tech companies broad access to Australian copyrighted content for AI training, with little to no detail on how that access would work, and potentially without compensation for creators.

The Productivity Commission’s interim report canvassed options to add a “fair dealing” text and data mining exception to the Copyright Act, making it easier for tech firms to use journalism, art, and research as model fodder.

While the commission cites a projected $116 billion economic boost from AI in the next decade, industry leaders warn its plan could let global platforms scrape and monetise local culture — leaving creators paid only in exposure.

Opt-out talk and productivity promises with no real protections

The TCA entered the debate with its August 2025 submission to the economic reform roundtable, arguing for copyright rules that would fuel local AI innovation. 

The submission emphasised a need to relax copyright laws, ideally allowing creators to opt in or out of having their work used in AI training.

SmartCompany sought specifics about the safeguards the TCA believes are needed for Australian writers and publishers. Many of our questions went unanswered; instead, the TCA emphasised the benefits of tech-driven promotion, new revenue streams, and international reach for artists.

“Tech companies have to date created systems that allow artists to promote, get more clicks, find new revenue streams and for Australians to find international audiences,” a TCA spokesperson said. 

“There are already tech solutions that allow creators to opt out of having their data used in training. Any relaxation of copyright laws should work on an opt-in, opt-out basis, where creatives could elect whether their work became part of AI training. Technology can be used to detect where copyrighted works are being used without consent.”

While the industry maintains that creators can already opt out and promises improved detection tools, fundamental questions remain: how any enforcement would work, who would pay for oversight, and what recourse creators would truly have. 

SmartCompany understands the TCA has not yet developed a clear approach to how this opt-out system could work.

Some movement on safeguarding copyright

Still, in recent days the economic reform roundtable has delivered a modest step forward.

Under union pressure, the TCA agreed on a payment model for creators, including journalists and academics, whose work is used for AI model training. 

Australian Council of Trade Unions (ACTU) secretary Sally McManus called it "a breakthrough" and a sign of business-union common ground. However, TCA leadership quickly clarified there is no formal agreement.

“What payment or protection looks like has not been determined or agreed,” TCA CEO Damian Kassabgi said.

Of course, promising fair payment is easier said than done. Adobe's rollout of Firefly AI in 2023 included annual "Contributor Bonus" payments for stock artists whose work was used to train its new creative models.

However, artists quickly raised concerns over unclear criteria, opaque terms, and uncertainty about future payouts. 

AI models have already been trained

Despite assertions that tech is already in place to protect against content theft, reality tells another story. At this point, a great deal of the training data for mainstream LLMs has long been collected. 

For most businesses and creators, what’s at stake now isn’t just about future scraping but the lack of control or compensation over content already harvested — and the precedent that has been set.

The Lensa AI app was flooding social media feeds a couple of years ago with hyper-stylised portraits. As it turned out, the $8 app was trained on a dataset that scraped artists' work without consent.

Meta has also admitted to scraping public Facebook accounts to train its models. A recent whistleblower leak further alleged Meta had scraped Australian news sites, a claim the social media giant dismissed as "bogus".

However, the company apparently did not entirely rule out that publications may have been used for model training.

ChatGPT-maker OpenAI has also come under fire. It was sued by the New York Times in 2023 for allegedly training its models on the publication’s material.

In the same year, in a submission to the United Kingdom's House of Lords Communications and Digital Select Committee, OpenAI stated plainly that training modern AI models is "impossible" without using copyrighted materials.

“Because copyright today covers virtually every sort of human expression – including blog posts, photographs, forum posts, scraps of software code, and government documents – it would be impossible to train today’s leading AI models without using copyrighted materials,” the submission reads.

“Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens.”

More recently, both OpenAI and Google have been lobbying the US government to classify LLM training on copyrighted material as ‘fair use’.

And maybe they’re getting somewhere.

Recently, a US federal judge dismissed a copyright infringement lawsuit brought against Meta by 13 authors, including comedian Sarah Silverman. It was the second claim of this nature to be dismissed by the court in San Francisco.

It's worth noting, however, that the judge said the ruling was due to the plaintiffs making "the wrong arguments".

“This ruling does not stand for the proposition that Meta’s use of copyrighted materials to train its language models is lawful,” the judge said.

Back in Australia, the TCA is pitching looser copyright as an economic engine, citing estimates that AI could add up to $115 billion a year to Australia’s economy and generate 200,000 new jobs. 

It warns Australia risks falling behind unless it adopts “innovation-friendly” approaches such as Singapore’s soft-touch model or Japan’s push to be the world’s “most AI-friendly country”. It also cautions that “fragmented or overly bespoke” rules could “deter critical investment.”

“We believe the Australian government should learn from approaches that strike a balance between addressing risk and embracing opportunity and innovation,” the spokesperson said.

In the meantime, the Productivity Commission continues to support new copyright exemptions for AI, arguing that permission requirements would hinder innovation and slow productivity gains.

Groups such as the Media Entertainment & Arts Alliance (MEAA) see these proposals as a direct threat to creative livelihoods, and Minister for the Arts Tony Burke has publicly said existing copyright protections are not on the table for weakening.

As these discussions around copyright and AI play out in Canberra, it’s difficult to ignore that the ship has well and truly sailed.

AI evangelists talk up how quickly the tech moves, scaring the non-believers into thinking they will be left behind. By that measure, it has been almost three years since ChatGPT kicked off the generative AI era. That's an awful lot of time to pass with no consequences.

Australia’s current AI path amounts to a tacit acceptance that big tech is already so protected and comfortable in its position that it can openly admit to theft as a business model.

These companies are already training AI models by scraping the work of writers, artists and academics without notice or payment. They then sell the resulting technology back to the very market that supplied the raw material and throw in mass ‘redundancies’ for good measure.

The irony is palpable: writing and art are apparently desperately needed for our economic future, and yet they’re not worth paying for.

This isn’t just speculation. It’s on the public record. Meanwhile, politicians and policymakers are still out here debating hypotheticals. 

The AI boom may be dressed up in the language of productivity and innovation, but without real legal and economic reform, Australian creators and businesses will end up paying twice: once with their work, and then for the privilege of buying it back.


About the Author: News Hound