Welcome back to Neural Notes, a weekly column where I look at how AI is affecting Australia. In this edition: how Australia’s AI story in 2025 was defined less by product drops and more by rule-making… the majority of which arrived in the final months of the year.
The Albanese government finally made some moves on regulation, creators pushed back on “free data” for training models, and some local startups raised serious capital.
Here’s a look back at some of the most important AI news of the year.
AI governance without AI legislation
Just a few weeks ago, the Australian government finally decided not to follow in the EU’s footsteps and legislate standalone AI laws.
This was outlined in its long-awaited (some might even argue ‘overdue’) National AI Plan. It framed Australia’s approach around boosting adoption and investment while managing risk through existing privacy, consumer, competition, and workplace laws rather than a standalone AI statute.
The plan signalled more coordination between regulators, the establishment of an AI Safety Institute, and targeted reforms where gaps emerge. However, it stopped short of introducing new criminal offences or a formal risk classification regime.
That decision has real consequences for Australian companies. At home, many businesses are still operating broadly under the same legal framework they were using before generative AI went mainstream.
Abroad, exporters are dealing with far more prescriptive rules. Any Australian SaaS company selling into Europe is now grappling with the phased rollout of the EU’s AI Act, including conformity assessments, documentation requirements and heightened transparency obligations for high-risk uses and general-purpose models. In practice, this has created a split reality: relatively flexible domestic rules paired with strict offshore compliance expectations.
Within government, the guardrails did tighten. Updates to the policy for the responsible use of AI in government baked in stronger expectations around governance, risk assessment and transparency, particularly for high-impact applications.
For founders and vendors, this has translated into more detailed procurement questionnaires and audit-style scrutiny when selling AI tools to the public sector.
The subsequent decision to quietly abandon plans for an independent AI Advisory Body only reinforced concerns about how much external oversight will sit alongside this self-regulatory approach.
Copyright, training data and the “no free lunch” decision
The most consequential policy decision of the year did not come in an AI bill at all, but in copyright law. After months of consultation and pressure from big tech and lobbyists alike, the federal government ruled out introducing a broad text-and-data-mining exception that would have allowed AI developers to train models on copyrighted books, journalism, images and music without licences.
Copyright became the ethical fault line of Australia’s AI debate, with ministers framing the decision as protecting creators’ rights and ensuring fair compensation.
Of course, the training ship had already well and truly sailed in the years leading up to the decision.
Still, for AI companies, particularly those considering training large models locally, that choice forces difficult trade-offs. Developers must either pay for content licences, rely on narrower first-party or synthetic datasets, or move more of their model training offshore under more permissive regimes.
In the short term, that uncertainty has pushed many Australian startups to build products on top of US- and EU-based frontier models rather than investing in training large models from scratch.
For businesses adopting AI tools, it also raises unresolved questions about liability, data provenance, and where responsibility sits when platforms shift risk back onto users. And that risk-shifting argument isn’t hypothetical. In Germany’s GEMA case, OpenAI argued responsibility should sit with users who prompt the system (an argument the court rejected).
Offshore rules, domestic ramifications
Internationally, 2025 was the year AI regulation started to feel real rather than theoretical. The EU’s AI Act continued its phased rollout, while the US relied on executive orders, agency guidance and enforcement actions rather than a single national law. For now, at least.
Even so, US regulators and cloud providers effectively set global standards through the requirements baked into major platforms.
For Australian companies, this has turned compliance into a feature of the infrastructure layer. Hyperscalers and model providers now market products as “EU AI Act-ready” or aligned with emerging US safety expectations.
The result is a growing power asymmetry: Australian founders absorb the cost and complexity of offshore compliance without having much say in how those rules are written.
For many mid-sized startups, the strategic question in 2025 was not which model was most powerful, but which vendor’s compliance story would keep customers, regulators and boards comfortable.
A strong year for AI startup funding, but with a catch
Despite this regulatory complexity, 2025 was a strong year for Australian AI funding. According to a Dealroom report, the combined enterprise value of more than 470 VC-backed Australian AI startups is around US$11.7 billion (AU$17.6 billion). And this isn’t surprising. Our own weekly startup funding roundups have been dominated by ‘AI-first’ companies in 2025.
Q1 data from Cut Through Ventures also pointed to AI funding hype. However, as we reported at the time, the data was somewhat skewed by startups shoehorning ‘AI’ into their pitch decks to attract VC attention. And it worked.
One of the year’s standout deals was Sydney-based health-tech Harrison.ai’s $179 million Series C, one of the largest AI-related raises in Australia’s history.
Elsewhere, steady mid-stage capital flowed into applied AI companies in logistics, agriculture, SaaS and more.
Across the market, investor behaviour did shift away from generic chatbots toward vertical, domain-specific AI products designed to plug directly into real business workflows and global markets.
How Australian businesses actually used AI
For most Australian businesses, the AI story of 2025 was about making it boring. Enterprises embedded AI into office suites, developer tools and customer service platforms, changing how emails are written, tickets resolved, and code shipped.
This was supported by a wave of frontier-model releases from OpenAI, Google and Anthropic, with improvements in multimodal reasoning, longer context windows and tighter cloud integration.
Market share was also front of mind, with OpenAI releasing the affordable ChatGPT Go in India before rolling out to select countries across the world. This ‘affordability’ play was arguably balanced by the not-so-gradual shift we have seen towards ads on the platform.
Small businesses did adopt AI alongside their enterprise counterparts, but from a lower and more uneven base. Research cited in Labor’s National AI Plan, drawing on work by the National AI Centre and Fifth Quadrant, found that just over one-third of Australian SMEs had adopted AI. Uptake fell to 29% among regional organisations, compared with around 40% in metropolitan areas.
The plan also noted that around a quarter of regional businesses were not aware of AI opportunities at all.
Where adoption did occur, it was often narrow and tool-led rather than strategic. SMEs leaned on AI-assisted quoting tools, auto-generated marketing content and chat-based admin support, frequently embedded inside existing software rather than rolled out as dedicated systems.
What lagged behind was hands-on support around governance, training and accountability, leaving many small businesses to navigate AI adoption with limited guidance at a time when voluntary disclosure and self-regulation remain the dominant policy settings.
In other words, there is huge room for improvement in 2026 – especially for those grappling with soft laws at home and harder obligations offshore, while still relying heavily on global platforms to bridge the gap.