
The federal government continues to drift along on AI regulation, betwixt and between, with different stakeholders attempting to pull it in different directions: AI companies demanding handouts, the Productivity Commission urging open slather and an unfettered right for AI companies to use others’ content, creators urging the opposite (backed by unlikely supporters like News Corp), and unions demanding the economy be frozen in amber lest someone, somewhere, lose their job.
Meanwhile, colossal sums are pouring into IT infrastructure — which comes with eye-watering energy demands — for a product whose business case doesn’t quite exist yet but which could eventually dwarf search engines in monetisability.
That’s not so much a policy problem for the government — unless it’s stupid enough to succumb to calls for “sovereign AI capability” and waste money on an “AussieGPT” (Richard Holden nailed the risks of such shonkery). There may very well be an AI bubble, and if it pops it will inflict serious damage on sharemarkets and perhaps even financial markets, but it’s private money being poured away. An AI company like OpenAI could be the next Google or Amazon or it could be the next WeWork. Let the market decide.
The policy problem for the government is that it’s unclear exactly what the policy problem is — not in the sense that governments should go looking for problems to solve (there’s too much of that as it is), but in the sense that the economic, social, political and cultural impacts of AI are very likely to be considerable — at least as large as those of social media and search engines, and quite possibly much bigger if agentic AI becomes a key interface between individuals and the rest of the world, generative AI is used to manufacture disinformation, and chatbots replace personal and professional relationships on a population scale.
Given we’re still working out how to regulate social media long after it has inflicted material damage — and provided some benefits — on a society-wide scale, the ability of democratic governments (at least those not owned by big tech, unlike the Trump regime) to respond in an effective and timely manner to the negative impacts of AI looks slight indeed.
That concern prompted the head of the Australian Law Reform Commission, Mordy Bromberg, to call in August for a process to proactively begin scoping the regulatory challenges posed by AI across the economy, rather than the siloed debate that was occurring ahead of the productivity roundtable, in which vested industrial interests pitched their case for specific regulatory changes.
Bromberg’s call fell into a void. It’s not that there is no-one within government thinking about AI. Andrew Leigh, an inveterate magpie mind, spoke yesterday about the role of AI in what he termed a “progressive productivity agenda”, citing the way demand for radiologists had increased with AI rather than the profession being killed off, as some had predicted a decade ago.
That example demonstrates the impossibility of predicting the impacts of AI (as a former spruiker of the wonderful “interconnectedness” delivered by social media, I’ve got particular experience of being badly wrong about the impact of new media technology). There are smart, experienced people in the tech space who hold very different views to Leigh — who see a coming jobs hecatomb as AI significantly more advanced than the current publicly available versions emerges from labs and begins wreaking havoc on white-collar jobs, dislocating employment markets on a large scale, with attendant effects on the financial system and the wider economy.
Even if such risks are limited, they aren’t, in the view of well-informed people, trivial. Perhaps Leigh’s future of AI firing up productivity, increasing demand for skills and improving outcomes will come to pass. Perhaps the opposite will. It’s thus incumbent on governments to think in risk management terms about AI: not merely about the regulatory impacts, as per Bromberg, but about the potential for significant economic, political and social dislocation.
While risk management is a core part of bureaucratic management systems (or should be; auditor-general reports seem to regularly emerge suggesting the benefits of risk management are rarely pursued by the public service), bureaucrats are used to dealing with known, foreseeable risks that can be prepared for and mitigated. The problem of AI policy is unknown risks — unknown both in scale and nature. And that’s on top of a more traditional bureaucratic problem: that the public service lacks the specialist expertise to properly address the technical issues involved.
One solution to this risk management problem might be for the government to establish a relatively informal advisory panel of wise heads to maintain a watching brief on economy- and society-wide AI impacts, with the goal of regularly reporting to government on what they’re seeing and flagging potentially significant issues. The panel would act as a precursor to the bureaucratic process: once it identified what it believed was a significant issue that merited government attention and, perhaps, action, the bureaucrats could be charged with investigating it.
The wise heads would need to be experts from a variety of fields — economists, scientists and engineers who understand AI and its resource requirements, investors with a deep understanding of the financial side of AI and its infrastructure needs, and lawyers with a grasp of the regulatory issues. In the relatively shallow gene pool of Australian civic life they might be hard to find, but the search needn’t be confined locally. The goal is smart people flagging issues for the government more quickly than the bureaucracy could, and without the influence of vested interests. And it wouldn’t cost a great deal.
It’s a low-risk solution to a risk-management problem. It might cost a couple of million a year to buy the time of clever people, but it could give the government a heads-up on emerging issues it needs to consider. It might even spare us from repeating the bizarre situation in which we’re still trying to effectively regulate social media long after the damage has been done. Except this time, the damage might be orders of magnitude greater.
What should Australia be doing to prepare for the rise of AI?
We want to hear from you. Write to us at letters@crikey.com.au to be published in Crikey. Please include your full name. We reserve the right to edit for length and clarity.