
Welcome back to Neural Notes, a weekly column where I look at how AI is affecting Australia. In this edition: a patient’s refusal to allow AI transcription in the doctor’s office raises serious questions around consent, data collection and ethics.
This week’s column began with a social media post. A parent, who also happens to be a technology specialist, described a paediatrician refusing to see their child because the parent declined consent for an AI transcription tool to be used during the appointment.
The parent raised concerns about the tool’s compliance with privacy, ethical and international standards. Even after the parent offered to transcribe the appointment manually, ensuring no additional burden on the specialist, the clinic refused to proceed unless the AI tool was used. The appointment, which cost $1,300, was cancelled.
To be clear, this wasn’t a medically critical tool. It was used for administration, not diagnosis.
AHPRA’s response: Non-consent doesn’t guarantee care
When I approached the Australian Health Practitioner Regulation Agency (AHPRA) for comment on this situation, it made no attempt to discourage this kind of hardline response.
Instead, AHPRA emphasised that doctors do not have to see patients except in an emergency.
“In some circumstances, the relationship between a doctor and patient may break down or become compromised (eg, because of a conflict of interest), and you may need to end it,” a spokesperson said, quoting the Medical Board’s Code of Conduct.
“Good medical practice involves ensuring that the patient is adequately informed of your decision and facilitating arrangements for the continuing care of the patient… Except in an emergency, a doctor doesn’t have to see a patient.”
This mirrors the response the parent received when they contacted AHPRA directly.
In its response to SmartCompany, AHPRA noted it had issued guidance to practitioners about the use of AI in healthcare, and said it “recognises that helping practitioners talk with patients about AI can build public trust and support its safe use in healthcare”.
However, the agency did not address whether patients have a right to decline non-critical AI tools, or whether cancelling care on that basis is proportionate.
The AMA’s position: Patient rights must come first
The Australian Medical Association’s (AMA) 2023 position statement on AI in healthcare is more direct. It states AI must support human decision-making, not replace it, and calls for patients to have full transparency and control over their personal health data.
A recent submission from the AMA to the Department of Industry, Science and Resources (DISR) reinforces this further.
“Regulations must facilitate full disclosure and patient consent for the use of health data and any AI-generated health information,” it stated.
The AMA also calls for a regulatory environment that ensures AI tools do not undermine healthcare delivery or patient trust.
It advocates for safeguards such as clearly defined responsibilities, the right of patients to decline AI use, and human oversight in all clinical decisions.
“AI must uphold patients’ right to make their own informed healthcare decisions,” the position statement reads.
It also affirms that “the development and application of AI to healthcare must be inclusive, undertaken with appropriate consultation,” and that “the use of AI in healthcare must protect the privacy of patient health information”.
The AMA also supports the principle that patients are the owners of their health data and that AI systems must be transparent and auditable. Any application of AI must be subject to regular review, and clinicians must be able to override AI recommendations where appropriate.
Australian medical AI scribes show what ethical consent can look like
Some of the strongest commitments to those principles aren’t coming from regulators, but from the Australian startups building the tools themselves: in this case, AI-powered transcription tools designed for medical practitioners.
“From day one, PatientNotes has been opt-in only… I actually expected 30% of patients would decline, and I’m still personally quite shocked it’s wound up being less than 1%. But the fact that so few people decline doesn’t make their rights any less important,” Sarah Moran, co-founder of PatientNotes, said to SmartCompany.
If a patient says no, clinicians using PatientNotes can switch to manual entry. Notes can also be deleted on request.
Heidi Health CEO Dr Tom Kelly took a similar stance, saying patients should be able to opt out of non-critical tools at any time.
“Healthcare is human… and the best clinician-patient relationships are built on a foundation of trust and transparency,” Kelly said.
Heidi Health is aiming for full ISO/IEC 42001 certification by year’s end, while PatientNotes is already aligning to the standard. Both have built their consent models into the product from day one.
And this is important. Nearly one in four Australian GPs are now believed to be using AI scribes, according to the AMA.
While some doctors have reported feeling less fatigued and more present with patients when using these tools, privacy experts have warned that consent protocols remain inconsistent across clinics. Without specific regulation, patients may be unaware of how their data is handled.
A quiet shift in the meaning of consent in the age of AI
These responses raise an uncomfortable question. If the makers of these AI medical scribe tools are building with ethics in mind, why aren’t regulators setting the same standard?
To be fair, the regulatory landscape is still in flux. While the federal government has outlined proposed guardrails for the safe use of AI in high-risk settings, these are not yet policy. And so the regulatory vacuum continues.
But AHPRA’s response doesn’t suggest a system grappling with nuance or moving towards a clearer framework. It suggests one that defers responsibility, citing professional discretion while staying silent on patient rights.
Meanwhile, calls for formal safeguards are growing louder. Experts such as Professor Mimi Zou at the University of New South Wales have warned that even “innocent” use cases like note-taking can pose privacy and security risks without strong legal frameworks.
This raises a deeper ethical concern: when institutions treat documentation tools as gatekeepers to care, consent becomes conditional, not informed.
True guardrails aren’t just about how AI functions; they’re about who gets to say no, and what happens if they do. If refusing a non-critical tool leads to a cancelled appointment, then the system isn’t protecting patient rights; it’s quietly eroding them.