The hospital notified the Privacy Commissioner of Ontario of the breach on Dec. 17, 2024, and also notified affected patients that their medical information had been compromised. Graham Hughes/The Globe and Mail
An AI bot listened in on a meeting of physicians at an Ontario hospital and sent details about the patients they discussed to dozens of current and former hospital employees, some of whom should not have had access to the information, according to the province’s privacy watchdog.
Details of the privacy breach are included in a letter that the Information and Privacy Commissioner of Ontario’s office sent to the hospital on Oct. 27. The letter was subsequently released publicly without the hospital’s name.
The incident is a cautionary tale for health care providers: as transcription software is increasingly adopted, it risks violating patient privacy, which is protected by law.
According to the letter, on Sept. 23, 2024, a group of hospital physicians met for a virtual “rounds” meeting, in which they discussed seven patients who had been admitted to the hospital and were receiving care.
Many former employees of the hospital were still on the meeting invite list. One of those – a physician who had left the hospital in June, 2023 – had used his personal e-mail address (the letter does not name him but refers to him specifically as “he”) and installed Otter.ai, an AI-powered transcription tool, on a personal device in September, 2024.
On Sept. 23, an Otter.ai bot attended the meeting in his place and – unnoticed by the other participants – recorded the proceedings. Afterward, it sent a summary and a link to a transcript of the meeting to 65 people on the invite list, 12 of whom were no longer employed by the hospital.
When the hospital discovered the breach, it directed staff to delete the e-mails and remove any such transcription tools from their devices, according to the letter. It also asked the physician who had installed Otter.ai to remove all hospital materials from his personal devices, which he said he did.
On Dec. 17, 2024, the hospital notified the Privacy Commissioner of the breach. It also notified five of the seven patients that their medical information had been compromised. (The other two had died.)
The hospital updated its artificial intelligence policies and changed its firewalls to block access to Otter.ai and other transcription tools.
The Privacy Commissioner recommended that the hospital make a formal request to Otter.ai to delete the patient data and take other internal steps to ensure such a breach would not happen again.
Otter.ai did not respond to a request for comment from The Globe and Mail.
Teresa Scassa, the Canada Research Chair in information law and policy at the University of Ottawa, said the case exposed how vulnerable institutions are to breaches as the use of AI tools becomes more widespread.
Even if an institution comes up with policies around AI use and signs contracts with software vendors to respect privacy, individual employees may still create vulnerabilities through the use of personal devices, she said.
Prof. Scassa said the rise of agentic AI tools – which can act independently – poses an even greater threat. In this case, she said, it wasn’t clear whether the physician even realized the Otter.ai bot had attended the meeting on his behalf and recorded it – or whether it had done so for other meetings as well.
“That’s a whole level of autonomy and independence that we haven’t been prepared for but need to start thinking about,” she said.