I-MED and harrison.ai have gone to ground as politicians and advocates slam the companies over Crikey’s revelation that they are training artificial intelligence on patients’ private medical scans without their knowledge.
Got a tip about this story or anything else? You can anonymously contact Cam Wilson here.
Yesterday, a Crikey investigation exposed how Australia’s biggest medical imaging provider had given a buzzy healthcare technology startup backed by industry heavy hitters access to potentially hundreds of thousands of Australians’ chest X-rays, raising concerns among privacy experts.
This data was used to train an AI tool that has helped propel harrison.ai to a valuation in the hundreds of millions of dollars and darling status in Australia’s startup scene, and helped modernise I-MED’s business.
Neither company responded to repeated requests for comment from Crikey about the legal basis for using and disclosing this personal information.
In response to Crikey’s investigation, Attorney-General Mark Dreyfus said there were real concerns around AI and health information.
“The use of health information to train AI models raises privacy concerns about the lawful handling of personal information,” a spokesperson for his office said.
“Reform of Australia’s privacy laws is crucial to ensuring appropriate safeguards are in place for AI and other rapidly developing technologies.”
The federal government has proposed reforms to Australian privacy laws — which privacy experts say could clarify what data is protected — but will only consider those changes if it is reelected.
Greens Senator and digital rights spokesperson David Shoebridge said the government bears responsibility for misuse of health data because of its failure to pass these reforms.
“Going to get a scan for a medical condition is often a really vulnerable time for people, finding out those scans will be gobbled up by corporations to train their AI without consent may even deter people from seeking medical help,” he told Crikey.
“The government’s failure to progress their promised privacy reforms is partly to blame here. Every day of inaction sees potentially thousands of people having personal information they want to keep private shared, monetised and exploited.”
Consumer advocacy group CHOICE’s senior campaigns and policy advisor Rafi Alam said harrison.ai and I-MED’s behaviour was “highly concerning”.
“Consumers tell us time and time again that laws need to change to stop this free-for-all over our personal information,” he said.
“We’re encouraged by some of the government’s recently proposed amendments to the Privacy Act, but a lot more needs to be done quickly to protect people from invasive practices — including a legal obligation for businesses to collect and use our data fairly, and strict guardrails on artificial intelligence systems.”
Harrison.ai and I-MED didn’t respond to further requests for comment. Nor did harrison.ai’s backers Blackbird Ventures, Skip Capital, Horizon Ventures or Ramsay Healthcare. I-MED’s owner, Permira, did not immediately respond to a request for comment either.
At least one person involved is aware of the story, however. Last night (and again this morning), harrison.ai co-founder Dr Aengus Tran looked at this reporter’s LinkedIn profile. He didn’t, however, accept a subsequent request to connect.
Should we allow companies to train their AI on our personal medical information? Let us know your thoughts by writing to letters@crikey.com.au. Please include your full name to be considered for publication. We reserve the right to edit for length and clarity.