AAP
Jennifer Dudley-Nicholson

Why your words and photos might already be training AI

Authors and experts are concerned Microsoft's AI tool cannot be removed from some apps. (Lukas Coch/AAP PHOTOS)

Author Michelle Prak is known for her tense outback thriller but software delivered the ultimate plot twist when AI showed up in her latest manuscript and refused to go away. 

The South Australian writer says Microsoft's artificially intelligent assistant Copilot swept into her workspace uninvited after a software update and, despite her best efforts, she cannot evict it. 

"It's a bit of an insult and really maddening that they want to offer their help every time I press enter," she said. 

"The sanctity of the blank page is gone."

But Microsoft is not the only tech firm adding AI to its platform. 

X changed its terms of service to let tweets, photos and videos train its AI model Grok. (Joel Carrett/AAP PHOTOS)

Social network X recently changed its terms of service to allow tweets, photos and videos to train its AI model Grok, and Meta confirmed it is scraping data from its Australian Facebook and Instagram users. 

Artificial intelligence experts say these firms owe it to users to provide more clarity about their AI features and should make them optional. 

If companies fail to do so, they warn, laws and lawsuits will likely have the final say. 

Ms Prak, whose novel The Rush was published earlier this year, says seeing an AI assistant appear in her Microsoft word-processing software was a "really nasty, rude surprise".

Steering clear of AI tools is important for authors, she says, to avoid questions about copyright, creativity and authenticity.

"I really want all my work to be pure – I don't want anything to do with AI," she said.

"If I submit my work to a publisher or a literary magazine, will it trip up their AI detectors? I do not like it there."

But completely removing the AI feature has proven impossible, Ms Prak says. 

Users can opt out of allowing Microsoft Word to use their data for AI training in its privacy menu but cannot completely remove Copilot from Microsoft Word software.

A spokeswoman for Microsoft Australia says existing subscribers may be able to disable the AI tool by removing updates but new subscribers will not be given that choice.

This dogmatic approach to AI is concerning, RMIT information sciences professor Lisa Given says, as many people will not research new software features or read terms and conditions to find out how their data is being used.

Tech firms offering AI and using customers' information need to be transparent about their intentions, she says, and give users the opportunity to activate services rather than switching them on by default.

"I have concerns about people having to opt out because that requires time, knowledge and education," she says. 

"When you have something that's opt-in, it's a much more deliberate choice."

Introducing and activating AI features without consultation can also lead to dangerous outcomes for organisations, she says, which could find their sensitive data is being shared incorrectly.  

Mandatory AI guardrails currently being drafted in Australia should consider opt-in provisions, she says, as data-hungry companies are unlikely to offer them voluntarily.

"AI is not only ahead of regulation and the lawmakers but it's also ahead of everyday workers trying to make choices," she said. 

"The onus is often left on us but it's a constantly moving landscape."

Rules around high-risk AI use cases are currently being considered by the federal government after a public consultation wrapped in October. 

UNSW AI expert Toby Walsh says protecting Australians from AI may mean reforming privacy laws. (Julian Smith/AAP PHOTOS)

But protecting Australians from unintended AI consequences could also involve reforming privacy laws, UNSW AI Institute chief scientist Toby Walsh says. 

Meta is harvesting data from Facebook and Instagram users to train its AI model Llama, for example, but will not let Australians opt out of the practice as it does for users in the European Union.

"There are various privacies that we don't have that they have in Europe because they have better data protection," Prof Walsh says. 

"Sadly, we haven't updated our privacy laws as quickly as they have in Europe and elsewhere."

AI companies have been keen to push copyright boundaries, he says, as they need huge amounts of information to train their large language models.

While more AI companies are signing licensing agreements with publishers for access to their work, such as a three-year deal struck with HarperCollins over non-fiction books last week, Prof Walsh says it may ultimately take lawsuits to change the industry's approach to copyright material.

A contentious court battle between the New York Times and OpenAI, for example, is being fought over whether OpenAI scraped stories from behind the newspaper's paywall to train AI without permission.

"There's a number of lawsuits in place and class action suits that will decide what's to happen but it's critical that we work out where our values are and that we appreciate the contributions of authors, musicians, painters and all the other people that add to our society," Prof Walsh said.

"We have to push back against the premise that just because things are available, companies have consent to use them."
