Is that you? Or is it the bot? Linguists have said the nuance and character of human language are at risk, as Apple becomes the latest tech firm to launch artificial intelligence tools that can rewrite texts and emails to make users sound more friendly or professional.
The new technology, which will be available on UK iPhones, iPads and Macs from Wednesday, promises the ability to lighten a grumpy missive or turn arcane language into something a five-year-old could understand.
But the potential boon for time-pressed people who struggle for the right writing tone has brought warnings that such tone-shifting technology could devalue and flatten human communication. One language expert described the automatic systems as “the ultimate superficiality”.
Apple’s developers have been training its AI model on unspecified bodies of text. Around half of all phones sold in the UK are made by Apple, and its AI launch comes after Google and Microsoft released their own tools to help users adjust their writing tone through their Gemini and Copilot AIs.
Last week Apple’s chief executive, Tim Cook, said of the writing tools: “It’s still coming from you. It’s your thoughts and your perspective.” He compared it to a spreadsheet doing sums automatically rather than punching them into a calculator, or using a word processor instead of typing.
In the US, where the Apple Intelligence system has been live for weeks, users reported that the tools can on occasion make you “come off sounding stuffy”. Another new feature that summarises emails and texts has turned the most emotional exchanges into robotic bullet points.
In one case, a text from a girlfriend breaking up with her boyfriend was boiled down to: “No longer in a relationship; wants belongings from the apartment.”
The optional summaries can also strip out nuance and humour, rendering messages “administrative and robotic”, said one user who sent a picture of their young son having fun working with his dad on a car, which the AI had summarised as: “Photo shared of child reaching into car hood; air filter changed.”
A summary of some information from a Ring door camera app posted by a user on Reddit read bafflingly: “Dog took boot. Kitten cheese escaped the house.”
A message from Amazon was rendered as “package was delivered tomorrow”, and five Gmails were summed up with the bleak couplet “Russia launches missile and drone attack; shop early for Black Friday Deals”.
Apple has said it is continuing to improve the features with the help of user feedback.
Also on Wednesday, Google unveiled its next generation of personal agents, which will surf the internet for you, carry out boring work tasks, fill your grocery basket and even coach you in computer game tactics.
The new Gemini 2.0 agents have been released to developers and for trials. The company’s Nobel prize-winning AI leader, Demis Hassabis, admitted the advances open up “many questions … for safety and security”.
A demonstration featured a man visiting London asking his agent to dig out from his emails the door code of the apartment he was staying in and remember it. Google said it was carrying out “extensive risk assessments”. One agent, Project Astra, is multilingual and talks at the speed of normal human conversation.
Prof Tony Thorne, a language consultant at King’s College London, said: “AI is nudging us towards a neutral language that is much less rich.” He suggested it could foster uncertainty about who, or what, we are actually texting or emailing with.
Rewritten texts would lack the sender’s “idiolect”, the distinctive suite of words and grammar that gives personality to our sentences, he said. Apple’s AI system does not analyse users’ data to train itself, which is good news for customers concerned about data security but not for those hoping the AI might learn their tone.
“[AI] misses emphasis, nuance and the force of different aspects of a conversation,” Thorne said. “I don’t think it knows the difference between those that are crucial emotionally and descriptively and those that are just text.”
Previous studies have shown that using algorithmic responses increases the use of positive emotive language. But if writers are suspected of using tone filters, readers view them more negatively.
Prof Rob Drummond, a sociolinguist at Manchester Metropolitan University, said: “We are giving control of forming our identity to a machine. It is creating an extra layer of inauthenticity to this identity creation. I do wonder if longer term people are going to react against that.” He described automatic tone changing as “the ultimate superficiality”.
The Apple Intelligence features will be available only on certain iPhones running the iOS 18.2 operating system. They will be rolled out this week in the UK, Australia, New Zealand and Canada. Versions for use in the European Union, where regulations differ, are expected in April 2025.
The system will also allow users to create AI versions of their photos in an “image playground”, adding costumes and changing locations, to make their own bespoke emojis, and to “clean up” photos by using the AI to remove unwanted objects or people.
Apple is linking its Siri voice assistant to OpenAI’s ChatGPT system when it determines that the Microsoft-backed AI provider can give a helpful answer. It will ask the user before any request for information is sent outside Apple’s secure domain.