The Guardian - UK
Technology
Jay Rayner

Could a chatbot write my restaurant reviews?

Jay Rayner sits at a table, a plate of spaghetti in front of him, as a robot hand offers him a forkful of food.
‘“The air was thick with a miasma of MSG and regret.” That really was a line I could have written.’ Photograph: Ilka & Franz; illustration: Justin Metz/The Observer

One afternoon an email arrives that threatens to end my career. Or at the very least, it makes me think seriously about what the end of my career might look like. It comes from a woman in Ely called Camden Woollven, who has an interest in my restaurant reviews, a taste for the absurd and perhaps just a little too much time on her hands. Woollven works in the tech sector and has long been fascinated by OpenAI, a company founded in 2015, with investment from, among others, Elon Musk, to develop user-friendly applications involving artificial intelligence.

In November last year OpenAI released ChatGPT, a chatbot built on the third generation of its GPT language models; Microsoft has since followed up with a further investment reported to be worth $10bn. Trained on a vast array of text, the tool allows us to commission articles and have human-like text conversations. It’s currently free to use and clocked up 1m users in its first week. Within two months it had 100m users, making it the fastest-growing web application in internet history. People all over the world were prompting ChatGPT – the initials stand for Generative Pre-trained Transformer – to write essays for them, or computer code, or even compose lyrics in the style of their favourite songwriter. If it involved words, they were getting ChatGPT to do it. And then gasping at the speed and fluency of what came back, while quoting lines from the Terminator movies about the apocalyptic rise of the machines.

Woollven, meanwhile, had asked another of OpenAI’s applications, called Playground, to write negative reviews of lousy Chinese buffet restaurants in Skegness in the style of, well, me. I have never reviewed anywhere in Skegness, let alone a Chinese buffet. She described it, apologetically, as her “new favourite hobby”. In one, fake me said I hadn’t “seen such a depressing display of Asian-fusion food since I was caught in a monsoon in the Himalayas”. Bit of an odd thing to write, that. What’s the connection between bad food and monsoons? But OK. Another, though, gave me pause. “The dining room was a low-lit, faux-oriental den of off-pink walls and glittering papier-mâché dragons; the air was thick with a miasma of MSG and regret.” Oh God. That thing of using an emotion to describe a place? That really was a line I could have written. Granted, not one of my best, but me all the same.

Like print journalists everywhere, I shuddered. One afternoon, in a break from getting the servers to write worrying parodies of me, Woollven gave me a tutorial. The tech had been around for a few years, she said. ChatGPT is built on the third major generation of OpenAI’s GPT models. The second, released in 2019, had been trained on 17bn data points. “This version has been trained on 10 times that and is the largest AI language model to date,” she said. It had been fed truckloads of text from all over the web, which means it can use probability to work out what the next word should be. It’s predictive text, but on performance-enhancing drugs measured in terabytes. This month OpenAI announced the release of a further iteration, GPT-4.
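For the technically curious, that principle can be sketched in a few lines of Python. The toy example below is nothing like GPT-3 in scale or architecture; it simply counts which word tends to follow which in a tiny sample of text and then picks the next word according to those probabilities. The sample sentences and the code are my own illustration of the idea, not anything OpenAI actually uses.

```python
import random
from collections import Counter, defaultdict

# A deliberately tiny "training corpus" - real models ingest terabytes of text.
corpus = (
    "the dining room was thick with regret . "
    "the air was thick with the smell of garlic . "
    "the air was heavy with steam ."
)

# Count, for every word, which words follow it and how often (a bigram model).
following = defaultdict(Counter)
tokens = corpus.split()
for current_word, next_word in zip(tokens, tokens[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    return random.choices(list(counts.keys()), weights=list(counts.values()))[0]

# Generate a short continuation, one probable word at a time.
word = "the"
output = [word]
for _ in range(8):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
```

Run it a few times and you get slightly different, vaguely plausible continuations. Scale the sample text up to great swathes of the internet, and the simple word counts up to billions of learned parameters, and you are somewhere in the neighbourhood of what Woollven was describing.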

ChatGPT has, Woollven said, “been specifically fine-tuned for its conversational ability. Oh, and it’s quite censorious. It won’t write porn for you, for example.” This is understandable. In March 2016 Microsoft launched an AI bot called Tay, which was meant to learn conversational ability through interactions with real people. Within 24 hours on Twitter, Tay had responded to other tweeters by seemingly becoming a genocidal Nazi, tweeting its admiration for Hitler. It was swiftly taken offline.

Playground, Woollven said, was a little freer than ChatGPT. One afternoon, I had a go. It was a reassuring exercise. I asked OpenAI’s Playground to write a negative review of Le Cinq, a Parisian Michelin three-star, in the style of me. My actual review in 2017 had caused a bit of an international incident. This one wouldn’t have raised a single Parisian eyebrow. “The presentation was lacklustre and the portions minuscule,” it said. “The waiters were the worst part of the experience.”
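For anyone wondering what “having a go” on Playground amounts to under the hood, the same request can be made in code. The sketch below uses the pre-1.0 openai Python client as it stood in early 2023; the prompt wording, model choice and settings are my own assumptions for illustration, not a record of what I actually typed.

```python
# A rough sketch using the pre-1.0 `openai` Python client available in early 2023.
# Prompt text and parameter values here are illustrative assumptions.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

prompt = (
    "Write a scathing restaurant review of Le Cinq in Paris "
    "in the style of the critic Jay Rayner."
)

response = openai.Completion.create(
    model="text-davinci-003",  # the completions model behind Playground at the time
    prompt=prompt,
    max_tokens=400,            # caps the length of the reply, measured in tokens rather than words
    temperature=0.8,           # higher values make the output less predictable
)

print(response.choices[0].text.strip())
```

Newer versions of the client route the same idea through a chat-style call instead, but the shape is identical: a text prompt goes in, statistically probable text comes out.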

So far, so humdrum. I then asked it to write a description of the naked fireside wrestling scene from the 1969 movie Women in Love, replacing Oliver Reed and Alan Bates with myself and Gordon Ramsay. What can I say? I was restless. It rose to that challenge admirably. “The light from the roaring fire flickered off the sweaty limbs of restaurant critic Jay Rayner and chef Gordon Ramsay as they wrestled naked in front of the fireplace,” it began. “Both men were locked in a fierce battle, their arms and legs entwined as they grunted and groaned in an attempt to outdo each other.”

This, of course, was a dedicated exercise in completely missing the point. Microsoft did not invest $10bn in AI, sparking a tech war with Google, which now has its own version, called Bard, just so schmucks like me could get it to write mildly amusing cobblers like this. The Observer’s own perspicacious technology columnist John Naughton nailed it when he wrote that we “generally overestimate the short-term impact of new communication technologies, while grossly underestimating their long-term implications.”

No, ChatGPT was not going to develop sentience and take over the world as some had suggested. Nor was it going to replace hacks like me. As Woollven said to me, “The only way it can replicate you is because you exist. It can’t taste the food.” The musician Nick Cave reacted furiously when fans sent him song lyrics written by ChatGPT in the style of Nick Cave. “Songs arise out of suffering,” he wrote on his website, “by which I mean they are predicated upon the complex, internal human struggle of creation… as far as I know, algorithms don’t feel. Data doesn’t suffer.”

That doesn’t mean this technology won’t have a massive impact on how society functions. Naughton puts it on a par with the general adoption of the web itself in 1993. As he explained, “Google has become a prosthesis for memory. Remembering everything on the web is impossible so search engines do it for us. In the same way this is a prosthesis for something that many people find very difficult to do: writing competent prose.” Or as it was put to me by Willard McCarty, professor emeritus at the Department of Digital Humanities, King’s College London: “If I were a bureaucrat sitting in an office, I would be worried because that’s the sort of writing work it is adapted to do. Grammar is no longer difficult.” This is one of the most notable things about the output from ChatGPT. Forget the jollity of fake restaurant reviews and terrible faux Nick Cave lyrics. The prose is clean and tidy. The grammar and punctuation are all correct.

It’s a key point, and it lies at the heart of the disquiet print journalists express when writing about it. People like me find the business of writing straightforward; most people find it very hard. Hence journalists like me could always comfort ourselves that if we lost our jobs writing for high-profile national newspapers, we could make a living as copywriters for PR companies and the like. Not any more. With the advent of ChatGPT, that’s gone.

The automation of factory production lines made certain manual jobs obsolete. AI is going to make service-sector jobs, such as copywriting, completely obsolete too. First the machines came for the working classes; now they are coming for the middle classes. The website BuzzFeed has already announced that some of its content will be created by OpenAI applications. Expect more of this. It will be monetised, partly to pay for the development costs and partly to pay for the enormous amounts of computing power, and therefore energy, that AI output requires. It will also become much more sophisticated. Those online chatbots will seem more and more human. As text-to-speech applications develop, you will have phone conversations with what seem to be real people, but aren’t. Educational assessment will fall apart because a machine can write an academic essay as well as any human. If it involves text in any way, it’s now in play.

Right now, ChatGPT has significant limitations. For a start, it is a closed system. When you ask it to write something, it does not go roaming across the web in search of the answer. It draws on what it learned in training: 175bn parameters distilled from vast volumes of text fed into it from across the internet, and nothing more recent than mid-2021. As a result, it’s not always accurate. I asked it to write a review in the style of me of chef Ollie Dabbous’s restaurant Hide, which I’ve never visited. It praised the king crab with smoked avocado and the turbot with brown shrimps and nasturtium. Neither dish is on Hide’s menu. It had simply made them up. OpenAI says that GPT-4 should, among other things, be more accurate.

The hugely successful film podcast Kermode & Mayo’s Take, presented by the Observer’s film critic Mark Kermode and the veteran broadcaster Simon Mayo, has been musing on all this. They too got ChatGPT to write reviews in the style of Kermode. They weren’t very convincing. “It did show me that I use the same phrases over and over again,” Kermode told me. He was, however, completely fooled by a reader email, written by the AI. “I didn’t spot it at all, though the greeting and sign-off were written by our producer, which I think is significant cheating.” ChatGPT didn’t say hello to Jason Isaacs.

Was he concerned? Up to a point, but there was still, he said, a place for writers like us. “It can’t do unpredictable thought. I don’t think ChatGPT could have told you that the first time I saw Spielberg’s movie AI, I would hate it, and that the second time I would love it.” Simon Mayo, who is also a successful novelist, agreed but saw opportunities. “Most writing in popular culture is imitative, just like these AIs. Plot lines in movies and novels are similar because that’s what sells. Maybe these AIs will up the ante. Maybe it will force novelists to have more imaginative thoughts.”

One afternoon I asked ChatGPT to write a tabloid exposé, as authored by me, of cabinet minister Michael Gove’s inappropriate relationship with a 6ft teddy bear. The tabloid style was lousy, but everything else, well: “Rayner followed Gove to his home, where he caught the politician in a passionate embrace with the bear. When confronted by Rayner, Gove was unable to explain his actions. He simply stammered: ‘It was just a moment of weakness. I don’t know what came over me.’” It was a stupid thing to do on my part. It wasn’t clever. But it did make me laugh. And faced with the massive disruption to society threatened by these AIs, maniacal, inappropriate laughter seemed the only response.
