Text generators have been around for years, but they’ve finally gotten good. Where previous iterations required a heavy edit, the latest version of ChatGPT from OpenAI writes with better grammar than I – sorry, me. But both chatbots and generative artificial intelligence (AI) tools like the newly launched GPT-4 also lack that creative spark.
This is largely down to how they work. ChatGPT’s machine learning system, for example, was trained on 570GB of text from Wikipedia, the open internet and books. Churning through those 300 billion words taught the system how sentences are structured, but it still doesn’t understand what it’s saying – and that means the paragraphs it outputs read naturally but might not be accurate.
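To see what “learning how sentences are structured” means in practice, here is a minimal sketch – my own toy illustration, nothing like OpenAI’s actual code. A bigram model simply records which word tends to follow which and then samples from those patterns. ChatGPT’s transformer is vastly larger and subtler, but the spirit is the same: predict the next word from statistics, with no grasp of what the words mean.

```python
import random
from collections import defaultdict

# Toy bigram model: record which word follows which in a tiny corpus,
# then generate text by sampling those learned patterns.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

word, output = "the", ["the"]
for _ in range(10):
    if word not in follows:  # dead end: no word ever followed this one
        break
    word = random.choice(follows[word])  # pick a statistically plausible next word
    output.append(word)

print(" ".join(output))  # fluent-looking, yet nothing here understands cats or mats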
While AI models can be trained on vast amounts of data and can recognise patterns and generate text that is coherent and consistently styled, they may not fully grasp the deeper meaning or implications of the words they are using. Additionally, AI is not capable of creativity in the same way that humans are. It can generate text based on patterns it has seen in data, but it cannot come up with new ideas or perspectives in the way that a human writer can. This can lead to a lack of originality and depth in the writing.
No wonder, then, that teachers are concerned, with the New York City Department of Education even banning access to ChatGPT via its network and an Australian school returning to in-person, paper-and-pencil exams. School is the one place where people are judged by their writing on well-trodden subjects. The answers to questions about themes in 19th-century novels and symbolism in Shakespeare are all already online, so ChatGPT can easily reproduce them. So, of course, can students.
It’s easy to buy pre-written essays, find example texts to nick bits from, or otherwise get answers on the internet – chuck “themes Pride & Prejudice” into Google and you’re set. Reading SparkNotes, Wikipedia or other literature explainers isn’t cheating; as any student should know, copying and pasting is plagiarism, but reading and thinking about the information before paraphrasing it is the actual job.
For some students, writing in their own words is the hardest part, but there’s plenty of algorithmic assistance beyond Google’s search to lend a hand. Spell checkers are a lot smarter than they used to be, while tools such as Grammarly spot not just mistakes but awkward phrasing, helping to improve your writing as you bash your keyboard.
Teachers too have tools to hand to stave off AI cheating, beyond blocking websites on campus networks or forcing students to write pre-digital style in classrooms. Plagiarism checkers can spot some AI-generated text, as it’s largely been lifted from the internet and contains noticeable patterns and repetition – and that’s according to ChatGPT itself, as I asked it for advice on hunting down text it had written.
I also asked ChatGPT to write my column for me. Sadly, my editor wasn’t keen on paying me to copy and paste several hundred words, but ChatGPT’s argument for why AI isn’t a good way to write a column was actually sound: the system lacks nuance and can’t understand context. Plus, it’s unethical, as it puts me out of a job, and it can’t generate new ideas.
Lastly, this AI isn’t thinking; it’s regurgitating. That we ask students to do the same so often that AI can replicate it is a fair criticism of our curriculum.
By introducing more randomness, AI can be used to smash ideas together to come up with something new. That’s what Janelle Shane does with AI Weirdness, a fabulous blog and newsletter that reveals plenty about the inner workings of AI systems while also coming up with paint colours such as “turdly” and a chocolate brownie recipe that includes a cup of horseradish.
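That “randomness” has a standard knob, usually called temperature. Here is a minimal sketch of how it works – the word probabilities are made up for illustration, not taken from Shane’s actual setup or any real model. Turn the temperature up and the distribution flattens, so unlikely words start getting picked, which is how you end up with paint called “turdly”.

```python
import math
import random

# Hypothetical next-word probabilities from a trained model (invented for this example).
candidates = {"blue": 0.70, "green": 0.25, "turdly": 0.05}

def sample(probs, temperature=1.0):
    # Temperature rescales each probability to p^(1/T): higher T flattens
    # the distribution, so rare words get chosen more often.
    weights = {w: math.exp(math.log(p) / temperature) for w, p in probs.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for word, weight in weights.items():
        r -= weight
        if r <= 0:
            return word
    return word  # fallback for floating-point rounding

print(sample(candidates, temperature=0.5))  # almost always "blue": safe and predictable
print(sample(candidates, temperature=2.0))  # "turdly" turns up far more often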
Randomly placing letters one after the other into an approximation of a word, though, while often hilarious, isn’t actual creativity on the part of AI. Humans have a capacity for originality that machine learning lacks. AI can only reassemble what we’ve already done.
My favourite example is from Mark O’Connell’s annoyingly brilliant book To Be a Machine, where an unnamed futurist suggests to the author that much journalism can be easily replaced with AI. In retribution, O’Connell includes in his book a description of the futurist “retrieving a dropped pistachio from inside his expensive shirt – an act of petty and futile vengeance, and the kind of absurd irrelevance that would certainly be beneath the dignity and professional discipline of an automated writing AI.”
Of course, ChatGPT can include that example if it’s been trained on O’Connell’s book, though it can’t come up with its own similar absurdities. But just like the AI, I’m borrowing from others’ writing to pull together my argument, so while writers like O’Connell are perhaps irreplaceable, I might not be. That’s a tough conclusion to reach regarding my life’s work, so prove me wrong: spot the paragraph in this column that was written by ChatGPT. There’s no prize, but if you’re correct it might help shift this sinking feeling that I should retrain as a yoga instructor or park ranger or some other job less easily replaced by AI.