Large Language Models such as GPT-4 can do grunt work, but they lack judgment and originality. Creativity, discernment, research skills, and high-level thinking are only going to get more important, and more valuable.
Opinion: It’s difficult to overstate the impact that AI Large Language Models (LLMs) such as GPT-4 will have, but a lot of people are managing it anyway. Whether they’re spooking themselves with imaginary escape attempts or weeping with awe over its faux-deep “creation of new emotions”, their message is clear: this is the future. The world has been made anew, and everything is about to change.
To this, I say: whoa there. Steady on.
GPT-4 is not alive. It’s not trying to escape, or plotting to wipe out humanity, or the next stage in intellectual and emotional evolution. It’s a big machine that puts words together in ways that words have been put together in the past. And that’s still really impressive! It turns out that applying sophisticated statistical techniques to billions of words produces something that can write ad copy, code Pac-Man, and translate your tweets into French. But it has limitations.
These models don’t really “understand” what they’re saying, and they’re not going to any time soon. That’s why they’re so prone to producing confident-sounding nonsense – to an LLM, truth and lies are just collections of words, distinguished only by statistics (and any guardrails its builders have seen fit to put in place). We’ll build something that seems as if it can think long before we build something that can.
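To see what “distinguished only by statistics” means, here is a deliberately tiny illustration (not how GPT-4 actually works, which uses neural networks at vastly greater scale): a toy bigram model that predicts the next word purely from how often word pairs occurred in its training text. The corpus and function names are invented for this sketch; the point is that the model has frequency counts, not a concept of truth.

```python
from collections import Counter, defaultdict

# Toy training corpus containing one true claim and one false one.
# To the model, both are just sequences of words.
corpus = "the moon is made of rock . the moon is made of cheese .".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    # Return the statistically most frequent continuation -- nothing more.
    return following[word].most_common(1)[0][0]

# "rock" and "cheese" are equally plausible continuations of "of":
print(following["of"])
print(most_likely_next("moon"))
```

A real LLM replaces these raw counts with learned probabilities over huge contexts, but the underlying situation is the same: a falsehood that is written the way truths are written scores just as well.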
For the foreseeable future, LLMs are essentially a cool toy and a very powerful productivity tool for anyone who works with text. Programmers are already increasing their output massively using ChatGPT. There is a seemingly endless constellation of startups and services promising to ‘supercharge’ your creative and commercial writing.
Microsoft and Google are folding generative AI into their office work suites. This alone is going to have a big impact – fewer people will be needed to produce the same amount of work in the fields touched by LLMs, and while new jobs are likely to be created, they’re also certain to be lost.
Perhaps the best analogy for the impact of text-generating AI comes from David Joyner, executive director of Online Education and Online Master of Computer Science at Georgia Tech, who likened it to the calculator. The calculator didn’t eliminate mathematicians or data analysts, but it made some of the things they did available to ordinary people and changed what specialists spent their time doing.
LLMs can do grunt work, but they shouldn’t be relied on (least of all for maths), they lack human judgment, and, by their nature, they’re mired in clichés. Creativity, discernment, research skills, and high-level thinking are only going to get more important, and more valuable.
There are legitimate reasons to be concerned: LLMs have the potential to do great harm if used maliciously or carelessly, and the rush to get them to market is not going to help. Spammers and scammers have already latched on, with “influence peddlers” and propagandists not far behind.
Floods of low-effort, AI-generated stories have already led at least one literary magazine to stop accepting submissions. Generative AI products of all kinds have swept up swathes of copyrighted data to train on without permission. Their output can be dangerously misleading, and it is often biased in ways both subtle and gross.
They take vast amounts of energy to train and run, and can realistically only be built by those with millions – or billions – of dollars to spare. And though we’ll probably adjust eventually, they’ll cost a lot of people a lot of jobs in the coming years. As with any powerful tool, LLMs will be used by those who already have a lot to accumulate more at the expense of others.
Ultimately, it’s up to us – and especially those who work in tech and tech policy – to make sure LLMs are used more for good than for bad. That’ll be hard, complex work, but it’s work that’s worth doing.