Fortune
Tristan Bove

A ChatGPT-written job application fooled job recruiters and was a top 20% candidate

A boy with glasses staring at a smart screen (Credit: Jasmin Merdan—Getty Images)

It may be some time before artificial intelligence can take over all of our jobs, but the A.I. models around today might already be able to edge out human candidates trying to nail an interview.

Last November, the artificial intelligence startup OpenAI made waves when it launched its latest creation: ChatGPT, an A.I.-powered tool that lets users have Q&A-style interactions with a very smart chatbot. People can type in a question, and ChatGPT will often respond within seconds with a detailed, coherent answer that may as well have come from an expert in the field.

The tool crossed 1 million users within days of its launch, and has earned plaudits from Elon Musk and Bill Gates, just to name a few. Fortune CEO Alan Murray even called ChatGPT “the most important news event of 2022” in a December newsletter.

For all the praise, ChatGPT has also received its share of criticism, from teachers concerned their students might use it to cheat on exams, to watchdogs warning that hackers could use it to write malware and encryption scripts. But another group that should keep an eye out for convincing A.I. may be job recruiters, based on the recent experience of a U.K. company.

When Neil Taylor was looking to hire new faces at Schwa, the communications company he founded and owns, he decided to conduct an experiment. He gave ChatGPT the same writing prompt all applicants for the position had to answer and anonymously included the bot's entry in the pool of applications to be reviewed by staff, Sky News reported Tuesday.

Fewer than 20% of candidates moved forward to the interview stage of the process, and ChatGPT’s application was among them.

But before you despair for the future labor market, it wasn’t smooth sailing for the young chatbot, which required a substantial amount of help before it was able to battle it out with human candidates.

A.I. is still missing some steps

ChatGPT is based on a large language model, an A.I. system that learns to understand and generate text after being trained on gigabytes or even petabytes of data. The result is a vast trove of easily retrievable information that could be a disruptive game-changer for search engines like Google.

Challenging Google’s hegemony over Internet searches is part of the reason Microsoft is reportedly weighing a $10 billion investment in OpenAI, so that ChatGPT could help power its own search engine, Bing.

The myriad potential applications of ChatGPT in everyday life have become more apparent in the weeks since its debut, but the tool and other A.I. models like it will likely require more refining if they are to become the major disruptors some are predicting.

In Taylor’s experiment at his company, the little bot was given the same writing prompt as all other applicants: 300 words on the “secret of good writing.” But its first attempts at crafting a competitive answer were “competent but dull,” Taylor told Sky News, and the tool needed much more refining and clearer direction before it could produce an acceptable entry.

Taylor said that the chatbot’s performance improved only when it was fed more detailed information on how to answer the prompt, including instructions to emulate the style of Dave Trott, a copywriter and author.

“It was much better—it was punchier, it sounded more opinionated, and it got shortlisted,” Taylor said of the bot’s writing after he gave it more precise information.

Schwa did not immediately reply to Fortune’s request for further comment outside of working hours.

Taylor’s struggles with ChatGPT highlight an important drawback of the tool. Its knowledge, while vast, is constrained: OpenAI’s engineers trained it only on information dating up to 2021, meaning ChatGPT is limited to what was posted on the Internet before then and cannot create new information.

ChatGPT has also run into trouble when attempting to answer broad questions, even falling into racial and gender biases when users’ directions are vague, and mixing up details on more technical questions.

The shortcomings of ChatGPT and A.I. models like it came to the fore again this week after tech publication CNET confirmed it had been experimenting with an “A.I. assist” to write basic financial explainer articles on its website. But while CNET’s A.I.-generated articles sounded authoritative and knowledgeable, upon further inspection they were found to be riddled with basic factual errors and mixed-up details. 
