Tom’s Guide
Lloyd Coombes

How do you test AI that’s getting smarter than us? A new group is creating ‘humanity’s toughest exam’

(Image: Adobe Firefly 3 rendering of robots taking an exam)

As AI gets smarter and smarter (including breaking rules to prove how capable it is), it's getting trickier to stump. Tests that push GPT-4o to its limits are proving easy for o1-preview, and models are only going to improve.

There's an understandable train of thought that AI could get too clever for humanity's own good, and while we're perhaps some way off a Skynet-level catastrophe, the thought has clearly crossed the minds of some technology experts.

A non-profit called the Center for AI Safety (CAIS) has put out a call for the trickiest questions people can devise for AI. The idea is that these questions will form "Humanity's Last Exam", a higher bar for AI to clear.

Every major AI lab and big tech company with an AI research division also has an AI safety board or equivalent. Many have also signed up for external oversight of new frontier models before release. Finding questions and challenges that properly test them is an important part of that safety picture.


The submission form says: "Together, we are collecting the hardest and broadest set of questions ever." It asks users to "think of something you know that would stump current artificial intelligence (AI) systems", submissions that could then be used to better evaluate the capabilities of AI systems in the years to come.

According to Reuters, existing models are already struggling with many of the questions submitted, and their answers are scattershot at best. For example, the question "How many positive integer Coxeter-Conway friezes of type G2 are there?" drew answers of 14, 1, and 3 from three different AI models.
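To make the idea concrete, here's a minimal Python sketch of how a benchmark harness might pose the same hard question to several models and grade the answers by exact match. This is not CAIS's actual evaluation code: the model callables are stand-ins that simply echo the scattershot answers Reuters reported, and the reference answer is a placeholder, not the real solution to the frieze question.

```python
# A minimal sketch (not CAIS's actual evaluation code) of grading one
# benchmark question across several models by exact-match comparison.

from typing import Callable, Dict

def grade_question(
    question: str,
    reference_answer: str,
    models: Dict[str, Callable[[str], str]],
) -> Dict[str, bool]:
    """Ask each model the same question and check its answer exactly."""
    return {name: ask(question).strip() == reference_answer
            for name, ask in models.items()}

question = "How many positive integer Coxeter-Conway friezes of type G2 are there?"

# Stand-in "models" that return the divergent answers (14, 1, 3)
# reported by Reuters; a real harness would call actual model APIs.
models = {
    "model_a": lambda q: "14",
    "model_b": lambda q: "1",
    "model_c": lambda q: "3",
}

# "N" is a placeholder, not the real solution to the frieze question.
print(grade_question(question, reference_answer="N", models=models))
# -> {'model_a': False, 'model_b': False, 'model_c': False}
```

Even a toy harness like this shows why scattershot answers matter: if three models give three different numbers, at most one of them can be right, which is exactly the kind of headroom a new benchmark needs.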

OpenAI's o1 family of models, currently available in preview and mini versions, has reportedly demonstrated an IQ of around 120 and solves PhD-level problems with relative ease. Other models are going to catch up, and this is the 'lightest' o1 model, with better to come next year, so finding challenging problems is a high priority for the AI safety community.

According to Dan Hendrycks, director of the Center for AI Safety, the questions will be used to create a new AI benchmark for testing new models, and the authors of those questions will be credited as co-authors of the benchmark. The deadline is November 1, and the best questions will share in a $500,000 prize fund.
