International Business Times UK
Technology
Vinay Patel

OpenAI Says It Will Pay Researchers To Make Superintelligent AI Systems Safe

OpenAI is paying researchers to stop superintelligent AI from becoming dangerous. (Credit: Pexels)

OpenAI is offering $10 million (about £7 million) in grants to fund research into controlling artificial intelligence (AI) systems that are more intelligent than humans.

The American AI company hopes the results of its Superalignment Fast Grants will shed light on how AI systems can be used to assess the output of more advanced AI systems, and play a key role in building an AI lie detector.

Evaluating millions of lines of code

According to OpenAI, fully understanding superhuman AI systems will be an arduous task. Notably, humans will not be able to reliably evaluate whether a million lines of complicated code generated by an AI model are safe to run.

Taking to X (formerly Twitter), OpenAI shared a post stating: "Figuring out how to ensure future superhuman AI systems are aligned and safe is one of the most important unsolved technical problems in the world."

The AI lab went on to suggest that the problem is solvable. "There is lots of low-hanging fruit, and new researchers can make enormous contributions," the company wrote.

While humans can supervise current AI systems, OpenAI says future superhuman AI systems will be much smarter than we are. The company is therefore sparing no effort to figure out how humans can remain in charge.

Claiming the OpenAI grants

OpenAI is offering grants to individual researchers, non-profits and academic labs. Graduate students can also apply for an OpenAI-sponsored $150,000 Superalignment Fellowship.

The company wants to support researchers who are willing to work on alignment for the first time. It is worth noting that no prior experience working on alignment is required, and applications are open until February 18.

OpenAI's research identifies seven practices that help keep AI systems safe and accountable. Now, the company wants to fund further research to answer some of the open questions raised by that study.

"We are launching a program to award grants of between $10,000 and $100,000 (about £8,200 and £82,000) to fund research into the impacts of agentic AI systems and practices for making them safe," the company said.

Interestingly, OpenAI investor Vinod Khosla recently said that China is more likely to kill us than sentient AI.

Agentic AI systems

Agentic AI systems refer to superintelligent AI that can perform a wide range of actions and, in some cases, act autonomously on complex goals on behalf of a user.

For instance, you could ask an agentic personal assistant to help with Christmas shopping, and it would produce a list of items you might want to buy and recommend websites and offline stores where you can purchase them.

Before taking advantage of agentic AI systems, OpenAI researchers say it is imperative to make them safe by minimising failures, vulnerabilities and abuse.

This doesn't come as a surprise given that new research shows cybercriminals can exploit OpenAI's AI-powered chatbot ChatGPT for malicious purposes.
