LiveScience
Keumars Afifi-Sabet

3 scary breakthroughs AI will make in 2024

Artificial intelligence has been around for decades, but this year was a breakout for the spooky technology. (Image credit: Yaroslav Kushta via Getty Images)

Artificial intelligence (AI) has been around for decades, but this year was a breakout for the spooky technology, with OpenAI's ChatGPT creating accessible, practical AI for the masses. AI, however, has a checkered history, and today's technology was preceded by a long track record of failed experiments.

For the most part, innovations in AI seem poised to improve things like medical diagnostics and scientific discovery. One AI model can, for example, detect whether you're at high risk of developing lung cancer by analyzing an X-ray scan. During the COVID-19 pandemic, scientists also built an algorithm that could diagnose the disease by listening to subtle differences in the sound of people's coughs. AI has also been used to design quantum physics experiments beyond what humans have conceived.

But not all the innovations are so benign. From killer drones to AI that threatens humanity's future, here are some of the scariest AI breakthroughs likely to come in 2024.

Q* — the age of Artificial General Intelligence (AGI)? 

Little is known about artificial general intelligence, but it could boost AI's capabilities. (Image credit: Andriy Onufriyenko via Getty Images)

We don't know exactly why OpenAI CEO Sam Altman was dismissed and reinstated in late 2023. But amid the corporate chaos at OpenAI, rumors swirled of an advanced technology that could threaten the future of humanity. That OpenAI system, called Q* (pronounced Q-star), may embody the potentially groundbreaking realization of artificial general intelligence (AGI), Reuters reported. Little is known about this mysterious system, but if the reports are true, it could kick AI's capabilities up several notches.

Related: AI is transforming every aspect of science. Here's how.

AGI is a hypothetical tipping point, also known as the "Singularity," in which AI becomes smarter than humans. Current generations of AI still lag in areas in which humans excel, such as context-based reasoning and genuine creativity. Most, if not all, AI-generated content is just regurgitating, in some way, the data used to train it. 

But AGI could perform certain jobs better than most people, scientists have said. It could also be weaponized and used, for example, to create enhanced pathogens, launch massive cyberattacks or orchestrate mass manipulation.

The idea of AGI has long been confined to science fiction, and many scientists believe we'll never reach this point. For OpenAI to have reached this tipping point already would certainly be a shock — but not beyond the realm of possibility. We know, for example, that Sam Altman was already laying the groundwork for AGI in February 2023, outlining OpenAI's approach to AGI in a blog post. We also know experts are beginning to predict an imminent breakthrough, including Nvidia CEO Jensen Huang, who said in November that AGI is within reach in the next five years, Barron's reported. Could 2024 be the breakout year for AGI? Only time will tell.

Election-rigging hyperrealistic deepfakes 

AI deepfake technology has the potential to swing elections. (Image credit: nemke via Getty Images)

One of the most pressing cyber threats is that of deepfakes — entirely fabricated images or videos of people that might misrepresent them, incriminate them or bully them. AI deepfake technology hasn't yet been good enough to be a significant threat, but that might be about to change. 

AI can now generate real-time deepfakes — live video feeds, in other words — and it has become so good at generating human faces that people can no longer tell the difference between what's real and what's fake. A study published in the journal Psychological Science on Nov. 13 even unearthed the phenomenon of "hyperrealism," in which AI-generated content is more likely to be perceived as "real" than actually real content.

This would make it practically impossible for people to distinguish fact from fiction with the naked eye. Tools that detect deepfakes could help, but they aren't mainstream yet. Intel, for example, has built a real-time deepfake detector that works by using AI to analyze blood flow. But FakeCatcher, as it's known, has produced mixed results, according to the BBC.

As generative AI matures, one scary possibility is that people could deploy deepfakes to try to swing elections. The Financial Times (FT) reported, for example, that Bangladesh is bracing for its January election to be disrupted by deepfakes. As the U.S. gears up for a presidential election in November 2024, there's a possibility that AI and deepfakes could shift the outcome of this critical vote. UC Berkeley researchers are monitoring AI use in campaigning, for example, and NBC News has reported that many states lack the laws or tools needed to handle any surge in AI-generated disinformation.

Mainstream AI-powered killer robots 

Governments around the world are incorporating AI into military systems. (Image credit: Ignatiev via Getty Images)

Governments around the world are increasingly incorporating AI into tools for warfare. The U.S. government announced on Nov. 22 that 47 nations had endorsed a declaration on the responsible use of AI in the military — first launched at The Hague in February. Why was such a declaration needed? Because "irresponsible" use is a real and terrifying prospect. We've seen, for example, AI drones allegedly hunting down soldiers in Libya with no human input.

AI can recognize patterns, self-learn, make predictions or generate recommendations in military contexts, and an AI arms race is already underway. In 2024, we'll likely see AI used not only in weapons systems but also in logistics and decision-support systems, as well as in research and development. In 2022, for instance, AI generated 40,000 novel, hypothetical chemical weapons. Various branches of the U.S. military have ordered drones that can perform target recognition and battle tracking better than humans. Israel, too, used AI to identify targets at least 50 times faster than humans can in the latest Israel-Hamas war, according to NPR.

But one of the most feared development areas is that of lethal autonomous weapon systems (LAWS) — or killer robots. Several leading scientists and technologists have warned against killer robots, including Stephen Hawking in 2015 and Elon Musk in 2017, but the technology hasn't yet materialized on a mass scale. 

That said, some worrying developments suggest 2024 might be a breakout year for killer robots. In Ukraine, for instance, Russia allegedly deployed the Zala KYB-UAV drone, which could recognize and attack targets without human intervention, according to a report from the Bulletin of the Atomic Scientists. Australia, too, has developed Ghost Shark — an autonomous submarine system that is set to be produced "at scale," according to the Australian Financial Review. How much countries around the world are spending on AI is another indicator: China raised its AI expenditure from $11.6 million in 2010 to $141 million by 2019, according to Datenna, Reuters reported. That's because, the publication added, China is locked in a race with the U.S. to deploy LAWS. Combined, these developments suggest we're entering a new dawn of AI warfare.
