International Business Times
Daniel Lawler and Pierre Celerier

Neural Networks, Machine Learning? Nobel-winning AI Science Explained

British-Canadian Geoffrey Hinton, known as a 'godfather of AI', and American John Hopfield were given 2024's Nobel Prize for Physics (Credit: AFP)

The Nobel Prize in Physics was awarded to two scientists on Tuesday for discoveries that laid the groundwork for the artificial intelligence used by hugely popular tools such as ChatGPT.

British-Canadian Geoffrey Hinton, known as a "godfather of AI," and US physicist John Hopfield were given the prize for "discoveries and inventions that enable machine learning with artificial neural networks," the Nobel jury said.

But what are those, and what does this all mean? Here are some answers.

Mark van der Wilk, an expert in machine learning at the University of Oxford, told AFP that an artificial neural network is a mathematical construct "loosely inspired" by the human brain.

Our brains have a network of cells called neurons, which respond to outside stimuli -- such as things our eyes have seen or ears have heard -- by sending signals to each other.

When we learn things, some connections between neurons get stronger, while others get weaker.

Unlike traditional computing, which works more like reading a recipe, artificial neural networks roughly mimic this process.

The biological neurons are replaced with simple calculations sometimes called "nodes" -- and the incoming stimuli they learn from are replaced by training data.

The idea is that this could allow the network to learn over time -- hence the term machine learning.
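A minimal sketch can make this concrete. The following toy example -- not from the article, with made-up numbers -- shows a single "node" that multiplies its inputs by weights and adds them up, and a crude learning loop that nudges those weights until the node reproduces its training data (here, the logical AND of two inputs):

```python
# A single artificial "node": multiply inputs by weights, sum,
# then apply a simple threshold.
def node(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Toy training data: the node should output the logical AND of two inputs.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

# "Learning": nudge each weight toward producing the right output.
for _ in range(20):  # a few passes over the training data
    for inputs, target in data:
        error = target - node(inputs, weights, bias)
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

print([node(x, weights, bias) for x, _ in data])  # -> [0, 0, 0, 1]
```

The connections that lead to correct answers get strengthened and the ones that lead to wrong answers get weakened -- a rough mechanical echo of what happens between neurons.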

But before machines could learn, another human trait was necessary: memory.

Ever struggled to remember a word? Consider the word "goose". You might cycle through similar words -- goon, good, ghoul -- before striking upon "goose".

"If you are given a pattern that's not exactly the thing that you need to remember, you need to fill in the blanks," van der Wilk said.

"That's how you remember a particular memory."

This was the idea behind the "Hopfield network" -- also called "associative memory" -- which the physicist developed back in the early 1980s.

Hopfield's contribution meant that when an artificial neural network is given something that is slightly wrong, it can cycle through previously stored patterns to find the closest match.

This proved a major step forward for AI.
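The mechanism can be sketched in a few lines. In this illustrative toy version (a sketch of the idea, not Hopfield's original work, with patterns chosen arbitrarily), two patterns are stored in a weight matrix, and a corrupted input is "cleaned up" by repeatedly updating each node until it settles on the closest stored pattern:

```python
import numpy as np

# Two arbitrary 6-node patterns to store (values are +1 or -1).
patterns = np.array([
    [1, 1, 1, -1, -1, -1],   # pattern A
    [1, -1, 1, -1, 1, -1],   # pattern B
])

# Hebbian storage: strengthen connections between nodes that agree.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

def recall(state, steps=5):
    """Repeatedly update each node toward the nearest stored pattern."""
    state = state.copy()
    for _ in range(steps):
        for i in range(len(state)):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

corrupted = np.array([1, -1, 1, -1, -1, -1])  # pattern A with one bit flipped
print(recall(corrupted))  # settles back onto stored pattern A
```

Given a slightly wrong input, the network "fills in the blanks" and lands on the stored pattern it most resembles -- the associative memory the prize citation describes.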

In 1985, Hinton revealed his own contribution to the field -- or at least one of them -- called the Boltzmann machine.

Named after 19th-century physicist Ludwig Boltzmann, the concept introduced an element of randomness.

This randomness is ultimately why today's AI-powered image generators can produce endless variations in response to the same prompt.
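The key twist can be sketched as follows -- this is an illustration of the idea, not Hinton's code, and the weights are invented. Instead of switching on deterministically, a Boltzmann-style node switches on with a probability that depends on its inputs, so the same input can produce different outputs on different runs:

```python
import math
import random

def stochastic_node(inputs, weights, T=1.0):
    """Switch on with a probability, not deterministically.

    The "temperature" T controls how random the behaviour is:
    high T means more randomness, low T approaches a hard threshold.
    """
    total = sum(x * w for x, w in zip(inputs, weights))
    p_on = 1 / (1 + math.exp(-total / T))  # probability of switching on
    return 1 if random.random() < p_on else 0

random.seed(0)
# Identical inputs, varying outputs -- the seed of generative variety.
samples = [stochastic_node([1, 1], [0.5, -0.2]) for _ in range(10)]
print(samples)
```

Run it twice with different seeds and the outputs differ -- the same kind of controlled randomness that lets a generative model answer one prompt in many ways.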

Hinton also showed that the more layers a network has, "the more complex its behaviour can be".

This in turn made it easier to "efficiently learn a desired behaviour," French machine learning researcher Francis Bach told AFP.

Despite these ideas being in place, many scientists lost interest in the field in the 1990s.

Machine learning required enormously powerful computers capable of handling vast amounts of information. It takes millions of images of dogs for these algorithms to be able to tell a dog from a cat.

So it was not until the 2010s that a wave of breakthroughs "revolutionised everything related to image processing and natural language processing," Bach said.

From reading medical scans to directing self-driving cars, forecasting the weather to creating deepfakes, the uses of AI are now too numerous to count.

Hinton had already won the Turing Award, which is considered the Nobel of computer science.

But several experts said his Nobel win was well deserved, since it was physics that set science down the road that would lead to AI.

French researcher Damien Querlioz pointed out that these algorithms were originally "inspired by physics, by transposing the concept of energy onto the field of computing".

Van der Wilk said the first Nobel "for the methodological development of AI" acknowledged the contribution of the physics community, as well as the winners.

"There is no magic happening here," van der Wilk emphasised.

"Ultimately, everything in AI is multiplications and additions."
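Van der Wilk's point can be taken almost literally. In this sketch (with made-up numbers), one layer of a neural network is just a matrix of multiplications, a vector of additions, and a simple rule that clips negative values:

```python
import numpy as np

x = np.array([0.5, -1.0, 2.0])         # input values
W = np.array([[0.1, 0.2, 0.3],
              [0.4, 0.5, 0.6]])         # learned weights (illustrative)
b = np.array([0.1, -0.2])               # learned biases (illustrative)

# One layer: multiply, add, clip negatives to zero.
layer_output = np.maximum(0, W @ x + b)
print(layer_output)
```

Stack enough of these layers and you get the systems behind tools like ChatGPT -- but each step remains multiplications and additions.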

Physicist John Hopfield let neural networks 'store and reconstruct images and other types of patterns in data' (Credit: AFP)