International Business Times UK
Callum Conway-Shaw

AI Will Take "Decades" to Reach Human-Level Intelligence, Says Godfather of AI

LeCun received the 2018 Turing Award, together with Yoshua Bengio and Geoffrey Hinton, for their work on deep learning. The three are sometimes referred to as the "Godfathers of AI" and "Godfathers of Deep Learning". (Credit: Wikimedia Commons)

Leading technology experts, scholars and industry leaders from around the world converged at a symposium in Hong Kong this week to share their insights on AI.

The event was jointly hosted by the Hong Kong University of Science and Technology (HKUST) and the Greater Bay Area Association of Academicians (GBAAA).

Among the speakers was Turing Award winner Professor Yann LeCun, hailed by the media as one of the "Godfathers of AI".

In a keynote speech, he claimed it could take "decades" for AI to reach a human level of intelligence.

This contradicts some of the concerns expressed at last month's AI Safety Summit at Bletchley Park, Buckinghamshire, where world leaders and tech specialists gathered to discuss the benefits and dangers posed by AI.

Hosted by UK Prime Minister Rishi Sunak, the conference concluded with the signing of the Bletchley Declaration, in which countries including the UK, the United States and China agreed on the "need for international action to understand and collectively manage potential risks through a new joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community".

Prior to the event, a document signed by 23 tech experts claimed it was "utterly reckless" to pursue ever more powerful AI systems before understanding how to make them safe.

Offering a more optimistic perspective on AI's uses, LeCun suggested that objective-driven AI will be used to help predict the future, and that the Joint Embedding Predictive Architecture (JEPA) model will represent a paradigm shift in predictive modelling, bringing about a "new Renaissance".

Earlier this year, Meta announced the design of an Image Joint Embedding Predictive Architecture (I-JEPA), which learns by building an internal model of the outside world, comparing abstract representations of images rather than comparing the pixels themselves.

I-JEPA delivers strong performance on multiple computer vision tasks and is much more computationally efficient than other widely used computer vision models. The representations learned by I-JEPA can also be used for many different applications without needing extensive fine-tuning.
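To make the idea concrete, below is a minimal, hypothetical sketch of the joint-embedding approach described above: a predictor is trained to match abstract representations produced by a separate target encoder, rather than reconstructing pixels. The toy encoder, module sizes, random "patches" and masking setup are illustrative placeholders, not Meta's actual I-JEPA implementation.

```python
# Simplified, hypothetical sketch of the joint-embedding predictive idea:
# predict abstract representations of a target block from a context block,
# with the loss computed in representation space rather than pixel space.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Toy stand-in for the image encoder (I-JEPA uses vision transformers)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 16, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x):
        return self.net(x)

context_encoder = TinyEncoder()
target_encoder = TinyEncoder()      # in I-JEPA this is a slowly updated copy of the context encoder
predictor = nn.Linear(64, 64)       # predicts target representations from context representations

# Fake batch of 16x16 single-channel "patches" standing in for image blocks
context_patch = torch.randn(8, 1, 16, 16)   # visible context block
target_patch = torch.randn(8, 1, 16, 16)    # masked target block

with torch.no_grad():
    target_repr = target_encoder(target_patch)   # abstract target, no pixel reconstruction

pred_repr = predictor(context_encoder(context_patch))
loss = nn.functional.mse_loss(pred_repr, target_repr)  # loss lives in representation space
loss.backward()
print(float(loss))
```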

According to LeCun, AI will become a "shared infrastructure" in the future, like the Internet today. That also means, he said, AI platforms must be made "open source".

"All interactions with the digital world will be mediated by AI assistants... They will constitute a repository of all human knowledge and culture," the leading AI expert said.

The past few months have seen several large tech companies develop their own versions of AI assistants, such as Google's Bard and Amazon's Alexa.

The issue of the "open source" AI platform was further discussed at a "Fireside Chat" among Prof. LeCun, HKUST Council Chairman and AI expert Prof. Harry Shum and Director of HKUST's Center for AI Research Prof. Pascal Fung, at the symposium held at the Asia Society Hong Kong Center.

Admitting the topic was complex and controversial, Prof. LeCun said open-source platforms would promote diversity. Prof. Shum shared the view that "open source is a good thing", but added that the AI industry would be divided on the idea, given companies' different considerations in developing large language models (LLMs).

A large language model (LLM) is a type of artificial intelligence (AI) algorithm that uses deep learning techniques and massive data sets to understand, summarise, generate and predict new content.

OpenAI's GPT models (e.g., GPT-3.5 and GPT-4, used in ChatGPT), Google's PaLM (used in Bard) and Meta's LLaMA, as well as BLOOM, Baidu's Ernie 3.0 Titan and Anthropic's Claude 2, are all notable examples of this type of technology.
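As a concrete illustration of the text generation such models perform, the short sketch below uses the Hugging Face transformers library with the small, openly available GPT-2 model; the library, model and prompt are illustrative choices and are not drawn from the article or from any of the systems named above.

```python
# Minimal text-generation example with a small open model (GPT-2) via the
# Hugging Face "transformers" library, for illustration only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "Artificial intelligence will change everyday life by",
    max_new_tokens=40,   # limit the length of the continuation
    do_sample=True,      # sample rather than greedy-decode
    temperature=0.8,     # controls how random the output is
)
print(result[0]["generated_text"])
```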

Professor Nancy Ip, HKUST President and GBAAA Council Chair who spearheaded the symposium, expressed her deep appreciation to the experts and scholars who shared their valuable insights at the event.

The symposium, she said, was aimed at "exploring the current AI research landscape and delving into the cutting-edge work being undertaken, to understand the potential AI holds as well as its limitations".

"The launch of Open AI's ChatGPT and other generative AI tools early this year has suddenly thrust AI into the forefront of our life and imagination," she added.

"As AI research progresses, we can anticipate even more transformative advancements that will redefine the way we work and live."

Since the release of ChatGPT in November 2022, AI has exploded into the mainstream discourse, and the technology has been developing at a rapid pace.

This has led some experts to express concern at the unregulated development of AI models and to call for laws and guidelines to be introduced.

A letter issued by the Future of Life Institute expressed this growing unease at AI's advancement, stating: "They should be developed only once we are confident that their effects will be positive and their risks will be manageable."
