The Hindu
Technology
Subhashis Banerjee

If there’s a theory of AI, computer science is unlikely to provide it

The popular understanding seems to be that the day is not far off when an artificial intelligence (AI) will be able to think like humans and interact, at least through language, in a way that is indistinguishable from real humans. Such a day has been called “the singularity”, a pivotal moment for the human race. With the recent success of large language models (LLMs) like ChatGPT, which can interpret language use and compose sentences, many think this day is imminent.

When confronted with such a possibility, Ludwig Wittgenstein, one of the most influential philosophers of the 20th century, famously said, “But a machine surely cannot think!” He perhaps meant that the concepts of thinking and intelligence can apply only to living beings; applying them to anything else would be grammatically and logically incorrect. Nevertheless, machines can indeed share some traits of human behaviour, so even without precise definitions of these terms, their increasing use for machines is perhaps germane. In fact, in the eventuality that we do go past the “singularity” – a proposition that sounds frightening – a machine may someday have to be treated like a person.

The universal computer

Most people trained in computer science believe such AI must be possible. This is because central to the accepted theory of computation – developed by Alan Turing, among others, in 1936 – is the existence of an abstract algorithmic concept of a universal computer: a device that can simulate the actions of all other computers.

At the risk of some over-simplification, we can think of this universal computer as one that can execute any program written in any modern programming language, given unbounded memory and time. Of course, it may not be able to do so “efficiently”, but that is only because we may not yet have discovered a sufficiently efficient model of computation. Given adequate time and memory, the universal computer can, in principle, simulate with arbitrary precision all the physical and chemical processes of the brain and other parts of the human body – and indeed all of nature’s processes – provided their theories are understood. The physicist, philosopher, and computer scientist David Deutsch calls this a fundamental law of physics and computer science.
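To make the idea of universality a little more concrete, here is a minimal sketch in Python – my own illustration, not the author’s – of a single fixed program that can run any Turing machine handed to it as data. The example machine, a unary incrementer, is likewise hypothetical.

```python
# A minimal sketch of universality: one fixed program that can run ANY
# Turing machine it is given as data. The example machine below (a unary
# incrementer) is a made-up illustration, not from the article.

def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a single-tape Turing machine described by `transitions`.

    transitions: dict mapping (state, symbol) -> (new_state, write_symbol, move),
                 where move is -1 (left) or +1 (right). Stops in state 'halt'.
    """
    tape = dict(enumerate(tape))        # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Example machine: append one '1' to a block of 1s (unary increment).
increment = {
    ("start", "1"): ("start", "1", +1),   # scan right over the 1s
    ("start", "_"): ("halt", "1", +1),    # write a 1 at the end and halt
}

print(run_turing_machine(increment, "111"))  # -> 1111
```

The same `run_turing_machine` function, unchanged, would run any other machine description fed to it – which is the sense in which one computer can simulate all others.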

Of course, Turing fully understood universality and believed AI must be possible. If it is, it will also need sensorimotor perception, because it cannot rely on an external intelligence to provide it with the essential means to survive and exchange signals with the outside world. Turing also estimated that the resources required to simulate a human brain – which, he argued, must itself be a universal computer – would not be very large; in fact, less than those of a typical modern laptop. After all, the average size of the human brain is not all that large. And the fact that there must exist computational problems that cannot be solved by a universal computer – as established by Gödel’s incompleteness theorem and Turing’s own results on computability – did not deter his argument, because humans too cannot solve many problems.
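Turing’s computability result mentioned above can be sketched in a few lines; the following is my own paraphrase of the standard diagonal argument, not anything from the article, and the `halts` function is assumed only for the sake of contradiction.

```python
# A sketch of Turing's diagonal argument: why no universal computer can
# decide, for every program, whether that program halts.

def halts(program, argument) -> bool:
    """Hypothetical decider: would program(argument) eventually stop?
    Turing's result is precisely that no correct, always-terminating
    implementation of this function can exist."""
    raise NotImplementedError("assumed only for the sake of contradiction")

def contrarian(program):
    """Do the opposite of whatever halts() predicts about program(program)."""
    if halts(program, program):
        while True:          # predicted to halt, so loop forever instead
            pass
    return "halted"          # predicted to loop, so halt immediately

# If halts() existed, asking about contrarian(contrarian) would force it to be
# wrong either way -- the contradiction at the heart of undecidability.
```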

Turing also formulated a test for AI in which a human judge, interacting with a candidate, should be unable to tell whether it is a human or a program. Many believe that current state-of-the-art LLM-based AI software like ChatGPT, built using deep neural networks, may have come close to passing this Turing test.

What’s the right theory of intelligence?

Thus, the question arises: do we know how the brain works well enough to be able to program a universal simulator for AI? That is, can a parametrised neural network model, with parameters estimated using a purely data-driven inductive method, become a program for the universal simulator? Unfortunately, the answer to both questions has to be a resounding ‘no’. We are not even close.

Indeed, as the example below demonstrates, ChatGPT does not seem to understand even the basic logic of numbers, despite its copious training from examples.

User: Can you give me a reason why 793 is smaller than 787?

ChatGPT: It appears to be a typo. 787 is smaller than 793.

User: Okay. Why is 787 smaller than 793?

ChatGPT: In numerical order, 787 is smaller than 793 because the digit in the hundreds place (7) is smaller in 787 compared to 793.

User: But the digit in the hundreds place is the same in both numbers!

ChatGPT: My apologies for the confusion in my previous response. I made an error in my explanation. Let me clarify: In both 787 and 793, the digit in the hundreds place is indeed the same (7). To determine the comparison, we need to look at the digits in the tens place. In this case, the digit in the tens place is 8 in both numbers. Since the tens place is the same, we then compare the digits in the units place. Here, 7 is smaller than 3. Therefore, 787 is smaller than 793.
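For contrast, the deterministic place-value rule the model fumbles above takes only a few lines of explicit logic. The sketch below is my own illustration (the function name is hypothetical, not from the article): it compares two decimal numerals digit by digit from the most significant place.

```python
# The place-value rule ChatGPT fumbles above, written as explicit logic.

def compare_numerals(a: str, b: str) -> str:
    """Compare two non-negative integers given as decimal strings."""
    # A longer numeral (ignoring leading zeros) is always the larger number.
    a, b = a.lstrip("0") or "0", b.lstrip("0") or "0"
    if len(a) != len(b):
        return f"{a} < {b}" if len(a) < len(b) else f"{a} > {b}"
    # Same length: scan from the most significant digit; the first
    # position where the digits differ decides the comparison.
    for da, db in zip(a, b):
        if da != db:
            return f"{a} < {b}" if da < db else f"{a} > {b}"
    return f"{a} = {b}"

print(compare_numerals("787", "793"))  # -> 787 < 793 (tens place: 8 < 9)
```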

In general, logical deduction – perhaps like most other cognitive tasks – cannot be extrapolated, generalised, or inductively derived purely from data, which is what current state-of-the-art AI systems are based on.

For example, no amount of training data can give us a mathematical abstraction like the Pythagorean theorem. It had to be deduced logically, using invented representations like numbers. And even with logical deduction, there is a fundamental computational resource limitation. We know from the theory of computation that most logical deduction problems are computationally intractable, and that there is an infinite hierarchy of such problems whose solutions require ever-increasing amounts of time and memory. We clearly do not have the right theory of intelligence as yet.
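As a toy illustration of that intractability – my own example, not the author’s – even the most basic deduction problem, propositional satisfiability, forces a brute-force solver to examine a number of cases that doubles with every additional variable.

```python
# Deciding whether a propositional formula is satisfiable by brute force
# must, in the worst case, try all 2**n truth assignments.

from itertools import product

def satisfiable(formula, variables):
    """Check a formula (a function of a dict of booleans) by exhaustive search."""
    tried = 0
    for values in product([False, True], repeat=len(variables)):
        tried += 1
        if formula(dict(zip(variables, values))):
            return True, tried
    return False, tried

# (p or q) and (not p or q) and (not q) is unsatisfiable, so all
# 2**2 = 4 assignments must be inspected before we can conclude anything.
f = lambda v: (v["p"] or v["q"]) and ((not v["p"]) or v["q"]) and (not v["q"])
print(satisfiable(f, ["p", "q"]))  # -> (False, 4)
```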

A stone, a watch, a frog

Further, scientific theories are not read off from observations of nature. They are obtained through a process of abduction: by making hypotheses – sometimes with wild guesses – and by critiquing and reasoning about them, often, but not always, with the aid of physical experiments. Indeed, we have obtained fantastic theories like quantum mechanics and gravitation based on curved spacetime using only such methods; they were validated only post facto, with observations from experiments and telescopes.

And despite its obvious appeal, the Turing test is an inadequate test of intelligence. It requires a judge to decide empirically whether an AI is indistinguishable from humans. A purely behavioural test, however, is bound to be insufficient: judging a genuine AI will invariably require explanations of how it works, and it is well known from probability theory that, in general, multiple – possibly infinitely many – internal configurations and explanations of a system can result in the same behavioural manifestations over the observables.
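A toy version of that point, purely my own illustration: two internally very different “systems” that no amount of input-output probing can tell apart, because their observable behaviour is identical.

```python
# Two mechanisms, one behaviour: a behavioural test alone cannot reveal
# which internal explanation is the right one.

LOOKUP = {n: n * n for n in range(1000)}

def system_a(n: int) -> int:
    return n * n              # computes the answer with a formula

def system_b(n: int) -> int:
    return LOOKUP[n]          # retrieves the answer from a memorised table

# Every probe in the tested range gives the same result, yet the two
# mechanisms "explain" that behaviour in entirely different ways.
assert all(system_a(n) == system_b(n) for n in range(1000))
```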

Such a test is like asking a philosopher to tell which is the living object just by looking at a stone, a watch, and a frog. It also does not reveal who is responsible for the AI’s behaviour: if it was an external designer, the program is not an AI.

Can machines think?

Nevertheless, it is the quest to pass the test that has brought AI systems to where they are. They are indeed impressive in their conversational coherence, and there are certainly many engineering applications where they can be used effectively. That will, however, require us to ensure that they adhere to the conventional safety principles of engineering. In that sense, the Turing test has certainly been useful.

Programming intelligence requires us to cross new epistemological barriers. Pure empiricism and inductive reasoning from data, using fake-it-till-you-make-it styles of optimisation, or even logical deduction, cannot possibly amount to an adequate theory of intelligence. We do not even know how to make wild guesses and hypotheses algorithmically, let alone critique and analyse them. We are also fairly clueless, algorithmically, about emotions and feelings like pain and happiness, and of course about sensorimotor perception.

The ultimate test of AI will have to be based on explanatory theories of AI – and if we understand them, we should be able to program them. Ultimately, we have to admit, even if reluctantly, that if we ever discover a theory of AI, it is more likely to emerge from the discipline of philosophy than from computer science.

Subhashis Banerjee is Professor of Computer Science, Ashoka University, and the Centre for Digitalisation, AI and Society at the university. Views expressed are personal.
