The Conversation
Samar Fatima, Research Fellow, Enterprise AI and Data Analytics Hub, RMIT University

AI to Z: all the terms you need to know to keep up in the AI hype age

Deepmind/Unsplash/Artist: Champ Panupong Techawongthawon, CC BY-NC-SA

Artificial intelligence (AI) is becoming ever more prevalent in our lives. It’s no longer confined to certain industries or research institutions; AI is now for everyone.

It’s hard to dodge the deluge of AI content being produced, and harder yet to make sense of the many terms being thrown around. But we can’t have conversations about AI without understanding the concepts behind it.

We’ve compiled a glossary of terms we think everyone should know, if they want to keep up.

Algorithm

An algorithm is a set of instructions given to a computer to solve a problem or to perform calculations that transform data into useful information.
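
As a rough illustration, here is a tiny algorithm written in Python (the language is just an example; the article doesn't prescribe one). It is a fixed sequence of steps that turns a list of numbers into a single useful figure, their average.

```python
def average(numbers):
    """A simple algorithm: step through the data, add it up, divide by the count."""
    total = 0
    for value in numbers:        # step 1: accumulate the raw data
        total += value
    return total / len(numbers)  # step 2: transform it into useful information

print(average([3, 5, 7]))  # 5.0
```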

Alignment problem

The alignment problem refers to the discrepancy between our intended objectives for an AI system and the output it produces. A misaligned system can be advanced in performance, yet behave in a way that’s against human values. We saw an example of this in 2015 when an image-recognition algorithm used by Google Photos was found auto-tagging pictures of black people as “gorillas”.

Artificial General Intelligence (AGI)

Artificial general intelligence refers to a hypothetical point in the future where AI is expected to match (or surpass) the cognitive capabilities of humans. Most AI experts agree this will happen, but disagree on specifics such as when it will happen and whether it will result in AI systems that are fully autonomous.


Read more: Will AI ever reach human-level intelligence? We asked five experts


Artificial Neural Network (ANN)

Artificial neural networks are computer algorithms used within a branch of AI called deep learning. They’re made up of layers of interconnected nodes in a way that mimics the neural circuitry of the human brain.
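
As a minimal sketch (in Python with the NumPy library, an illustrative choice rather than anything specific to the article), here is a single layer of interconnected nodes: each node takes a weighted sum of its inputs and passes it through an activation function. A deep-learning model stacks many such layers and adjusts the weights during training.

```python
import numpy as np

def dense_layer(inputs, weights, biases):
    """One layer of an artificial neural network: weighted sums plus a non-linearity (ReLU)."""
    return np.maximum(0, inputs @ weights + biases)

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))   # one input with 4 features
w = rng.normal(size=(4, 3))   # connections from the 4 inputs to 3 nodes
b = np.zeros(3)
print(dense_layer(x, w, b))   # the activations of the 3 nodes
```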

Big data

Big data refers to datasets that are much more massive and complex than traditional data. These datasets, which greatly exceed the storage capacity of household computers, have helped current AI models perform with high levels of accuracy.

Big data can be characterised by four Vs: “volume” refers to the overall amount of data, “velocity” refers to how quickly the data grow, “veracity” refers to how accurate and trustworthy the data are, and “variety” refers to the different formats the data come in.

Chinese Room

The Chinese Room thought experiment was first proposed by American philosopher John Searle in 1980. It argues a computer program, no matter how seemingly intelligent in its design, will never be conscious and will remain unable to truly understand its behaviour as a human does.

This concept often comes up in conversations about AI tools such as ChatGPT, which seem to exhibit the traits of a self-aware entity – but are actually just presenting outputs based on predictions made by the underlying model.

Deep learning

Deep learning is a category within the machine-learning branch of AI. Deep-learning systems use advanced neural networks and can process large amounts of complex data to achieve higher accuracy.

These systems perform well on relatively complex tasks and can even exhibit human-like intelligent behaviour.

Diffusion model

A diffusion model is an AI model that learns by adding random “noise” to a set of training data before removing it, and then assessing the differences. The objective is to learn about the underlying patterns or relationships in data that are not immediately obvious.

These models are designed to self-correct as they encounter new data and are therefore particularly useful in situations where there is uncertainty, or if the problem is very complex.
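
Here is a toy sketch of the “adding noise” step only, in Python with NumPy. The blending formula is a simplified assumption; a real diffusion model would also be trained to reverse the process and recover the original data.

```python
import numpy as np

def add_noise(data, noise_level):
    """Forward step of a toy diffusion process: blend the data with random Gaussian noise."""
    noise = np.random.normal(size=data.shape)
    return np.sqrt(1 - noise_level) * data + np.sqrt(noise_level) * noise

x = np.linspace(-1, 1, 8)        # stand-in for training data
for level in (0.1, 0.5, 0.9):    # progressively noisier versions
    print(level, add_noise(x, level).round(2))
```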

Explainable AI

Explainable AI is an emerging, interdisciplinary field concerned with creating methods that will increase users’ trust in the processes of AI systems.

Due to the inherent complexity of certain AI models, their internal workings are often opaque, and we can’t say with certainty why they produce the outputs they do. Explainable AI aims to make these “black box” systems more transparent.

Generative AI

These are AI systems that generate new content – including text, image, audio and video content – in response to prompts. Popular examples include ChatGPT, DALL-E 2 and Midjourney.

Labelling

Data labelling is the process through which data points are categorised to help an AI model make sense of the data. This involves identifying data structures (such as image, text, audio or video) and adding labels (such as tags and classes) to the data.

Humans do the labelling before machine learning begins. The labelled data are split into distinct datasets for training, validation and testing.

The training set is fed to the system for learning. The validation set is used to verify whether the model is performing as expected and when parameter tuning and training can stop. The testing set is used to evaluate the finished model’s performance.
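
As an illustration of those three splits, here is a minimal sketch using scikit-learn (a library chosen purely for illustration; the data and proportions are made up).

```python
from sklearn.model_selection import train_test_split

texts  = ["great movie", "terrible plot", "loved it", "boring", "fantastic", "awful"]
labels = ["positive", "negative", "positive", "negative", "positive", "negative"]

# Hold out a third of the labelled data, then split that portion
# half-and-half into validation and test sets.
X_train, X_rest, y_train, y_rest = train_test_split(texts, labels, test_size=1/3, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # 4 1 1
```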

Large Language Model (LLM)

Large language models (LLMs) are trained on massive quantities of unlabelled text. They analyse data, learn the patterns between words and can produce human-like responses. Some examples of AI systems that use large language models are OpenAI’s GPT series and Google’s BERT and LaMDA series.

Machine learning

Machine learning is a branch of AI that involves training AI systems to be able to analyse data, learn patterns and make predictions without specific human instruction.

Natural language processing (NLP)

While large language models are a specific type of AI model used for language-related tasks, natural language processing is the broader AI field that focuses on machines’ ability to learn, understand and produce human language.

Parameters

Parameters are the internal values a machine-learning model learns during training. You can think of them as the weights and biases the model uses when making a prediction or performing a task.

Since parameters determine how the model will process and analyse data, they also determine how it will perform. Closely related are “hyperparameters”: settings chosen before training, such as the number of neurons in a given layer of a neural network. Increasing the number of neurons allows the network to tackle more complex tasks, but the trade-off is higher computation time and costs.
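
To see how quickly parameters add up, here is a small Python/NumPy sketch (the layer sizes are purely illustrative): a single layer with four inputs and three neurons already has 15 learnable values.

```python
import numpy as np

# A layer with 4 inputs and 3 neurons: a 4x3 weight matrix plus 3 biases.
weights = np.zeros((4, 3))
biases = np.zeros(3)

print(weights.size + biases.size)  # 15 learnable parameters in this one small layer
```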

Responsible AI

The responsible AI movement advocates for developing and deploying AI systems in a human-centred way.

One aspect of this is to embed AI systems with rules that will have them adhere to ethical principles. This would (ideally) prevent them from producing outputs that are biased, discriminatory or could otherwise lead to harmful outcomes.

Sentiment analysis

Sentiment analysis is a technique in natural language processing used to identify and interpret the emotions behind a text. It captures implicit information such as the author’s tone and the extent of positive or negative expression.
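
As one illustration (using the Hugging Face transformers library, which the article doesn’t mention), a ready-made sentiment pipeline can label a piece of text in a few lines.

```python
from transformers import pipeline  # assumes the Hugging Face transformers package is installed

classifier = pipeline("sentiment-analysis")   # downloads a default pretrained model
print(classifier("I absolutely loved this film!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```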

Supervised learning

Supervised learning is a machine-learning approach in which labelled data are used to train an algorithm to make predictions. The algorithm learns to match the labelled input data to the correct output. After learning from a large number of examples, it can continue to make predictions when presented with new data.
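
Here is a minimal supervised-learning sketch with scikit-learn (the tiny dataset is invented for illustration): the model is fitted on labelled examples and then predicts labels for new data.

```python
from sklearn.linear_model import LogisticRegression

# Labelled examples: hours studied -> fail (0) or pass (1)
X_train = [[1], [2], [3], [8], [9], [10]]
y_train = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X_train, y_train)           # learn the mapping from inputs to labels

print(model.predict([[2.5], [7.5]]))  # predictions for unseen data, e.g. [0 1]
```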

Training data

Training data are the (usually labelled) data used to teach AI systems how to make predictions. The accuracy and representativeness of training data have a major impact on a model’s effectiveness.

Transformer

A transformer is a type of deep-learning model used primarily in natural language processing tasks.

The transformer is designed to process sequential data, such as natural language text, and figure out how the different parts relate to one another. This can be compared to how a person reading a sentence pays attention to the order of the words to understand the meaning of the sentence as a whole.

One example is the generative pre-trained transformer (GPT), which the ChatGPT chatbot runs on. The GPT model uses a transformer to learn from a large corpus of unlabelled text.
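
Here is a stripped-down sketch of the attention idea at the heart of the transformer, in plain NumPy (it omits many details, such as multiple attention heads and positional encodings): every position in the sequence is updated using a weighted mix of every other position.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: score every pair of positions, then mix the values."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
    return weights @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))      # 5 "words", each represented by 8 numbers
print(attention(x, x, x).shape)  # (5, 8): each word now reflects its relations to the others
```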

Turing Test

The Turing test is a machine intelligence concept first introduced by computer scientist Alan Turing in 1950.

It’s framed as a way to determine whether a computer can exhibit human intelligence. In the test, computer and human outputs are compared by a human evaluator. If the outputs are deemed indistinguishable, the computer has passed the test.

Google’s LaMDA and OpenAI’s ChatGPT have been reported to have passed the Turing test – although critics say the results reveal the limitations of using the test to compare computer and human intelligence.

Unsupervised learning

Unsupervised learning is a machine-learning approach in which algorithms are trained on unlabelled data. Without human intervention, the system explores patterns in the data, with the goal of discovering unidentified patterns that could be used for further analysis.
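
A minimal unsupervised-learning sketch using k-means clustering from scikit-learn (the points are invented for illustration): no labels are supplied, yet the algorithm groups the data by the patterns it finds.

```python
from sklearn.cluster import KMeans

# Unlabelled 2-D points: two loose groups, but no labels are provided.
X = [[1, 1], [1.5, 2], [2, 1.5], [8, 8], [8.5, 9], [9, 8]]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # e.g. [0 0 0 1 1 1] -- clusters discovered without human labelling
```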


Kok-Leong Ong receives funding from NHMRC, MRFF and CSIRO.

Samar Fatima does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

This article was originally published on The Conversation. Read the original article.
