The New Daily
Ash Cant

With an incoming ChatGPT rival, is it time to embrace AI?

Just months after ChatGPT was released to the public, Google has announced its own chatbot, Bard, in the latest move towards the normalisation of artificial intelligence.

“Today, we’re taking another step forward by opening it [Bard] up to trusted testers ahead of making it more widely available to the public in the coming weeks,” Google CEO Sundar Pichai said on Tuesday. 

Dr Armin Alimardani, from the University of Wollongong’s School of Law, researches the social, ethical and legal impact of emerging technologies like AI.

In an interview with The New Daily, he expressed concern about the potential negative consequences of Google's long-mooted tech, questioning its readiness and whether it is being released prematurely due to competitive pressures.

Bard draws on knowledge from the internet and then provides high-quality responses, similar to ChatGPT, which took the world by storm in late 2022.

There has been a lot of buzz around AI technology like ChatGPT, drawing attention to how it could disrupt institutions, most notably education.

Professors and researchers expressed concerns that ChatGPT could result in students cheating their way through education by simply getting the AI to spit out answers and essays.

ChatGPT has been banned in schools in Australia and abroad, but some experts think it’s time we embrace AI, not just in the context of education, but in everyday life.

ChatGPT trawls all corners of the internet, from recipes to YouTube comments, to generate its answers. Photo: Getty

Are we ready to embrace AI?

“I think embracing AI is probably going to take some time,” Professor Kok-Leong Ong, director of RMIT’s Enterprise AI and Data Analytics Hub, told The New Daily.

AI technology has been around for some time, he said, adding that some people may have already embraced AI without even realising it.

For example, Grammarly is an AI tool that helps improve grammar, while some transcription services are now run entirely by AI.

Understandably, people don't want AI to take over their jobs, but Dr Alimardani thinks that in the near future AI could enhance many aspects of the work we do.

“We shouldn’t forget that one of the primary aims of AI research is to improve the quality of life for individuals by making tasks easier and more efficient,” he said.

Professor Ong said that while we don’t know what the future holds, we can look back at history when questioning advancing technology and job security.

“I think this question about whether a particular piece of technology is going to put us out of jobs, history has shown that that’s really not the case,” he said.

“I think it’s just the nature of the job that will transform and change.”

Should we be worried about the rise of AI?

Dr Alimardani says technology is unpredictable, and the rise of tools like ChatGPT did not unfold the way he had imagined.

He recognises the potential benefits of AI but also holds some concerns, noting that his opinion is based on the information available to him and is subject to change.

“It’s really important to know how this kind of AI mechanism works and not take whatever it says as the truth,” he said.

Even ChatGPT is aware of the concerns it poses. Photo: ChatGPT

Dr Alimardani and Professor Ong acknowledged concerns about AI reinforcing existing biases, spitting out misinformation or being misused.

Dr Alimardani also warns that software like ChatGPT could be used for harm.

Given its ability to produce human-like responses, it could be used for malicious purposes such as creating large-scale scam messages or spreading misinformation on social media.

At present, ChatGPT is more like a preview of the product, and it has not yet been officially released to developers, Dr Alimardani said.

“OpenAI has made ChatGPT available to the public for free as a means of proactively identifying and mitigating any potential negative consequences of its use,” he said.

“It’s great that OpenAI is appearing to be responsible and anticipating and preventing any potential abuse of ChatGPT, but there may still be loopholes that could be exploited.”

It’s also not uncommon for people to be concerned about something new affecting our lives.

“I think it’s all about understanding the technology and precision of what the technology is capable of, and then respond to it in the right way,” Professor Ong said.

The AI loop

“As AI-generated content becomes more common on the internet, for instance many YouTubers are using AI to create content, there’s a potential risk for future AI models to be trained on data generated by previous AI models [instead of human generated data],” Dr Alimardani said.

“Now imagine what would happen if the internet were filled with AI-generated misinformation and fake news.”

He said more data has led to improved performance in many language models, but that also means AI is being fed more “toxic, sexist and racist” content.

“There is currently a movement encouraging use [of] high-quality and clean data instead of relying on a larger amount of data to reduce biases and improve the AI system’s overall performance,” he said.

“But finding a lot of clean and curated data is not that easy.”

Artificial intelligence can be used to make lives easier.

Education and AI

When ChatGPT was made public, people speculated it could spell the death of search engines and student essays. However, Dr Alimardani thinks there are ways to use AI to enhance education.

He is working on safe-to-fail AI alongside UNSW Associate Professor Emma A Jane to help improve the quality of learning.

He acknowledges that success is not guaranteed for the safe-to-fail AI project.

“It is possible that AI may not meet the required standards or expectations for certain educational tools we’re developing, and that’s one of the aims of this project – to assess where it can meet our standards,” he said.

The aim of safe-to-fail AI is not to replace educators, but to aid them.

We’re not close to having AI replace humans in many contexts, he said.

He is willing to integrate AI when teaching, but not for first-year students.

The reason is that ChatGPT lacks the capacity to think critically and creatively, and these are crucial skills students need to navigate their future careers effectively.

By first mastering these skills, students can then leverage AI to enhance their capacity to be innovative and original thinkers.

“I will incorporate the use of AI in my subject for the third-year students as they are close to completing their degree and will soon be entering the job market,” he said.

“Having experience of using AI in their university assessment tasks will not only make them attractive candidates and increase their chances of finding a job, but also equip them with the skills to perform better in their future careers.”
