
Australia needs to be a world leader in responsible and ethical AI

Australia's Human Rights Commissioner, Lorraine Finlay. Picture by Keegan Carroll

The human rights impact of new and emerging technologies has been a key focus at the Australian Human Rights Commission for a number of years. Technology is essential to our lives, but it needs to be fair. There has never been a greater need to pay attention to technology and seize the opportunities it presents.

But we also need to recognise that these technologies can pose significant risks to our human rights and cause serious harm to individuals. At the same time, AI has the potential to help solve complex problems, boost productivity and efficiency, reduce human error and democratise information. The uses that have been canvassed in areas such as healthcare and education highlight the potential of this technology to enhance human rights.

One example of the potential benefits for human rights was seen in India in 2018, when Delhi police used AI-based facial recognition technology to reunite almost 3000 children with their parents in just four days. Within its first 15 months of operation, the pilot program had reunited 10,561 missing children with their families.

At the private enterprise level, AI products are readily being adopted in business models to improve efficiencies and outcomes for clients. But these tools are not without risks.

It is likely that AI systems will deepen privacy intrusions in new and concerning ways. AI products must be trained on enormous amounts of data, and social media companies already operate on a business model that relies heavily on collecting and monetising massive amounts of personal information. The collection of data to train AI products will only heighten these privacy concerns.

Despite the importance of the right to privacy, many enterprises that build and deploy large language models like ChatGPT have been reluctant to reveal much detail about the data used for training, or that data's provenance. It is also unlikely that these organisations have sought and received permission, or paid, for the internet data used to train their AI products.

AI products effectively seek to "understand" human patterns of behaviour and, with access to the appropriate data sets, these tools can do so, drawing conclusions about all aspects of our lives. It is one thing to have AI store details about the music I listen to or the movies I watch. But AI can also draw far more intrusive inferences about individuals, including about their mental and physical condition, political leanings or even their sexual orientation.

AI allows large amounts of relevant information to be considered in decision-making processes and may encourage efficient, data-driven decision making, but as AI is used in more of these processes, its regulation becomes increasingly important.

Algorithmic bias can entrench unfairness, or even result in unlawful discrimination. Several AI products promise to recommend the best applicant for a job based on past hiring data, but these systems may unintentionally produce discriminatory outcomes. One example was an AI recruiting tool used by Amazon that discriminated against women applying for technical jobs because the existing pool of Amazon software engineers was predominantly male.

Cautionary tales are emerging of AI chatbots hallucinating, spreading misinformation, producing biased content and engaging in hate speech. Generative AI is a game-changer: it is now cheaper and easier than ever before to run mass disinformation campaigns, and distinguishing between fact and fiction will become increasingly difficult. Even knowing whether we are interacting with a human or a machine may become a challenge. These are particularly critical challenges for democracies such as Australia, where we rely on our citizens being informed and engaged.

What we need to focus on is how we can harness the benefits of new and emerging technologies without causing harm or undermining human rights.

A Business Hunter summit entitled "AI - Friend or Foe?" was held on Thursday to examine the potential benefits and dangers of AI. Human Rights Commissioner Lorraine Finlay was one of the speakers. Picture by Marina Neil

Humanity needs to be placed at the heart of our engagement with AI. At all stages of a product's lifespan - from the initial concept through to its use by the consumer - we need to be asking not just what the technology is capable of doing, but why we want it to do those things. Technology has to be developed and used in responsible and ethical ways so that our fundamental rights and freedoms are protected.

Some governments and businesses are engaging proactively with these questions. However, far too many are not.

The recent release of the Department of Industry, Science and Resources' AI discussion paper is a welcome step in the right direction by the Australian government. Other initiatives that could be implemented immediately to help protect human rights include the introduction of an AI Safety Commissioner by government, and the effective use of human rights impact assessments by businesses.

Australia needs to be a world leader in responsible and ethical AI. The truth is that AI itself is neither friend nor foe. The more important question is whether the people using AI tools do so in positive or negative ways. Unless both government and business are prepared to step up and show leadership in this area, my fear is that we will see the risks to human rights increase exponentially.

Lorraine Finlay is Australia's Human Rights Commissioner.

This is an edited version of her speech to the AI conference held in Newcastle on Thursday.
