The Conversation
Ehsan Nabavi, Senior Lecturer in Technology and Society, Responsible Innovation Lab, Australian National University

AI is set to transform science – but will we understand the results?

Artificial intelligence (AI) has taken centre stage in basic science. The five winners of the 2024 Nobel Prizes in Chemistry and Physics shared a common thread: AI.

Indeed, many scientists – including the Nobel committees – are celebrating AI as a force for transforming science.

As one of the laureates put it, AI’s potential for accelerating scientific discovery makes it “one of the most transformative technologies in human history”. But what will this transformation really mean for science?

AI promises to help scientists do more, faster, with less money. But it brings a host of new concerns, too – and if scientists rush ahead with AI adoption they risk transforming science into something that escapes public understanding and trust, and fails to meet the needs of society.

The illusions of understanding

Experts have already identified at least three illusions that can ensnare researchers using AI.

The first is the “illusion of explanatory depth”. Just because an AI model excels at predicting a phenomenon – as AlphaFold does with protein structures, predictions that earned its creators the Nobel Prize in Chemistry – that doesn’t mean it can accurately explain it. Research in neuroscience has already shown that AI models designed for optimised prediction can lead to misleading conclusions about the underlying neurobiological mechanisms.

Second is the “illusion of exploratory breadth”. Scientists might think they are investigating all testable hypotheses in their exploratory research, when in fact they are only looking at a limited set of hypotheses that can be tested using AI.

Finally, the “illusion of objectivity”. Scientists may believe AI models are free from bias, or that they can account for all possible human biases. In reality, however, all AI models inevitably reflect the biases present in their training data and the intentions of their developers.

Cheaper and faster science

One of the main reasons for AI’s increasing appeal in science is its potential to produce more results, faster, and at a much lower cost.

An extreme example of this push is the “AI Scientist” machine recently developed by Sakana AI Labs. The company’s vision is to develop a “fully AI-driven system for automated scientific discovery”, where each idea can be turned into a full research paper for just US$15 – though critics said the system produced “endless scientific slop”.

Do we really want a future where research papers can be produced with just a few clicks, simply to “accelerate” the production of science? This risks inundating the scientific ecosystem with papers of no meaning or value, further straining an already overburdened peer-review system.

We might find ourselves in a world where science, as we once knew it, is buried under the noise of AI-generated content.

A lack of context

The rise of AI in science comes at a time when public trust in science and scientists is still fairly high, but we can’t take it for granted. Trust is complex and fragile.

As we learned during the COVID pandemic, calls to “trust the science” can fall short because scientific evidence and computational models are often contested, incomplete, or open to various interpretations.

However, the world faces any number of problems, such as climate change, biodiversity loss, and social inequality, that require public policies crafted with expert judgement. This judgement must be sensitive to specific situations, drawing on input from many disciplines and lived experiences, all interpreted through the lens of local culture and values.

As an International Science Council report published last year argued, science must recognise nuance and context to rebuild public trust. Letting AI shape the future of science may undermine hard-won progress in this area.

If we allow AI to take the lead in scientific inquiry, we risk creating a monoculture of knowledge that prioritises the kinds of questions, methods, perspectives and experts best suited for AI.

This can move us away from the transdisciplinary approach essential for responsible AI, as well as the nuanced public reasoning and dialogue needed to tackle our social and environmental challenges.

A new social contract for science

As the 21st century began, some argued scientists had a renewed social contract: in exchange for public funding, they would focus their talents on the most pressing issues of our time. The goal is to help society move toward a more sustainable biosphere – one that is ecologically sound, economically viable and socially just.

The rise of AI presents scientists with an opportunity not just to fulfil their responsibilities but to revitalise the contract itself. However, scientific communities will need to address some important questions about the use of AI first.

For example, is using AI in science a kind of “outsourcing” that could compromise the integrity of publicly funded work? How should this be handled?

What about the growing environmental footprint of AI? And how can researchers remain aligned with society’s expectations while integrating AI into the research pipeline?

The idea of transforming science with AI without first establishing this social contract risks putting the cart before the horse.

Letting AI shape our research priorities without input from diverse voices and disciplines can lead to a mismatch with what society actually needs and result in poorly allocated resources.

Science should benefit society as a whole. Scientists need to engage in real conversations about the future of AI within their community of practice and with research stakeholders. These discussions should address the dimensions of this renewed social contract, reflecting shared goals and values.

It’s time to actively explore the various futures that AI for science enables or blocks – and establish the necessary standards and guidelines to harness its potential responsibly.


Ehsan Nabavi does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

This article was originally published on The Conversation. Read the original article.
