The Guardian - UK
David Benady

‘We need to act now’: how the University of Toronto is answering the call for safer, ethical AI

The Schwartz Reisman Institute brings together scholars and students from diverse fields to study the impacts of technology on society. Photograph: Lisa Sakulensky

For many, the seminal work of computer scientist Geoffrey Hinton and his students on deep learning established the University of Toronto as a birthplace for modern AI. Today, AI is rapidly becoming a key tool in our lives, embedded into activities from work and leisure to social policy, the justice system and even warfare. But AI’s meteoric rise, while offering huge benefits to society, is creating significant concerns, with world leaders and AI experts – including Hinton himself – warning of multiple dangers and calling for greater regulation of the technology.

AI’s risks range from the disruption of employment and embedding systemic bias into decision making, to the spread of disinformation and the existential threat of a “superintelligence” acting beyond human control.

To tackle such threats and harness the beneficial powers of this technology, the University of Toronto’s Schwartz Reisman Institute for Technology and Society is supporting interdisciplinary research, debate and educational programmes in this area. The aim is to encourage AI developers and policymakers to take a holistic, ethical approach, for the overall benefit of humanity, to the AI systems they develop and implement.

“Education must adapt to this new world. We need a whole slew of people working together to figure out the right way to control and regulate AI. The important thing is that we act now,” says David Lie, director of the Schwartz Reisman Institute and a cyber security expert.

The Schwartz Reisman Institute brings together scholars and students from diverse fields such as computing, philosophy, psychology, law and public policy to study the impacts of technology on society. The institute, which has recently appointed renowned AI safety experts Roger Grosse and David Duvenaud as Schwartz Reisman chairs in technology and society, seeks to keep policymakers, technologists and the public informed about the regulation and ethical standards of AI and other technologies. The institute uses a multidisciplinary approach to outline practical solutions to the problems and risks that technology presents to humanity.

AI is an area of growing capability and concern, with stark warnings issued by Hinton – University of Toronto professor emeritus and “godfather of AI” – about the dangers of the technology he helped invent. In a recent paper in the journal Science, he joined 25 experts – including Sheila McIlraith, professor in the Department of Computer Science at the University of Toronto and associate director at Schwartz Reisman – in highlighting critical problems with AI that need to be addressed.

The experts called for a greater focus on research and development to ensure AI systems align with human values, for more informed regulation, better oversight of AI systems, and greater understanding of how AI makes decisions.

In a collaborative initiative to address these issues, the Schwartz Reisman Institute has teamed up with the University of Toronto’s Department of Computer Science to create a series of course modules on ethics that are embedded into the university’s existing computer science undergraduate curriculum. The Embedded Ethics Education Initiative (E3I) aims to implant the seeds of a more thoughtful approach to AI and other technology in the minds of the technology leaders of the future.

“We want students to develop ethical sensitivity because they will be on the frontlines when they enter the workforce,” says McIlraith. “They’ll be the ones writing the code, developing the systems and using the data. It is imperative that ethical considerations are part of the fundamental design principles.”

McIlraith says that undergraduates have limited life experience and may be inclined to build technology that pushes boundaries, rather than considering the diverse stakeholders and end users who may be affected by it and designing the technology with them in mind. Many of these students go on to work at big tech firms once they graduate, so learning to identify stakeholders and build technology that benefits this diverse group is crucial.

“We wanted to start the conversations about the ethical side of AI early on, but we absolutely did not want to proselytise. Our aim has been to broaden their thinking about AI’s impacts on society through the insertion of ethics modules into select computer science courses rather than to lay down the law,” she says.

The Embedded Ethics Education Initiative integrates ethical concepts into existing courses on specific areas of computing. For instance, a course on computer games includes a discussion about where to draw the line between games that are highly engaging and those that are addictive. Another course raises questions about privacy in the use of facial recognition technology and how the risks can be mitigated.

The initiative began in 2020 and is being expanded to courses beyond computer science. In the 2023/24 academic year, total enrolment in computer science courses with E3I programming exceeded 8,000 students, and a further 1,500 students participated in E3I programming in courses outside computer science.

Another initiative is a new course to be run next year by Schwartz Reisman Institute director Lie with associate director Lisa Austin, who is also chair in law and technology at the University of Toronto. The course, on digital privacy and privacy regulation, will bring together law and engineering students to jointly tackle problems, learning from each other about the technological and legal implications of a variety of developments in AI and technology.

“This multidisciplinary approach to education is a departure from how things are normally done in universities,” says Lie. “But given the way technology is expanding across society, it doesn’t make sense to teach technology separately from everything else that we do.”

Lie believes that a cross-disciplinary strategy for training AI developers and policymakers is the key to creating a society at ease with AI.

“A measure of success would be the creation of a healthy ecosystem where, on balance, we are better off with the AI systems than without them. It will be a society where regulation is carefully and intelligently written and well implemented,” he says.

“My vision is to make us one of the leaders. Canada has already contributed greatly to machine learning and AI through the contributions of previous scholars such as Hinton, and I think we have an important role to play in this technology going forward.”
