The Conversation
Blayne Haggart, Associate Professor of Political Science, Brock University

The Canadian government's poor track record on public consultations undermines its ability to regulate new technologies

The C.D. Howe Building is the home of Innovation, Science and Economic Development Canada, the department of the federal government responsible for regulating industry. (Shutterstock)

Over the last five years, Canada’s federal government has announced a litany of much-needed plans to regulate big tech, on issues ranging from social media harms, Canadian culture and online news to the right-to-repair of software-connected devices, and artificial intelligence (AI).

As digital governance scholars who have just published a book on the transformative social effects of data and digital technologies, we welcome the government’s focus on these issues.

Difficult conversations

By engaging with the public and experts in an open setting, governments can “kick the tires” on various ideas and build a social consensus on these policies, with the aim of creating sound, politically stable outcomes. When done well, a good public consultation can take the mystery out of policy.

For all its plans, the Liberal government's public-consultation record on digital policy has been abysmal. Its superficial engagements with the public and experts alike have undermined essential parts of the policymaking process, while also neglecting its responsibility to raise public awareness and educate Canadians on complex, often controversial, technical issues.

Messing up generative AI consultations

The most recent case of a less-than-optimal consultation has to do with Innovation, Science and Economic Development Canada’s (ISED) attempts to stake out a regulatory position on generative AI.

The government apparently started consultations about generative AI in early August, but news about them didn’t become public until Aug. 11. The government later confirmed on Aug. 14 that ISED “is conducting a brief consultation on generative AI with AI experts, including from academia, industry, and civil society on a voluntary code of practice intended for Canadian AI companies.”

The consultations are slated to close on Sept. 14.

Holding a short, unpublicized consultation in the depths of summer is almost guaranteed to engage no one outside of well-funded industry groups. Invitation-only consultations can lead to biased policymaking that risks failing to engage with all Canadian interests.

Defining the problem

The lack of effective consultation is particularly egregious given the novelty and controversy surrounding generative AI, the technology that burst into public consciousness last year with the unveiling of OpenAI’s ChatGPT chatbot.

Limited stakeholder consultations are not appropriate when there exists, as is the case with generative AI, a dramatic lack of consensus regarding its potential benefits and harms.

A loud contingent of engineers claims to have created a new form of intelligence, rather than a powerful, pattern-matching autocomplete machine.

Meanwhile, more grounded critics argue that generative AI has the potential to disrupt entire sectors, from education and the creative arts to software coding.


Read more: AI art is everywhere right now. Even experts don't know what it will mean


This consultation is taking place in the context of an AI-focused bubble-like investment craze, even as a growing number of experts question its long-term reliability. These experts point to generative AI’s penchant for generating errors (or “hallucinations”) and its negative environmental impact.

Generative AI is poorly understood by policymakers, the public and experts themselves. Invitation-only consultations are not the way to set government policy in such an area.

CTV looks at the launch of OpenAI’s ChatGPT app.

Poor track record

Unfortunately, the federal government has developed bad public-consultation habits on digital-policy issues. The government’s 2018 “national consultations on digital and data transformation” were unduly limited to the economic effects of data collection, not its broader social consequences, and problematically excluded governmental use of data.


Read more: Why the public needs more say on data consultations


The generative AI consultation followed the government's broader effort to regulate AI through Bill C-27, the Digital Charter Implementation Act, which academics have sharply critiqued for its lack of effective consultation.

Even worse has been the government’s nominal consultations toward an online harms bill. On July 29, 2021 — again, in the depths of summer — the government released a discussion guide that presented Canadians with a legislative agenda, rather than surveying them about the issue and highlighting potential options.

At the time, we argued that the consultations narrowly conceptualized both the problem of online harms caused by social media companies and potential remedies.

Neither the proposal nor the faux consultations satisfied anyone, and the government withdrew its paper. However, the government’s response showed that it had failed to learn its lesson. Instead of engaging in public consultations, the government held a series of “roundtables” with — again — a number of hand-picked representatives of Canadian society.

Fixing mistakes

In 2018, we outlined practical lessons the Canadian government could draw from Brazil's highly successful digital-consultation process and its subsequent implementation of the country's 2014 Internet Bill of Rights.

First, as Brazil did, the government needs to properly define, or frame, the problem. This is not a straightforward task when it pertains to new, rapidly evolving technology like generative AI and large language models. But it is a necessary step toward setting the terms of the debate and educating Canadians.

It’s imperative that we understand how generative AI operates, where and how it obtains its data, how accurate and reliable it is, and, importantly, its possible benefits and risks.

Second, the government should only propose specific policies once the public and policymakers have a good grasp of the issue, and once the public has been canvassed on the benefits and challenges of generative AI. Instead of doing this, the government has led with its proposed outcome: voluntary regulation.

Crucially, throughout this process, industry organizations that operate these technologies should not, as they have been in these stakeholder consultations, be the primary actors shaping the parameters of regulation.

Government regulation is both legitimate and necessary to address issues like online harms, data protection and preserving Canadian culture. But the Canadian government’s deliberate hobbling of its consultation processes is hurting its regulatory agenda and its ability to give Canadians the regulatory framework we need.

The federal government needs to engage in substantive consultations to help Canadians understand and regulate artificial intelligence, and the digital sphere in general, in the public interest.


Blayne Haggart receives funding from the Social Sciences and Humanities Research Council of Canada. He is a Senior Fellow with the Centre for International Governance Innovation, and a Fellow at the Balsillie School of International Affairs at the University of Waterloo.

Natasha Tusikov receives funding from the Social Sciences and Humanities Research Council. She is a senior fellow with the Balsillie School of International Affairs in Waterloo and a fellow with the Justice and Technoscience (JusTech) Lab at the School of Regulation and Global Governance (RegNet) at the Australian National University.

This article was originally published on The Conversation. Read the original article.
