Cities Take the Lead in Setting Rules Around How AI Is Used

Amsterdam has a website that documents how the city government uses algorithms to deliver services. (Photo: Reuters)

As cities and states roll out algorithms to help them provide services like policing and traffic management, they are also racing to come up with policies for using this new technology.

AI, at its worst, can disadvantage already marginalized groups, adding to human-driven bias in hiring, policing and other areas. And its decisions are often opaque, making it difficult to tell how to fix that bias and other problems.

The Wall Street Journal discussed calls for regulation of AI, or at least greater transparency about how the systems work, with three experts.

Cities are looking at a number of solutions to these problems. Some require disclosure when an AI model is used in decisions, while others mandate audits of algorithms, track where AI causes harm or seek public input before putting new AI systems in place.

It will take time for cities and local bureaucracies to build expertise in these areas and figure out how to craft the best regulations, says Joanna Bryson, a professor of ethics and technology at the Hertie School in Berlin.

But such efforts could provide a model for other cities, and even nations that are trying to craft standards of their own, she says. "People tend to notice what works and then try to shift efforts there."

Here are some ways cities are redefining how AI will work within their borders and beyond.

Explaining the algorithms: Amsterdam and Helsinki

One of the biggest complaints against AI is that it makes decisions that can't be explained, which fuels suspicions of arbitrary or even biased results.

To let their citizens know more about the technology already in use in their cities, Amsterdam and Helsinki collaborated on websites that document how each city government uses algorithms to deliver services.

The registries include information on the data sets used to train each algorithm, a description of how it is used, how public servants use the results, the human oversight involved and how the city checks the technology for problems like bias.
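
Conceptually, each registry entry bundles those disclosures into one public record. As a rough sketch of what such an entry might contain -- the field names below are hypothetical, not the cities' actual schema:

```python
from dataclasses import dataclass

@dataclass
class AlgorithmRegistryEntry:
    """One public record describing a city algorithm (hypothetical schema)."""
    name: str                     # e.g. "Automated parking control"
    department: str               # city department that deploys it
    purpose: str                  # plain-language description of what it does
    training_datasets: list[str]  # data sets used to train the algorithm
    how_results_are_used: str     # how public servants act on the output
    human_oversight: str          # who reviews or can override decisions
    bias_checks: str              # how the city tests for bias and errors
    contact: str                  # responsible person's name and contact info
```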

Amsterdam has six algorithms fully explained -- with a goal of 50 to 100 -- on the registry website, including how the city's automated parking-control and trash-complaint reports work. Helsinki, which is only focusing on the city's most advanced algorithms, also has six listed on its site, with another 10 to 20 left to put up.

"We needed to assess the risk ourselves," says Linda van de Fliert, an adviser at Amsterdam's Chief Technology Office. "And we wanted to show the world that it is possible to be transparent."

The registries don't give citizens personalized information explaining their individual bills or fees. But they provide citizens with a way to give feedback on algorithms, and the name, city department and contact information of the person responsible for the deployment of a particular algorithm.

So far, at least one Amsterdam man, displeased at receiving an automated text about an overdue electricity bill, has used the registry to find out why the government contacted him.

Ms. van de Fliert has lost count of how many cities have reached out to learn more about the registry, and says she hopes that others pick up the project.

"It doesn't make sense to do this just for Amsterdam and Helsinki," she says. "We all have the same needs."

Auditing the AI: New York

Some cities are looking at ways to remove potential bias from algorithms.

In January, the New York City Council passed a law -- to go into effect in 2023 -- covering companies that sell AI software that screens potential employees. The businesses must obtain audits to ensure they don't discriminate against job candidates on the basis of race, sex or national origin.

The new rule also requires companies using AI for hiring or promotion decisions to disclose its use to job seekers and employees.
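
The law doesn't prescribe an audit method, but one widely used screen in employment-discrimination analysis is the four-fifths (80%) rule, which flags any group whose selection rate falls below 80% of the highest group's rate. A minimal sketch of that check, using made-up data rather than any real screening tool's output:

```python
from collections import defaultdict

def selection_rates(applicants):
    """Compute each group's selection rate (hired / applied)."""
    applied = defaultdict(int)
    hired = defaultdict(int)
    for group, was_hired in applicants:
        applied[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / applied[g] for g in applied}

def four_fifths_check(applicants):
    """Flag groups whose selection rate is below 80% of the highest rate."""
    rates = selection_rates(applicants)
    best = max(rates.values())
    return {g: (rate, rate >= 0.8 * best) for g, rate in rates.items()}

# Illustrative (group, hired?) pairs -- not data from any real tool.
screened = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
            ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
for group, (rate, passes) in four_fifths_check(screened).items():
    print(f"group {group}: rate={rate:.2f}, passes four-fifths rule: {passes}")
```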

"Hiring is a really high-stakes domain," says Julia Stoyanovich, an associate professor of computer science and engineering at New York University and the director of the NYU Tandon Center for Responsible AI, who consulted on the regulation. "And we are using a lot of tools without any oversight."

The New York bill isn't exhaustive, says Dr. Stoyanovich -- for one thing, it doesn't detail what constitutes an audit.

She suggests making the AI display something like nutritional labels on food, with the data points used in the hiring decision broken down like nutrients and ingredients.
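
As one illustration of that idea, a label could list each input factor and its relative weight in a decision, assuming the model exposes such weights -- the function and figures below are hypothetical:

```python
def nutrition_label(feature_weights):
    """Print a 'nutrition label' for a model: each input factor and its
    share of influence, sorted like ingredients by amount."""
    total = sum(abs(w) for w in feature_weights.values())
    print("AI hiring tool -- decision ingredients")
    for name, w in sorted(feature_weights.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name:<22} {abs(w) / total:6.1%}")

# Illustrative weights only -- not from any real screening product.
nutrition_label({
    "years_of_experience": 0.45,
    "skills_test_score": 0.30,
    "education_level": 0.15,
    "resume_keyword_match": 0.10,
})
```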

Dr. Stoyanovich says ensuring that the audits are helpful to the public will be the next challenge.

"We want to be careful about how these audits are done, who does them, and what they contain," she says. "Companies will want to do less rather than more."

Giving communities more power: Santa Clara County

Another effort to cut down on bias is giving communities a say in how law enforcement uses AI.

Working with the American Civil Liberties Union, California's Santa Clara County passed a law mandating community control over police surveillance (or CCOPS) in 2017.

The law requires any agency within Santa Clara County's jurisdiction that wants to use surveillance technology to submit it for public input at an open Board of Supervisors meeting.

The agency must present a policy detailing how the technology would be used, including how any data collected would be stored or shared. If the Board of Supervisors approves the purchase, the agency is responsible for a yearly impact report to prove the technology meets agreed-upon specifications.

Since the Santa Clara law passed, the Board of Supervisors has approved the use of roughly 100 technologies. The one exception was a proposal involving facial-recognition technology, rejected because of concerns including the potential for false positives.

"I'm a tech enthusiast," says Joe Simitian, a member of the county's Board of Supervisors. "But there was significant potential for this to be abused without a robust set of policies."

There are now 22 cities with a version of CCOPS on the books, covering 17.7 million people, according to Chad Marlow, a senior advocacy and policy counsel at the ACLU who oversees the CCOPS effort.

Community-control laws cover all sorts of police surveillance, not just AI, but several cities' laws explicitly address or ban facial-recognition technology. Each city ends up tweaking the law to fit its specific needs.

Cooperating with other cities: Amsterdam, Barcelona, London

Amsterdam, Barcelona and London are pushing an effort to educate other cities on best practices for deploying AI systems effectively and ethically: the Global Observatory of Urban AI.

"We want to become a knowledge source for both cities and researchers," says Laia Bonet, Barcelona's deputy mayor for digital transitions, mobility and international relations.

The three cities agreed on five principles -- fairness and nondiscrimination, transparency and openness, safety and cybersecurity, privacy protection, and sustainability -- that lawmakers need to consider when procuring or building AI systems.

To show how those principles look in practice, the Observatory plans to put out research this year, including an atlas of best practices for AI already in place in cities around the world. The atlas will include Amsterdam's guidelines on what cities should demand from private AI providers and Barcelona's approach to building recommendation systems for social services.

Other papers will explore how the technology was deployed and how cities have navigated the relationship between the public and private sectors.

These principles and papers are meant to help cities develop their own standards around all AI applications. Even with this collaboration, there are different approaches in the Observatory's founding cities.

London, for example, supports using facial-recognition technology in some cases, while Barcelona and Amsterdam don't.

Ms. Bonet says the cities agree on how important their goals are and that sharing information can create better AI across the world.

"We have tried to ensure that every step we do is a step toward a just transition," she says.
