360info
Technology
Merih Angin and Jack Loveridge, Koç University

Developing economies risk exclusion as 'age of AI' dawns

It’s estimated that artificial intelligence (AI) will add as much as US$15.7 trillion to the global economy by 2030.

If current trends continue, much of this new wealth will be owned and controlled by corporations and individuals based in China and the United States, and by the national governments that represent them. But such technological dominance by great powers undermines AI’s positive potential for the majority of the world’s population, particularly in developing economies.

The US and China account for more than 94 percent of funding for AI startups over the past five years, and half of the world’s hyperscale data centres. The two countries possess roughly 90 percent of the market capitalisation of the world’s 70 largest digital platforms, controlling a large proportion of cross-border data flows.

Along with their allies, the nations that own and control AI platforms and the data that powers them stand to dominate the global economy for decades to come. Experts in the field are also mostly from developed economies. They enjoy disproportionate representation in the industry bodies that develop the standards and technical protocols shaping international AI regulation, often at the expense of the differing needs of developing economies.

More than 160 AI ethics and governance frameworks have been developed so far by policymakers, think tanks, and activists. Still, there are no platforms to coordinate these initiatives, nor measures to ensure national governments align AI regulations and norms across international boundaries.

The growing divide has implications for developing economies marginalised by the emerging AI sector.

Establishing a global database to track and monitor emerging AI legislation and regulation would allow approaches and debates to be captured and compared, particularly those from developing economies. The OECD's Artificial Intelligence Policy Observatory, a platform for policy discussions on AI, is a promising start, but it can be built upon.

A recently released report from a working group convened by the Paris Peace Forum says an open, international dialogue on equitable AI governance could help set up global regulations. These would consider human rights and equal opportunities relevant to the needs of developing economies, and address rapidly increasing socioeconomic inequality, the challenge of achieving sustainable development alongside robust economic growth, and the enduring structures of colonialism.

Such a dialogue could work toward a set of universal AI principles developed through a transparent, informed, and widely recognised international process. These principles could serve as a reference point for policies and legislation across national contexts and eventually translate into enforceable standards.

For example, it would be sensible for governments in developing economies to ensure corporate accountability when they procure AI-based services. Compulsory social-impact and risk assessments for any AI services offered by foreign corporations are one solution.

Such approaches, including mandatory source code disclosures, can motivate compliance with domestic laws and protect rights while discouraging market abuses. When source code is accessible to the public — and particularly to vigilant developers — platform owners are less likely to support designs that permit or profit from illegal activities. 

Governments of developing economies can remedy the widening imbalance between data providers and data collectors by creating incentives for foreign tech companies to invest in domestic research and development facilities to amplify local AI capabilities.

It is also important to deter ‘brain drain’, where top experts leave their home countries to pursue international opportunities, by offering incentives such as innovation and R&D funding to retain and further develop domestic talent. In an emerging AI economy, such an exodus may prove particularly detrimental, exacerbating the financial imbalance between developed and developing economies.

The potential benefits of AI are plentiful, but mitigating its potential harms is crucial. An international dialogue, focused on results, can help ensure an equitable distribution of AI technologies.

Merih Angin is an Assistant Professor of International Relations & the Director of MA-Computational Social Sciences Lab at Koç University. She works in the areas of international development, computational social sciences and artificial intelligence governance. Dr. Angin co-chaired the ‘Initiate: Digital Rights in Society Algorithmic Governance’ working group, convened by the Paris Peace Forum.

Jack Loveridge is a Research Associate at Koç University’s Center for Globalization, Peace, and Democratic Governance (GLODEM). He is also a co-founder of Initiate: Digital Rights in Society and Senior Policy Adviser to the Paris Peace Forum on algorithmic governance issues. Dr. Loveridge co-chaired the ‘Initiate: Digital Rights in Society Algorithmic Governance’ working group, convened by the Paris Peace Forum.

The Working Group was supported by a grant from Luminate. The authors declare no conflict of interest.

Originally published under Creative Commons by 360info™.
