Between fancy global summits, OpenAI’s boardroom drama and rumoured technical breakthroughs, the world has recently been paying close attention to the frontiers of AI research. But last month the White House Office of Management and Budget (OMB) in the US released a memo on the use of more mundane AI systems in government that is likely, in the near term at least, to be equally consequential.
From its use in tracking undocumented migrants to the predictive algorithms police departments deploy to surveil populations and allocate resources, AI is now a common tool the US government uses to cut costs – but at the expense of subjecting society’s most vulnerable to arbitrary rule without due process, and with predictably discriminatory outcomes. Researchers, journalists and activists have been calling attention to this for years. That call is at last being answered.
The OMB is the largest unit in the US president’s executive office. Though little discussed, it is extremely powerful. It oversees other government agencies, ensuring their actions are aligned with the president’s programme. And last month, the OMB director, Shalanda Young, released a memo that could revolutionise the US government’s use of AI.
The proposed policy – it is still a draft, and could be watered down – would require each department to appoint a chief AI officer and have them provide a register of existing AI use cases. This alone is a significant win for transparency. But in addition, the officer must identify systems that could affect people’s safety or rights; those systems are then subject to further meaningful constraints.
There are new requirements to weigh AI systems’ risks to rights and safety against their claimed benefits. Agencies will also have to verify the quality of the data they use, and to monitor systems more closely once they have been deployed. Crucially, those affected by AI systems are to receive plain-language explanations of how AI is being used, and the opportunity to contest AI decisions.
These might all seem like obvious steps, but they are not currently being taken, and the result is more harm to the most vulnerable in society.
The policy would also compel departments to actively ensure that AI systems advance equity, dignity and fairness in how they are deployed, calling attention to the unjust biases inherent in models and the unrepresentative data on which they are often trained. Government departments must also consult affected groups before deploying such systems, and provide options for human consideration and remedy, allowing people to contest decisions that adversely affect them, instead of being subject to Kafkaesque “algorithmic blind spots”.
Impressively, the memo even addresses the deeply unsexy but important question of government procurement of AI. So many societal problems with AI start with inexperienced government agencies adopting new software that they don’t adequately understand, which is oversold by its vendor, and which ultimately fails in ways that affect the worst off most severely. Describing and requiring best practices for procurement of AI systems is one of the most significant things government departments can do right now.
The OMB memo is a case study in research and civil society-led policymaking. Current attempts to regulate frontier AI models (which can perform a wide range of tasks, including language and image processing), especially in the EU, could learn something here. In early 2023, surprised by the popularity of ChatGPT, the EU parliament attempted to bolt on regulations for frontier AI systems to its (already flawed) AI Act. In the trilogue negotiations now taking place – where the parliament, the European Commission and the European Council try to reconcile their different proposals for the act – France and Germany recently pushed back, as they realised the possible implications of these proposals for their fast-rising domestic AI companies (Mistral and Aleph Alpha respectively).
This disarray was predictable. GPT-4, the most capable frontier AI model, had barely been released when the first regulatory proposals were brought forward. Regulating a field that is seeing frequent research breakthroughs is hard. The ecosystem for deploying these systems is also fast-changing. Will AI companies operate like platforms, tending towards monopoly or duopoly power, or will there be a robust competitive market for frontier AI models? We don’t yet know. There hasn’t been time for public interest research to offer balanced policy proposals, or for a well-grounded and robust civil society debate to take place.
Some ideas for regulating frontier systems are no-brainers – the EU and others should clearly require far more transparency from the leading AI labs, especially with respect to any dangerous capabilities revealed by the next generation of AI systems. But beyond this, the wisest course may be not to rush things, and to foster the kind of civil society debate and in-depth research that grounds more mature policy, such as the OMB memo.
Seth Lazar is a professor of philosophy at the Australian National University and a distinguished research fellow at the Oxford Institute for Ethics in AI