May 7, 2021

Oxford Researchers Propose Framework for Governmental AI Projects

The increasing accessibility of AI is driving adoption not just by enterprises but also by governments, with applications ranging from urban planning to criminal justice. With greater governmental use of AI come greater stakes: AI now influences more and more of the average person's daily life beyond their computers and phones. With those stakes come functional growing pains, as well as ethical concerns. Now, researchers from the Oxford Commission on AI and Good Governance (OxCAIGG) at the University of Oxford have released a study providing “practical guidance for government officials responsible for designing and delivering AI projects.”

“Governments around the world are launching projects that embed AI in the delivery of public services,” said Godofredo Ramizo Jr., a PhD student at Oxford and the lead author of the study. “These range from AI-driven management of internal systems to smart city solutions for urban problems. Yet many of these projects fail due to lack of financial resources, poor oversight or knowledge gaps. We believe there is a clear need for a succinct framework that will help government decision-makers navigate the complexities of AI projects, avoid pitfalls and uphold the public good.”

The study is based on an extensive literature review of 103 sources and in-depth interviews with three government officials and AI practitioners from Hong Kong, Malaysia, and Singapore. It introduces a framework for government AI projects, dividing those projects into “aspirant,” “adventurer,” “reformer,” and “steward” projects. These divisions are drawn on two axes: the amount of resources available to the projects and the importance of the issues addressed by the projects. Reformer projects tackle a core issue and are equipped with substantial resources – for instance, South Korea’s transformation of reclaimed land into a smart city. Meanwhile, steward projects – such as Hong Kong’s regularly updated urban mobility datasets – tackle non-critical areas using sufficient resources. Aspirant projects like the Philippines’ New Clark Smart City are ambitious, but lack the resources for internal follow-through. Finally, adventurer projects, equipped with few resources and not directed at a core problem, represent the lowest-risk proposition.
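To make the two-axis scheme concrete, here is a minimal sketch of the classification in Python. The study itself provides no code; the binary core/non-core and resourced/under-resourced axes, and all names below, are illustrative assumptions.

```python
from enum import Enum

class ProjectType(Enum):
    REFORMER = "reformer"      # core issue, substantial resources
    STEWARD = "steward"        # non-critical issue, sufficient resources
    ASPIRANT = "aspirant"      # core issue, insufficient resources
    ADVENTURER = "adventurer"  # non-critical issue, few resources

def classify(core_issue: bool, well_resourced: bool) -> ProjectType:
    """Map a project's position on the study's two axes to a category."""
    if core_issue:
        return ProjectType.REFORMER if well_resourced else ProjectType.ASPIRANT
    return ProjectType.STEWARD if well_resourced else ProjectType.ADVENTURER

# The article's examples, placed on the two axes:
print(classify(core_issue=True, well_resourced=True))   # REFORMER: South Korea's smart city
print(classify(core_issue=False, well_resourced=True))  # STEWARD: Hong Kong's mobility datasets
print(classify(core_issue=True, well_resourced=False))  # ASPIRANT: New Clark Smart City
```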

Using this system, the researchers worked out five principles aimed at helping governments manage these diverse projects while minimizing risk and protecting the public interest.


  1. Determine appropriate solutions

“AI and the costly infrastructure involved,” the researchers write, “are not always the optimal solution to governance problems. Governments must carefully assess whether creative, non-AI solutions will meet the set objectives with less complexity and cost.” By way of example, they point to a project in Jakarta that used non-AI Twitter analytics to model flood impact. Beyond the AI vs. non-AI decision, the researchers highlight that project leaders should consult the public and other stakeholders to identify the full range of appropriate tools.

  2. Include a multi-step assessment process

“[A] detailed feasibility study of any proposed AI solution must be undertaken by in-house experts who can make assessments with minimal reliance on external expertise, which might not always provide objective advice,” the study reads. This, the researchers say, includes conducting a pilot study and developing rigorous assessment criteria.

  3. Strengthen the government’s bargaining position

“AI projects almost always require governments to strike deals with external partners and vendors of AI technologies,” the researchers advise. “Governments must strengthen their bargaining position to better uphold the public interest.” This covers a wide variety of factors, including stipulating that contracted companies provide interoperable solutions (rather than “walled garden” solutions) and, where possible, working through bulk tenders rather than piecemeal agreements.

  4. Ensure sustainability

“Without a sufficient stream of home-grown talent,” the authors say, “government-led AI projects are not sustainable. Having technical experts allows governments to properly assess the feasibility of technological solutions and competently negotiate with technology companies.” This human capital is so important, they explain, that the Malaysian government implemented policies to grow the country’s pool of data scientists from 100 in 2014 to 14,000 in 2020. Beyond technical talent, project leaders should also be wary of overambitious projects that require snowballing investments, and should maintain failsafe non-AI alternatives.

  5. Manage data, cybersecurity, and confidentiality

“[Governments] must mitigate the risk of inadvertently leaking confidential information,” the researchers write. “This includes not only the government’s intellectual property and the personal information of citizens, but also the government’s strategic intentions, information with relevance to national security, and confidential management techniques.” The researchers recommend techniques such as anonymizing data, removing personal identifiers, and encrypting sensitive information. They also recommend building public trust through openness about data-sharing and security policies.
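For illustration only (the report itself stays at the level of principles), a minimal pseudonymization sketch along these lines might drop personal fields and replace direct identifiers with salted hashes; the record fields and helper below are hypothetical.

```python
import hashlib
import secrets

# A per-dataset secret salt makes the hashes hard to reverse by brute force.
SALT = secrets.token_hex(16)

def pseudonymize(record: dict, id_field: str, drop_fields: set) -> dict:
    """Drop personal fields and replace the direct identifier with a salted hash."""
    out = {k: v for k, v in record.items() if k not in drop_fields}
    out[id_field] = hashlib.sha256((SALT + str(record[id_field])).encode()).hexdigest()
    return out

# Hypothetical record from an urban-mobility dataset:
citizen = {"national_id": "A1234567", "name": "Jane Doe", "district": "Central", "trips": 42}
print(pseudonymize(citizen, id_field="national_id", drop_fields={"name"}))
# {'national_id': '<64-char hex digest>', 'district': 'Central', 'trips': 42}
```

In practice a step like this would sit alongside encryption in transit and at rest, and access controls, rather than replace them.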


“In our study, we have shown how certain practical principles of good governance can be deployed to mitigate the risks or pursue the advantages inherent in different types of AI projects,” Ramizo concluded. “By following this approach, we hope that government officials will benefit from a greater awareness of the risks, opportunities and strategies suitable for their particular project. Ultimately, we hope our study serves to contribute to a future where government-led AI projects indeed serve the public good.”

The full report is titled “Practical Lessons for Government AI Projects.” It is the third such report from OxCAIGG, which has also released two earlier reports advising government officials on the use of AI: “Four Principles on Integrating AI & Good Governance” and “Global Attitudes Towards AI, Machine Learning & Automated Decision Making.”
