New UK report on Use of AI in Government contains exportable lessons

The UK House of Commons Public Accounts Committee has published a new report on ‘Use of AI in Government’ (2024-25, HC 356).

The report focuses on the specific situation in the UK and addresses issues closely related to the UK Government’s current ambitions to quickly roll out AI across the public sector.

However, most recommendations target general obstacles and pitfalls for AI deployment, acquisition, and assurance, and will thus be of interest in other countries.

The key conclusions of the report — which I would bet are largely applicable to most countries — include:

  • Out-of-date legacy technology and poor data quality and data-sharing are putting AI adoption in the public sector at risk.

  • Public trust is being jeopardised by slow progress on embedding transparency and establishing robust standards for AI adoption in the public sector.

  • There are persistent digital skills shortages in the public sector and current plans to address the skills gap may not be enough.

  • There is no systematic mechanism for bringing together learning from (failed) pilots and there are few examples of successful at-scale adoption across government.

  • There is a long way to go to strengthen the government’s approach to digital procurement to ensure value for money and a thriving AI supplier market.

  • Realising the benefits of AI across the public sector will require strong leadership.

The key recommendations in the report focus on the need to:

  • Deal with legacy technology and ICT systems before AI is overlaid on them.

  • Address the risks resulting from barriers to data-sharing and poor data quality.

  • Boost compliance with algorithmic transparency and disclosure requirements.

  • Strengthen spend controls for high-risk AI use cases to support safe and ethical roll-out.

  • Put effective plans in place to boost public sector digital skills sustainably.

  • Set up a mechanism for systematically gathering and disseminating intelligence on pilots and their evaluation.

  • Set out how government will identify common and scalable AI products and support their development and roll-out at scale.

  • Develop an effective procurement strategy that leverages buying power to the fullest extent possible.

  • Ensure those taking procurement decisions across government have access to the right digital skills and knowledge.

  • Develop effective governance, leadership and ownership within central government.

In a way, I am glad to see that these recommendations map directly onto the same areas of concern I have been highlighting in my recent research (eg here, here and here) and talks on these issues. The big question now is whether the (UK) Government will find ways to meaningfully address (and fund!) the changes required if AI readiness, in a real and practical sense, is to be brought closer to the aspirations surrounding public sector AI use.

An incomplete overview of (the promises of) GovTech: some thoughts on Engin & Treleaven (2019)

I have just read the interesting paper by Z Engin & P Treleaven, 'Algorithmic Government: Automating Public Services and Supporting Civil Servants in using Data Science Technologies' (2019) 62(3) The Computer Journal 448–460, https://doi.org/10.1093/comjnl/bxy082 (available on open access). The paper offers a very useful, but somewhat inaccurate and slightly incomplete, overview of data science automation being deployed by governments worldwide (ie GovTech), including the technologies of artificial intelligence (AI), the Internet of Things (IoT), big data, behavioural/predictive analytics, and blockchain. I found their taxonomy of GovTech services particularly thought-provoking.

Source: Engin & Treleaven (2019: 449).

In the eyes of a lawyer, the use of the word ‘Government’ to describe all these activities is odd, in particular concerning the category ‘Statutes and Compliance’ (at least as regards the ‘Statutes’ part). Moving past that conceptual issue—which reminds us once more of the need for more collaboration between computer scientists and social scientists, including lawyers—the taxonomy still seems difficult to square with an analysis of the use of GovTech for public procurement governance and practice. While some of its aspects could be subsumed as tools to ‘Support Civil Servants’ or under ‘National Public Records’, the transactional aspects of public procurement and the interaction with public contractors seem more difficult to place in this taxonomy (even if the category of ‘National Physical Infrastructure’ is considered). Therefore, either additional categories or more granularity would be needed to gain a more complete view of the types of interaction between technology and public sector activity (broadly defined).

The paper is also very limited regarding LawTech, as it primarily concentrates on online dispute resolution (ODR) mechanisms, which are only a relatively small aspect of the potential impact of data science automation on the practice of law. In that regard, I would recommend reading the (more complex, but very useful) book by K D Ashley, Artificial Intelligence and Legal Analytics. New Tools for Law Practice in the Digital Age (Cambridge, CUP, 2017).

I would thus recommend reading Engin & Treleaven (2019) with an open mind, and using it more as a collection of examples than a closed taxonomy.