The UK’s Digital Regulation Cooperation Forum (DRCF) has published a report on Transparency in the procurement of algorithmic systems (for short, the ‘AI procurement report’). Some of DRCF’s findings in the AI procurement report are astonishing, and should attract significant attention. The one finding that should definitely not go unnoticed is that, according to DRCF, ‘Buyers can lack the technical expertise to effectively scrutinise the [algorithmic systems] they are procuring, whilst vendors may limit the information they share with buyers’ (at 9). While this is not surprising, the ‘normality’ with which this finding is reported evidences the simple fact that, at least in the UK, it is accepted that the AI field is dominated by technology providers, that all institutional buyers are ‘AI consumers’, and that regulators do not seem to see a need to intervene to rebalance the situation.
The report is not specifically about public procurement of AI, but its content is relevant to assessing the conditions surrounding the acquisition of AI by the public sector. First, the report covers algorithmic systems other than AI—that is, automation based on simpler statistical techniques—but the issues it raises can only be more acute in relation to AI than in relation to simpler algorithmic systems (as the report itself highlights, at 9). Second, the report does not make explicit whether the mix of buyers from which it draws evidence includes public as well as private buyers. However, given the public sector’s digital skills gap, there is no reason to believe that the limited knowledge and asymmetries of information documented in the AI procurement report are less acute for public buyers than for private buyers.
Moreover, the AI procurement report goes as far as to suggest that public sector procurement of AI is in a somewhat better position than private sector procurement because there are multiple guidelines focusing on public procurement (notably, the Guidelines for AI procurement). Given the shortcomings in those guidelines (see here for earlier analysis), this can hardly provide any comfort.
The AI procurement report evidences that UK (public and private) buyers are procuring AI they do not understand and cannot adequately monitor. This is extremely worrying. The AI procurement report presents evidence gathered by DRCF in two workshops with 23 vendors and buyers of algorithmic systems in Autumn 2022. The evidence base is qualitative and draws on a limited sample, so it may need to be approached with caution. However, its findings are sufficiently worrying as to require a much more robust policy intervention than the proposals in the recently released White Paper ‘AI regulation: a pro-innovation approach’ (for discussion, see here). In this blog post, I summarise the findings of the AI procurement report I find most problematic and link this evidence to the failing attempt at using public procurement to regulate the acquisition of AI by the public sector in the UK.
Misinformed buyers with limited knowledge and no ability to oversee
In its report, DRCF stresses that ‘some buyers lacked understanding of [algorithmic systems] and could struggle to recognise where an algorithmic process had been integrated into a system they were procuring’, and that ‘[t]his issue may be compounded where vendors fail to note that a solution includes AI or its subset, [machine learning]’ (at 9). The report goes on to stress that ‘[w]here buyers have insufficient information about the development or testing of an [algorithmic system], there is a risk that buyers could be deploying an [algorithmic system] that is unlawful or unethical. This risk is particularly acute for high-risk applications of [algorithmic systems], for example where an [algorithmic system] determines a person's access to employment or housing or where the application is in a highly regulated sector such as finance’ (at 10). Needless to say, however, this applies to a much larger set of public sector areas of activity, and the problems are not limited to high-risk applications involving individual rights, but extend to applications that involve high stakes from a public governance perspective.
Similarly, DRCF stresses that while ‘vendors use a range of performance metrics and testing methods … without appropriate technical expertise or scrutiny, these metrics may give buyers an incomplete picture of the effectiveness of an [algorithmic system]’; ‘vendors [can] share performance metrics that overstate the effectiveness of their [algorithmic system], whilst omitting other metrics which indicate lower effectiveness in other areas. Some vendors raised concerns that their competitors choose the most favourable (i.e., the highest) performance metric to win procurement contracts’, while ‘not all buyers may have the technical knowledge to understand which performance metrics are most relevant to their procurement decision’ (at 10). This demolishes any hope that buyers facing this type of knowledge gap and asymmetry of information can compare algorithmic systems in a meaningful way.
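The mechanics of this sort of metric cherry-picking are easy to illustrate. The short sketch below is a hypothetical example of my own (it is not drawn from the DRCF report): two fictional vendors are evaluated on the same imbalanced test set, and the vendor quoting the higher headline accuracy is in fact the less effective system, something only visible to a buyer who knows to ask for recall and precision as well.

```python
# Hypothetical illustration of metric cherry-picking (my own example,
# not drawn from the DRCF report): two fictional screening systems
# evaluated on the same imbalanced test set.

def confusion(y_true, y_pred):
    # Count true/false positives and negatives against ground truth.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def report(name, y_true, y_pred):
    tp, tn, fp, fn = confusion(y_true, y_pred)
    accuracy = (tp + tn) / len(y_true)
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # share of real positives caught
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # share of flags that are correct
    print(f"{name}: accuracy={accuracy:.0%}, recall={recall:.0%}, precision={precision:.0%}")

# 1,000 cases, of which only 50 are genuine positives (e.g. applications
# that should be flagged for further review).
y_true = [1] * 50 + [0] * 950

# 'Vendor A' never flags anything: a useless system with 95% accuracy.
vendor_a = [0] * 1000

# 'Vendor B' catches 40 of the 50 positives at the cost of 60 false
# alarms: lower headline accuracy, but far more effective in practice.
vendor_b = [1] * 40 + [0] * 10 + [1] * 60 + [0] * 890

report("Vendor A", y_true, vendor_a)  # accuracy=95%, recall=0%, precision=0%
report("Vendor B", y_true, vendor_b)  # accuracy=93%, recall=80%, precision=40%
```

Without common reporting standards, nothing stops the first vendor from quoting only its 95% accuracy figure, which is precisely the comparison problem the report describes.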
The issue is further compounded by the lack of standards and metrics. The report stresses this issue: ‘common or standard metrics do not yet exist within industry for the evaluation of [algorithmic systems]. For vendors, this can make it more challenging to provide useful information, and for buyers, this lack of consistency can make it difficult to compare different [algorithmic systems]. Buyers also told us that they would find more detail on the performance of the [algorithmic system] being procured helpful - including across a range of metrics. The development of more consistent performance metrics could also help regulators to better understand how accurate an [algorithmic system] is in a specific context’ (at 11).
Finally, the report also stresses that vendors have every incentive to withhold information from buyers, both because ‘sharing too much technical detail or knowledge could allow buyers to re-develop their product’ and because ‘they remain concerned about revealing commercially sensitive information to buyers’ (at 10). In that context, given the limited knowledge and understanding documented above, it can even be difficult for a buyer to ascertain which information it has not been given.
The DRCF AI procurement report then focuses on mechanisms that could alleviate some of the issues it identifies, such as standardisation, certification and audit mechanisms, as well as AI transparency registers. However, these mechanisms raise significant questions, not only in relation to their practical implementation, but also regarding the continued reliance on the AI industry (and thus, AI vendors) for the development of some of their foundational elements—crucially, standards and metrics. To a large extent, the AI industry would be setting the benchmark against which its own processes, practices and performance are to be measured. Even if a third party were to carry out such benchmarking or compliance analysis in the context of AI audits, the cards could already be stacked against buyers.
Not the way forward for the public sector (in the UK)
The DRCF AI procurement report should give pause to anyone hoping that (public) buyers can drive the process of development and adoption of these technologies. The AI procurement report clearly evidences that buyers with knowledge disadvantages and information asymmetries are at the mercy of technology providers—and/or third-party certifiers (in the future). The evidence in the report clearly suggests that this is a process driven by technology providers and, more worryingly, that (most) buyers are in no position to critically assess and discipline vendor behaviour.
The question arises why any buyer would acquire and deploy a technology it does not understand and is in no position to adequately assess. But the hype and hard-selling surrounding AI, coupled with its abstract potential to generate significant administrative and operational advantages, seem too hard to resist, both for private sector entities seeking to gain an edge (or at least not to lag behind competitors) in their markets, and for public sector entities faced with AI’s policy irresistibility.
In the public procurement context, the insights from DRCF’s AI procurement report stress that the fundamental imbalance between buyers and vendors of digital technologies undermines the regulatory role that public procurement is expected to play. Only a buyer with equal or superior technical ability, and one that managed to force full disclosure of the relevant information from the technology provider, would be in a position to (try to) dictate the terms of the acquisition and deployment of the technology, including through the critical assessment and, if needed, modification of emerging technical standards that could well fall short of the public interest embedded in the process of public sector digitalisation—and even then it would face significant limitations.
This is an ideal to which most public buyers cannot aspire. In fact, in the UK, the position is the reverse: the current approach seeks to facilitate experimentation with digital technologies by public buyers with no knowledge or digital capability whatsoever—see the Crown Commercial Service’s Artificial Intelligence Dynamic Purchasing System (CCS AI DPS), which explicitly targets inexperienced and, to put it politely, digitally novice public buyers by stressing that ‘If you are new to AI you will be able to procure services through a discovery phase, to get an understanding of AI and how it can benefit your organisation’.
Given the evidence in the DRCF AI procurement report, this approach can only inflate the number of public sector buyers at the mercy of technology providers, especially because, while the CCS AI DPS tries to address some issues, such as ethical risks (though the effectiveness of this can also be queried), it makes clear that ‘quality, price and cultural fit (including social value) can be assessed based on individual customer requirements’. With ‘AI quality’ capturing all the problematic issues mentioned above (and, notably, AI performance), the CCS AI DPS is highly problematic.
If nothing else, the DRCF AI procurement report lends further support to the need to change regulatory tack. Most importantly, the report evidences that there is a very real risk that public sector entities are currently buying AI they do not understand and are in no position to effectively control post-deployment. This risk needs to be addressed if the UK public is to trust the accelerating process of public sector digitalisation. As formulated elsewhere, this calls for a series of policy and regulatory interventions.
Ensuring that the adoption of AI in the public sector operates in the public interest and for the benefit of all citizens requires new legislation supported by a new mechanism of external oversight and enforcement. New legislation is required to impose specific minimum requirements across the public sector, eg on data governance, algorithmic impact assessment and related transparency, and to address the lack of standards and metrics without relying on their development by and within the AI industry. Primary legislation would need to be supplemented by statutory guidance of a much more detailed and actionable nature than eg the current Guidelines for AI procurement. These detailed requirements could then be embedded into public contracts by reference, thus protecting public buyers from vendors’ standard cherry-picking and providing a clear benchmark against which to assess tenders.
Legislation would also be necessary to create an independent authority—eg an ‘AI in the Public Sector Authority’ (AIPSA)—with powers to enforce those minimum requirements across the public sector. AIPSA is necessary, as oversight of the use of AI in the public sector does not currently fall within the scope of any specific sectoral regulator and the general regulators (such as the Information Commissioner’s Office) lack procurement-specific knowledge. Moreover, units within Cabinet Office (such as the Office for AI or the Central Digital and Data Office) lack the required independence. The primary role of AIPSA would be to constrain the process of adoption of AI by the public sector, especially where the public buyer lacks digital capacity and is thus at risk of capture or overpowering by technological vendors.
In that regard, and until sufficient in-house capability is built to ensure adequate understanding of the technologies being procured (especially in the case of complex AI) and an adequate ability to manage digital procurement governance requirements independently, AIPSA would have to approve all projects to develop, procure and deploy AI in the public sector, to ensure that they meet the required legislative safeguards in terms of data governance, impact assessment, etc. This approach could progressively be relaxed, eg through block exemption mechanisms, once there is sufficiently detailed understanding of and guidance on specific AI use cases, and/or in relation to public sector entities that could demonstrate sufficient in-house capability, eg through a mechanism of independent certification against benchmarks set by AIPSA, or certification by AIPSA itself.
In parallel, it would also be necessary for the Government to develop a clear and sustainably funded strategy to build in-house capability in the public sector, including clear policies on the minimisation of expenditure directed at the engagement of external consultants and the development of guidance on how to ensure the capture and retention of the knowledge developed within outsourced projects (including, but not only, through detailed technical documentation).
None of this features in the recently released White Paper ‘AI regulation: a pro-innovation approach’. However, DRCF’s AI procurement report further evidences that these policy interventions are necessary. Otherwise, the UK will be a jurisdiction where the public sector acquires and deploys technology it does not understand and cannot control. Surely, this is not the way to go.