Testing the limits of ChatGPT’s procurement knowledge (and stubbornness) – guest post by Džeina Gaile

Following up on the discussion of whether public sector use of ChatGPT should be banned, in this post Džeina Gaile* shares an interesting (and at points unnerving) personal experiment with the tool. Džeina asked a few easy questions on the topic of her PhD research (tender clarifications).

The answers – and the ‘hallucinations’, that is, the glaring mistakes – and the tone are worth paying attention to. I find the bit of the conversation on the very existence of Article 56 and the content of Article 56(3) Directive 2014/24/EU particularly (ahem) illuminating. Happy reading!

PS. If you take Džeina up on her provocation and run your own procurement experiment on ChatGPT (or equivalent), I will be delighted to publish it here as well.

Liar, liar, pants on fire – what ChatGPT did not teach me
about my own PhD research topic

DISCLAIMER: The views provided here are just the result of an experiment by a random procurement expert who is not a specialist in IT law or any other AI-related field of law.

If we consider law a form of art, then, as lawyers, words are our main instrument. Therefore, we have a special respect for language, as well as for the facts that our words represent. We know the liability that comes with using the wrong words. One problem with ChatGPT is that it doesn't.

This brings us to an experiment that could be performed by anyone with at least basic knowledge of the internet and some in-depth knowledge of a specific field, or at least an idea of information that could be checked on the web. What can you do? Ask ChatGPT (or an equivalent) some questions you already know the answers to. It would be nice if the (expected) answers include some facts, numbers, or people you can find on Google. Just remember to double-check everything. And see how it goes.

My experiment was performed on May 3rd, 4th and 17th, 2023, mostly in the midst of yet another evening spent trying to do something PhD related. (As you may know, student status upgrades your procrastination skills to a level you never knew existed, whatever your age. That is how this article came about.)

I asked ChatGPT a few questions on my research topic for fun and possible insights. At the end of this article, you can see quite long excerpts from our conversation, where you will find that you can sometimes get the right information (after being very persuasive with your questions!), but not always, as in the case of the May 4th and 17th interactions. And you can get a great many apologies along the way (if you are into that).[1]

However, such persuasion ought not to be necessary if the information is relatively easy to find, since, well, we have all used Google and it already knows how to find things. You could also call the answers given on May 4th and 17th misleading, or even pure lies. This, consequently, casts doubt on any information provided by this tool (at least at this moment), if we follow the human logic that simpler things (such as finding the right article or paragraph in a law) are easier done than complex things (such as giving an opinion on difficult legal issues). As can be seen from the chat, we don't even know what ChatGPT's true sources are, or how it actually works when it tells you something that is not true (while still presenting it as fact).

Maybe some magic words like "as far as I know" or "prima facie" in the answers could have made me more empathetic toward my chatty friend. The total certainty with which the information is provided also gives further reason for concern. What if I am a normal human being and don't know the real answer, have forgotten or not noticed the disclaimer at the bottom of the chat (as happens with small-print texts), or don't have the persistence to check the info? I may include the answers in my homework, essay, or even in my views on an issue at work—since, as you know, we are short of time and need everything done by yesterday. The path of least resistance is one of the most tempting. (And in the case of AI we should be aware of a thing inherent to humans called "anthropomorphizing", i.e., attributing human form or personality to things not human, so we might trust something a bit more, or more easily, than we should.)

The reliability of the information provided by State institutions, as well as by lawyers, has been one of the cornerstones of people's belief in the justice system. Therefore, it could be concluded that either I had bad luck, or one should be very careful when introducing AI in state institutions. Such use should be limited to cases where only factual information is provided (with the possibility to see and check the sources) until the credibility of AI opinions can be reviewed and verified. For the moment, you should believe the disclaimers of its creators, use AI resources with quite (legitimate) mistrust, and treat the tool somewhat like a child that has done something wrong but will not admit it, no matter how long you interrogate it. And don't take it for something it is not, even if it sounds like you should listen to it.**

May 3rd, 2023

[Reminder: Article 56(3) of the Directive 2014/24/EU: Where information or documentation to be submitted by economic operators is or appears to be incomplete or erroneous or where specific documents are missing, contracting authorities may, unless otherwise provided by the national law implementing this Directive, request the economic operators concerned to submit, supplement, clarify or complete the relevant information or documentation within an appropriate time limit, provided that such requests are made in full compliance with the principles of equal treatment and transparency.]

[...]

[… a quite lengthy discussion about the discretion of the contracting authority to ask for the information ...]

[The author did not get into a discussion about the opinion of ChatGPT on this issue, because that was not the aim of the chat, however, this could be done in some other conversation.]

[…]

[… long explanation ...]

[...]

May 4th, 2023

[Editor’s note: apologies that some of the screenshots appear in a small font…].

[…]

Both links that ChatGPT gave are correct:

https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32014L0024

https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32014L0024&from=EN

However, both citations are wrong.

May 17th, 2023

[As you will see, ChatGPT doesn't give links anymore, so it may have learned a bit within these few weeks].

[Editor’s note: apologies again that the remainder of the screenshots appear in a small font…].

[...]

[Not to be continued.]

DŽEINA GAILE

My name is Džeina Gaile and I am a doctoral student at the University of Latvia. My research focuses on clarification of a submitted tender, but I am interested in many aspects of public procurement. Therefore, I supplement my knowledge as often as I can and have a Master of Laws in Public Procurement Law and Policy with Distinction from the University of Nottingham. I have also been practising procurement and work as a lawyer for a contracting authority. In a few words, a bit of a "procurement geek". In my free time, I enjoy walks with my dog, concerts, and social dancing.

________________

** This article was reviewed by Grammarly. Still, I hope it will not tell ChatGPT anything… [Editor's note – the draft was then further reviewed by a human, yours truly].

[1] To be fair, I must stress that at the bottom of the chat page, there is a disclaimer: “Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts. ChatGPT May 3 Version” or “Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts. ChatGPT May 12 Version” later. And, when you join the tool, there are several announcements that this is a work in progress.


ChatGPT in the Public Sector – should it be banned?

In ‘ChatGPT in the Public Sector – overhyped or overlooked?’ (24 Apr 2023), the Analysis and Research Team (ART) of the General Secretariat of the Council of the European Union provides a useful and accessible explanation of how ChatGPT works, as well as an interesting analysis of the risks and pitfalls of rushing to embed generative artificial intelligence (GenAI), and large language models (LLMs) in particular, in the functioning of the public administration.

The analysis stresses the risks stemming from ‘inaccurate, biased, or nonsensical’ GenAI outputs and, in particular, that ‘the key principles of public administration such as accountability, transparency, impartiality, or reliability need to be considered thoroughly in the [GenAI] integration process’.

The paper provides a helpful introduction to how LLMs work and their technical limitations. It then maps potential uses in the public administration, assesses the potential impact of their use on the European principles of public sector administration, and then suggests some measures to mitigate the relevant risks.

This analysis is helpful but, in my view, it is already captured by the presumption that LLMs are here to stay and that what regulators can do is just try to minimise their potential negative impacts—which implies accepting that there will remain unaddressed impacts. By referring to general principles of public administration, rather than eg the right to good administration under the EU Charter of Fundamental Rights, the analysis is also unnecessarily lenient.

I find this type of discourse dangerous and troubling because it facilitates the adoption of digital technologies that cannot meet current legal requirements and guarantees of individual rights. This is clear from the paper itself, although the implications of part of the analysis are not sufficiently explored, in my view.

The paper has a final section where it explicitly recognises that, while some risks might be mitigated by technological advancements, other risks are of a more structural nature and cannot be fully corrected despite best efforts. The paper then lists a very worrying panoply of such structural issues (at 16):

  • ‘This is the case for detecting and removing biases in training data and model outputs. Efforts to sanitize datasets can even worsen biases’.

  • ‘Related to biases is the risk of a perpetuation of the status quo. LLMs mirror the values, habits and attitudes that are present in their training data, which does not leave much space for changing or underrepresented societal views. Relying on LLMs that have been trained with previously produced documents in a public administration severely limits the scope for improvement and innovation and risks leaving the public sector even less flexible than it is already perceived to be’.

  • ‘The ‘black box’ issue, where AI models arrive at conclusions or decisions without revealing the process of how they were reached is also primarily structural’.

  • ‘Regulating new technologies will remain a cat-and-mouse game. Acceleration risk (the emergence of a race to deploy new AI as quickly as possible at the expense of safety standards) is also an area of concern’.

  • ‘Finally […] a major structural risk lies in overreliance, which may be bolstered by rapid technological advances. This could lead to a lack of critical thinking skills needed to adequately assess and oversee the model’s output, especially amongst a younger generation entering a workforce where such models are already being used’.

In my view, beyond the paper’s suggestion that the way forward is to maintain human involvement to monitor the way LLMs (mal)function in the public sector, we should be discussing the imposition of a ban on the adoption of LLMs (and other digital technologies) by the public sector unless it can be positively proven that their deployment will not affect individual rights and more diffuse public interests, and that any residual risks are adequately mitigated.

The current state of affairs is unacceptable in that the lack of regulation allows for a quickly accelerating accumulation of digital deployments that generate risks to social and individual rights and goods. The need to reverse this situation underlies my proposal to permission the adoption of digital technologies by the public sector. Unless we take a robust approach to slowing down and carefully considering the implications of public sector digitalisation, we may be undermining public governance in ways that will be very difficult or impossible to undo. It is not too late, but it may be soon.


Free registration open for two events on procurement and artificial intelligence

Registration is now open for two free events on procurement and artificial intelligence (AI).

First, a webinar where I will be participating in discussions on the role of procurement in contributing to the public sector’s acquisition of trustworthy AI, and the associated challenges, from an EU and US perspective.

Second, a public lecture where I will present the findings of my research project on digital technologies and public procurement.

Please scroll down for details and links to registration pages. All welcome!

1. ‘Can Procurement Be Used to Effectively Regulate AI?’ | Free online webinar
30 May 2023 2pm BST / 3pm CET-SAST / 9am EST (90 mins)
Co-organised by University of Bristol Law School and George Washington University Law School.

Artificial Intelligence (“AI”) regulation and governance is a global challenge that is starting to generate different responses in the EU, US, and other jurisdictions. Such responses are, however, rather tentative and politically contested. A full regulatory system will take time to crystallise and be fully operational. In the meantime, despite this regulatory gap, the public sector is quickly adopting AI solutions for a wide range of activities and public services.

This process of accelerated AI adoption by the public sector places procurement as the (involuntary) gatekeeper, tasked with ‘AI regulation by contract’, at least for now. The procurement function is expected to design tender procedures and contracts capable of attaining goals of AI regulation (such as trustworthiness, explainability, or compliance with data protection and human and fundamental rights) that are so far eluding more general regulation.

This webinar will provide an opportunity to take a hard look at the likely effectiveness of AI regulation by contract through procurement and its implications for the commercialisation of public governance, focusing on key issues such as:

  • The interaction between tender design, technical standards, and negotiations.

  • The challenges of designing, monitoring, and enforcing contractual clauses capable of delivering effective ‘regulation by contract’ in the AI space.

  • The tension between the commercial value of tailored contractual design and the regulatory value of default clauses and standard terms.

  • The role of procurement disputes and litigation in shaping AI regulation by contract.

  • The alternative regulatory option of establishing mandatory prior approval by an independent regulator of projects involving AI adoption by the public sector.

This webinar will be of interest to those working on or researching the digitalisation of the public sector and AI regulation in general, as the discussion around procurement gatekeeping mirrors the main issues arising from broader trends.

I will have the great opportunity of discussing my research with Aris Georgopoulos (Nottingham), Scott Simpson (Digital Transformation Lead at U.S. Department of Homeland Security), and Liz Chirico (Acquisition Innovation Lead at Office of the Deputy Assistant Secretary of the Army). Jessica Tillipman (GW Law) will moderate the discussion and Q&A.

Registration: https://law-gwu-edu.zoom.us/webinar/register/WN_w_V9s_liSiKrLX9N-krrWQ.

2. ‘AI in the public sector: can procurement promote trustworthy AI and avoid commercial capture?’ | Free in-person public lecture
4 July 2023 2pm BST, Reception Room, Wills Memorial Building, University of Bristol
Organised by University of Bristol Law School, Centre for Global Law and Innovation

The public sector is quickly adopting artificial intelligence (AI) to manage its interactions with citizens and in the provision of public services – for example, using chatbots in official websites, automated processes and call-centres, or predictive algorithms.

There are inherent high stakes risks to this process of public governance digitalisation, such as bias and discrimination, unethical deployment, data and privacy risks, cyber security risks, or risks of technological debt and dependency on proprietary solutions developed by (big) tech companies.

However, as part of the UK Government’s ‘light touch’ ‘pro-innovation’ approach to digital technology regulation, the adoption of AI in the public sector remains largely unregulated. 

In this public lecture, I will present the findings of my research funded by the British Academy, analysing how, in this deregulatory context, the existing rules on public procurement fall short of protecting the public interest.

An alternative approach is required to create mechanisms of external independent oversight and mandatory standards to embed trustworthy AI requirements and to mitigate against commercial capture in the acquisition of AI solutions. 

Registration: https://www.eventbrite.co.uk/e/can-procurement-promote-trustworthy-ai-and-avoid-commercial-capture-tickets-601212712407.

External oversight and mandatory requirements for public sector digital technology adoption

© Mateo Mulder-Graells (2023).

I thought the time would never come, but the last piece of my book project puzzle is now more or less in place. After finding that procurement is not the right regulatory actor and does not have the best tools of ‘digital regulation by contract’, in this last draft chapter, I explore how to discharge procurement of the assigned digital regulation role to increase the likelihood of effective enforcement of desirable goals of public sector digital regulation.

I argue that this should be done through two inter-related regulatory interventions consisting of developing (1) a regulator tasked with the external oversight of the adoption of digital technologies by the public sector, as well as (2) a suite of mandatory requirements binding both public entities seeking to adopt digital technologies and technology providers, and both in relation to the digital technologies to be adopted by the public sector and the applicable governance framework.

Detailed analysis of these issues would require much more extensive treatment than this draft chapter can offer. The modest goal here is simply to stress the key attributes and functions that each of these two regulatory interventions should have to make a positive contribution to governing the transition towards a new model of public digital governance. In this blog post, I summarise the main arguments.

As ever, I would be most grateful for feedback: a.sanchez-graells@bristol.ac.uk. Especially as I will now turn my attention to seeing how the different pieces of the puzzle fit together, while I edit the manuscript for submission before end of July 2023.

Institutional deficit and risk of capture

In the absence of an alternative institutional architecture (or while it is put in place), procurement is expected to develop a regulatory gatekeeping role in relation to the adoption of digital technologies by the public sector, which is in turn expected to have norm-setting and market-shaping effects across the economy. This could be seen as a way of bypassing or postponing decisions on regulatory architecture.

However, earlier analysis has shown that the procurement function is not the right institution to which to assign a digital regulation role, as it cannot effectively discharge such a duty. This highlights the existence of an institutional deficit in the process of public sector digitalisation, as well as in relation to digital technology regulation more broadly. An alternative approach to institutional design is required, and it can be delivered through the creation of a notional ‘AI in Public Sector Authority’ (AIPSA).

Earlier analysis has also shown that there are pervasive risks of regulatory capture and commercial determination of the process of public sector digitalisation stemming from reliance on standards and benchmarks created by technology vendors or by bodies heavily influenced by the tech industry. AIPSA could safeguard against such risk through controls over the process of standard adoption. AIPSA could also guard against excessive experimentation with digital technologies by creating robust controls to counteract their policy irresistibility.

Overcoming the institutional deficit through AIPSA

The adoption of digital technologies in the process of public sector digitalisation creates regulatory challenges that require external oversight, as procurement is unable to effectively regulate this process. A particularly relevant issue concerns whether such oversight should be entrusted to a new regulator (broad approach), or whether it would suffice to assign new regulatory tasks to existing regulators (narrow approach).

I submit that the narrow approach is inadequate because it perpetuates regulatory fragmentation and can lead to undesirable spillovers or knock-on effects, whether the new regulatory tasks are assigned to data protection authorities, (quasi)regulators with a ‘sufficiently close’ regulatory remit in relation to information and communications technologies (ICT) (such as eg the Agency for Digital Italy (AgID), or the Dutch Advisory Council on IT assessment (AcICT)), or newly created centres of expertise in algorithmic regulation (eg the French PEReN). Such an ‘organic’ or ‘incremental’ approach to institutional development could overshadow important design considerations, as well as embed biases due to the institutional drivers of the existing (quasi)regulators.

To avoid these issues, I advocate a broader or more joined up approach in the proposal for AIPSA. AIPSA would be an independent authority with the statutory function of promoting overarching goals of digital regulation, and specifically tasked with regulating the adoption and use of digital technologies by the public sector, whether through in-house development or procurement from technology providers. AIPSA would also absorb regulatory functions in cognate areas, such as the governance of public sector data, and integrate work in areas such as cyber security. It would also serve a coordinating function with the data protection authority.

In the draft chapter, I stress three fundamental aspects of AIPSA’s institutional design: regulatory coherence, independence and expertise. Independence and expertise would be the two most crucial factors. AIPSA would need to be designed in a way that ensured both political and industry independence, with the issue of political independence having particular salience and requiring countervailing accountability mechanisms. Relatedly, the importance of digital capabilities to effectively exercise a digital regulation role cannot be overemphasised. It is not only important in relation to the active aspects of the regulatory role—such as control of standard setting, or permissioning or licensing of digital technology use (below)—but also in relation to the passive aspects of the regulatory role and, in particular, in relation to reactive engagement with industry. High levels of digital capability would be essential to allow AIPSA to effectively scrutinise claims from those that sought to influence its operation and decision-making, as well as to reduce AIPSA’s dependence on industry-provided information.

Safeguarding against regulatory capture and policy irresistibility

Regulating the adoption of digital technologies in the process of public sector digitalisation requires establishing the substantive requirements that such technology needs to meet, as well as the governance requirements needed to ensure its proper use. AIPSA’s role in setting mandatory requirements for public sector digitalisation would be twofold.

First, through an approval or certification mechanism, it would control the process of standardisation to neutralise risks of regulatory capture and commercial determination. Where no standards were suitable for approval or certification, AIPSA would develop them.

Second, through a permissioning or licensing process, AIPSA would ensure that decisions on the adoption of digital technologies by the public sector are not driven by ‘policy irresistibility’, that they are supported by clear governance structures and draw on sufficient resources, and that adherence to the goals of digital regulation is sustained throughout the implementation and use of digital technologies by the public sector and subject to proactive transparency requirements.

The draft chapter provides more details on both issues.

If not AIPSA … then clearly not procurement

There can be many objections to the proposals developed in this draft chapter, which would still require further development. However, most of the objections would likely also apply to the use of procurement as a tool of digital regulation. The functions expected of AIPSA closely match those expected of the procurement function under the approach to ‘digital regulation by contract’. Challenges to AIPSA’s ability to discharge such functions would be applicable to any public buyer seeking to achieve the same goals. Similarly, challenges to AIPSA’s independence or need for accountability would equally apply to atomised decision-making by public buyers.

While the proposal is necessarily imperfect, I submit that it would improve upon the emerging status quo and that, in discharging procurement of the digital regulation role, it would make a positive contribution to the governance of the transition to a new model of digital public governance.

The draft chapter is available via SSRN: Albert Sanchez-Graells, ‘Discharging procurement of the digital regulation role: external oversight and mandatory requirements for public sector digital technology adoption’.

Procuring AI without understanding it. Way to go?

The UK’s Digital Regulation Cooperation Forum (DRCF) has published a report on Transparency in the procurement of algorithmic systems (for short, the ‘AI procurement report’). Some of DRCF’s findings in the AI procurement report are astonishing, and should attract significant attention. The one finding that should definitely not go unnoticed is that, according to DRCF, ‘Buyers can lack the technical expertise to effectively scrutinise the [algorithmic systems] they are procuring, whilst vendors may limit the information they share with buyers’ (at 9). While this is not surprising, the ‘normality’ with which this finding is reported evidences the simple fact that, at least in the UK, it is accepted that the AI field is dominated by technology providers, that all institutional buyers are ‘AI consumers’, and that regulators do not seem to see a need to intervene to rebalance the situation.

The report is not specifically about public procurement of AI, but its content is relevant to assessing the conditions surrounding the acquisition of AI by the public sector. First, the report covers algorithmic systems other than AI—that is, automation based on simpler statistical techniques—but the issues it raises can only be more acute in relation to AI than in relation to simpler algorithmic systems (as the report itself highlights, at 9). Second, the report does not make explicit whether the mix of buyers from which it draws evidence includes public as well as private buyers. However, given the public sector’s digital skills gap, there is no reason to believe that the limited knowledge and asymmetries of information documented in the AI procurement report are less acute for public buyers than private buyers.

Moreover, the AI procurement report goes as far as to suggest that public sector procurement is somewhat in a better position than private sector procurement of AI because there are multiple guidelines focusing on public procurement (notably, the Guidelines for AI procurement). Given the shortcomings in those guidelines (see here for earlier analysis), this can hardly provide any comfort.

The AI procurement report evidences that UK (public and private) buyers are procuring AI they do not understand and cannot adequately monitor. This is extremely worrying. The AI procurement report presents evidence gathered by DRCF in two workshops with 23 vendors and buyers of algorithmic systems in Autumn 2022. The evidence base is qualitative and draws from a limited sample, so it may need to be approached with caution. However, its findings are sufficiently worrying as to require a much more robust policy intervention than the proposals in the recently released White Paper ‘AI regulation: a pro-innovation approach’ (for discussion, see here). In this blog post, I summarise the findings of the AI procurement report that I find most problematic, and link this evidence to the failing attempt at using public procurement to regulate the acquisition of AI by the public sector in the UK.

Misinformed buyers with limited knowledge and no ability to oversee

In its report, DRCF stresses that ‘some buyers lacked understanding of [algorithmic systems] and could struggle to recognise where an algorithmic process had been integrated into a system they were procuring’, and that ‘[t]his issue may be compounded where vendors fail to note that a solution includes AI or its subset, [machine learning]’ (at 9). The report goes on to stress that ‘[w]here buyers have insufficient information about the development or testing of an [algorithmic system], there is a risk that buyers could be deploying an [algorithmic system] that is unlawful or unethical. This risk is particularly acute for high-risk applications of [algorithmic systems], for example where an [algorithmic system] determines a person's access to employment or housing or where the application is in a highly regulated sector such as finance’ (at 10). Needless to say, however, this applies to a much larger set of public sector areas of activity, and the problems are not limited to high-risk applications involving individual rights, but also to those that involve high stakes from a public governance perspective.

Similarly, DRCF stresses that while ‘vendors use a range of performance metrics and testing methods … without appropriate technical expertise or scrutiny, these metrics may give buyers an incomplete picture of the effectiveness of an [algorithmic system]’; ‘vendors [can] share performance metrics that overstate the effectiveness of their [algorithmic system], whilst omitting other metrics which indicate lower effectiveness in other areas. Some vendors raised concerns that their competitors choose the most favourable (i.e., the highest) performance metric to win procurement contracts‘, while ‘not all buyers may have the technical knowledge to understand which performance metrics are most relevant to their procurement decision’ (at 10). This demolishes any hope that buyers facing this type of knowledge gap and asymmetry of information can compare algorithmic systems in a meaningful way.

The issue is further compounded by the lack of standards and metrics. The report stresses this issue: ‘common or standard metrics do not yet exist within industry for the evaluation of [algorithmic systems]. For vendors, this can make it more challenging to provide useful information, and for buyers, this lack of consistency can make it difficult to compare different [algorithmic systems]. Buyers also told us that they would find more detail on the performance of the [algorithmic system] being procured helpful - including across a range of metrics. The development of more consistent performance metrics could also help regulators to better understand how accurate an [algorithmic system] is in a specific context’ (at 11).

Finally, the report also stresses that vendors have every incentive to withhold information from buyers, both because ‘sharing too much technical detail or knowledge could allow buyers to re-develop their product’ and because ‘they remain concerned about revealing commercially sensitive information to buyers’ (at 10). In that context, given the limited knowledge and understanding documented above, it can even be difficult for a buyer to ascertain which information it has not been given.

The DRCF AI procurement report then focuses on mechanisms that could alleviate some of the issues it identifies, such as standardisation, certification and audit mechanisms, as well as AI transparency registers. However, these mechanisms raise significant questions, not only in relation to their practical implementation, but also regarding the continued reliance on the AI industry (and thus, AI vendors) for the development of some of their foundational elements—crucially, standards and metrics. To a large extent, the AI industry would be setting the benchmark against which its processes, practices and performance are to be measured. Even if a third party is to carry out such benchmarking or compliance analysis in the context of AI audits, the cards can already be stacked against buyers.

Not the way forward for the public sector (in the UK)

The DRCF AI procurement report should give pause to anyone hoping that (public) buyers can drive the process of development and adoption of these technologies. The AI procurement report clearly evidences that buyers with knowledge disadvantages and information asymmetries are at the mercy of technology providers—and/or third-party certifiers (in the future). The evidence in the report clearly suggests that this is a process driven by technology providers and, more worryingly, that (most) buyers are in no position to critically assess and discipline vendor behaviour.

The question arises why any buyer would acquire and deploy a technology it does not understand and is in no position to adequately assess. But the hype and hard-selling surrounding AI, coupled with its abstract potential to generate significant administrative and operational advantages, seem too hard to resist, both for private sector entities seeking to gain an edge (or at least not to lag behind competitors) in their markets, and for public sector entities faced with AI’s policy irresistibility.

In the public procurement context, the insights from DRCF’s AI procurement report stress that the fundamental imbalance between buyers and vendors of digital technologies undermines the regulatory role that public procurement is expected to play. Only a buyer that had equal or superior technical ability and that managed to force full disclosure of the relevant information from the technology provider would be in a position to (try to) dictate the terms of the acquisition and deployment of the technology, including through the critical assessment and, if needed, modification of emerging technical standards that could well fall short of the public interest embedded in the process of public sector digitalisation—though it would face significant limitations.

This is an ideal to which most public buyers cannot aspire. In fact, in the UK, the position is the reverse and the current approach is to try to facilitate experimentation with digital technologies for public buyers with no knowledge or digital capability whatsoever—see the Crown Commercial Service’s Artificial Intelligence Dynamic Purchasing System (CCS AI DPS), which explicitly targets inexperienced and, to put it politely, digitally novice public buyers by stressing that ‘If you are new to AI you will be able to procure services through a discovery phase, to get an understanding of AI and how it can benefit your organisation’.

Given the evidence in the DRCF AI report, this approach can only inflate the number of public sector buyers at the mercy of technology providers. Especially because, while the CCS AI DPS tries to address some issues, such as ethical risks (though the effectiveness of this can also be queried), it makes clear that ‘quality, price and cultural fit (including social value) can be assessed based on individual customer requirements’. With ‘AI quality’ capturing all the problematic issues mentioned above (and, notably, AI performance), the CCS AI DPS is highly problematic.

If nothing else, the DRCF AI procurement report gives further credence to the need to change regulatory tack. Most importantly, the report evidences that there is a very real risk that public sector entities are currently buying AI they do not understand and are in no position to effectively control post-deployment. This risk needs to be addressed if the UK public is to trust the accelerating process of public sector digitalisation. As formulated elsewhere, this calls for a series of policy and regulatory interventions.

Ensuring that the adoption of AI in the public sector operates in the public interest and for the benefit of all citizens requires new legislation supported by a new mechanism of external oversight and enforcement. New legislation is required to impose specific minimum requirements of eg data governance and algorithmic impact assessment and related transparency across the public sector, and to address the lack of standards and metrics without relying on their development by and within the AI industry. Primary legislation would need to be supplemented by statutory guidance of a much more detailed and actionable nature than eg the current Guidelines for AI procurement. These developed requirements could then be embedded into public contracts by reference, thus protecting public buyers from vendor standard cherry-picking, as well as providing a clear benchmark against which to assess tenders.

Legislation would also be necessary to create an independent authority—eg an ‘AI in the Public Sector Authority’ (AIPSA)—with powers to enforce those minimum requirements across the public sector. AIPSA is necessary, as oversight of the use of AI in the public sector does not currently fall within the scope of any specific sectoral regulator and the general regulators (such as the Information Commissioner’s Office) lack procurement-specific knowledge. Moreover, units within Cabinet Office (such as the Office for AI or the Central Digital and Data Office) lack the required independence. The primary role of AIPSA would be to constrain the process of adoption of AI by the public sector, especially where the public buyer lacks digital capacity and is thus at risk of capture or overpowering by technological vendors.

In that regard, and until sufficient in-house capability is built to ensure adequate understanding of the technologies being procured (especially in the case of complex AI), and adequate ability to manage digital procurement governance requirements independently, AIPSA would have to approve all projects to develop, procure and deploy AI in the public sector to ensure that they meet the required legislative safeguards in terms of data governance, impact assessment, etc. This approach could progressively be relaxed through eg block exemption mechanisms, once there is sufficiently detailed understanding and guidance on specific AI use cases, and/or in relation to public sector entities that could demonstrate sufficient in-house capability, eg through a mechanism of independent certification in accordance with benchmarks set by AIPSA, or certification by AIPSA itself.

In parallel, it would also be necessary for the Government to develop a clear and sustainably funded strategy to build in-house capability in the public sector, including clear policies on the minimisation of expenditure directed at the engagement of external consultants and the development of guidance on how to ensure the capture and retention of the knowledge developed within outsourced projects (including, but not only, through detailed technical documentation).

None of this features in the recently released White Paper ‘AI regulation: a pro-innovation approach’. However, DRCF’s AI procurement report further evidences that these policy interventions are necessary. Else, the UK will be a jurisdiction where the public sector acquires and deploys technology it does not understand and cannot control. Surely, this is not the way to go.

UK's 'pro-innovation approach' to AI regulation won't do, particularly for public sector digitalisation

Regulating artificial intelligence (AI) has become the challenge of the time. This is a crucial area of regulatory development and there are increasing calls—including from those driving the development of AI—for robust regulatory and governance systems. In this context, more details have now emerged on the UK’s approach to AI regulation.

Swimming against the tide, and seeking to diverge from the EU’s regulatory agenda and the EU AI Act, the UK announced a light-touch ‘pro-innovation approach’ in its July 2022 AI regulation policy paper. In March 2023, the same approach was supported by a Report of the Government Chief Scientific Adviser (the ‘GCSA Report’), and is now further developed in the White Paper ‘AI regulation: a pro-innovation approach’ (the ‘AI WP’). The UK Government has launched a public consultation that will run until 21 June 2023.

Given the relevance of the issue, it can be expected that the public consultation will attract a large volume of submissions, and that the ‘pro-innovation approach’ will be heavily criticised. Indeed, there is an on-going preparatory Parliamentary Inquiry on the Governance of AI that has already collected a wealth of evidence exploring the pros and cons of the regulatory approach outlined there. Moreover, initial reactions eg by the Public Law Project, the Ada Lovelace Institute, or the Royal Statistical Society have been (to different degrees) critical of the lack of regulatory ambition in the AI WP—while, as could be expected, think tanks closely linked to the development of the policy, such as the Alan Turing Institute, have expressed more positive views.

Whether the regulatory approach will shift as a result of the expected pushback is unclear. However, given that the AI WP follows the same deregulatory approach first suggested in 2018 and is strongly politically/policy entrenched—for the UK Government has self-assessed this approach as ‘world leading’ and claims it will ‘turbocharge economic growth’—it is doubtful that much will change as a result of the public consultation.

That does not mean we should not engage with the public consultation—quite the opposite. In the face of the UK Government’s dereliction of duty, or lack of ideas, it is more important than ever that there is a robust pushback against the deregulatory approach being pursued. This is especially true in the context of public sector digitalisation and the adoption of AI by the public administration and in the provision of public services, where the Government (unsurprisingly) is unwilling to create regulatory safeguards to protect citizens from its own action.

In this blogpost, I sketch my main areas of concern with the ‘pro-innovation approach’ in the GCSA Report and AI WP, which I will further develop for submission to the public consultation, building on earlier views. Feedback and comments would be gratefully received: a.sanchez-graells@bristol.ac.uk.

The ‘pro-innovation approach’ in the GCSA Report — squaring the circle?

In addition to proposals on the intellectual property (IP) regulation of generative AI, the opening up of public sector data, transport-related, or cyber security interventions, the GCSA Report focuses on ‘core’ regulatory and governance issues. The report stresses that regulatory fragmentation is one of the key challenges, as is the difficulty for the public sector in ‘attracting and retaining individuals with relevant skills and talent in a competitive environment with the private sector, especially those with expertise in AI, data analytics, and responsible data governance‘ (at 5). The report also further hints at the need to boost public sector digital capabilities by stressing that ‘the government and regulators should rapidly build capability and know-how to enable them to positively shape regulatory frameworks at the right time‘ (at 13).

Although the rationale is not very clearly stated, to bridge regulatory fragmentation and facilitate the pooling of digital capabilities from across existing regulators, the report makes a central proposal to create a multi-regulator AI sandbox (at 6-8). The report suggests that it could be convened by the Digital Regulatory Cooperation Forum (DRCF)—which brings together four key regulators (the Information Commissioner’s Office (ICO), Office of Communications (Ofcom), the Competition and Markets Authority (CMA) and the Financial Conduct Authority (FCA))—and that DRCF should look at ways of ‘bringing in other relevant regulators to encourage join up’ (at 7).

The report recommends that the AI sandbox should operate on the basis of a ‘commitment from the participant regulators to make joined-up decisions on regulations or licences at the end of each sandbox process and a clear feedback loop to inform the design or reform of regulatory frameworks based on the insights gathered. Regulators should also collaborate with standards bodies to consider where standards could act as an alternative or underpin outcome-focused regulation’ (at 7).

Therefore, the AI sandbox would not only be multi-regulator, but also encompass (in some way) standard-setting bodies (presumably UK ones only, though), without issues of public-private interaction in decision-making implying the exercise of regulatory public powers, or issues around regulatory capture and risks of commercial determination, being considered at all. The report in general is extremely industry-orientated, eg in stressing in relation to the overarching pacing problem that ‘for emerging digital technologies, the industry view is clear: there is a greater risk from regulating too early’ (at 5), without this being in any way balanced with clear (non-industry) views that the biggest risk is actually in regulating too late and that we are collectively frog-boiling into a ‘runaway AI’ fiasco.

Moreover, confusingly, despite the fact that the sandbox would be hosted by DRCF (of which the ICO is a leading member), the GCSA Report indicates that the AI sandbox ‘could link closely with the ICO sandbox on personal data applications’ (at 8). The fact that the report is itself unclear as to whether eg AI applications with data protection implications should be subjected to one or two sandboxes, or the extent to which the general AI sandbox would need to be integrated with sectoral sandboxes for non-AI regulatory experimentation, already indicates the complexity and dubious practical viability of the suggested approach.

It is also unclear why multiple sector regulators should be involved in any given iteration of a single AI sandbox where there may be no projects within their regulatory remit and expertise. The alternative approach of having an open or rolling AI sandbox mechanism led by a single AI authority, which would then draw expertise and work in collaboration with the relevant sector regulator as appropriate on a per-project basis, seems preferable. While some DRCF members could be expected to have to participate in a majority of sandbox projects (eg CMA and ICO), others would probably have a much less constant presence (eg Ofcom, or certainly the FCA).

Remarkably, despite this recognition of the functional need for a centralised regulatory approach and a single point of contact (primarily for industry’s convenience), the GCSA Report implicitly supports the 2022 AI regulation policy paper’s approach to not creating an overarching cross-sectoral AI regulator. The GCSA Report tries to create a ‘non-institutionalised centralised regulatory function’, nested under DRCF. In practice, however, implementing the recommendation for a single AI sandbox would create the need for the further development of the governance structures of the DRCF (especially if it was to grow by including many other sectoral regulators), or whichever institution ‘hosted it’, or else risk creating a non-institutional AI regulator with the related difficulties in ensuring accountability. This would add a layer of deregulation to the deregulatory effect that the sandbox itself creates (see eg Ranchordas (2021)).

The GCSA Report seems to try to square the circle of regulatory fragmentation by relying on cooperation as a centralising regulatory device, but it does this solely for the industry’s benefit and convenience, without paying any consideration to the future effectiveness of the regulatory framework. This is hard to understand, given the report’s identification of conflicting regulatory constraints, or in its terminology ‘incentives’: ‘The rewards for regulators to take risks and authorise new and innovative products and applications are not clear-cut, and regulators report that they can struggle to trade off the different objectives covered by their mandates. This can include delivery against safety, competition objectives, or consumer and environmental protection, and can lead to regulator behaviour and decisions that prioritise further minimising risk over supporting innovation and investment. There needs to be an appropriate balance between the assessment of risk and benefit’ (at 5).

This not only frames risk-minimisation as a negative regulatory outcome (and further feeds into the narrative that precautionary regulatory approaches are somehow not legitimate because they run against industry goals—which deserves strong pushback, see eg Kaminski (2022)), but also shows a main gap in the report’s proposal for the single AI sandbox. If each regulator has conflicting constraints, what evidence (if any) is there that collaborative decision-making will reduce, rather than exacerbate, such regulatory clashes? Are decisions meant to be arrived at by majority voting or in any other way expected to deactivate (some or most) regulatory requirements in view of (perceived) gains in relation to other regulatory goals? Why has there been no consideration of eg the problems encountered by concurrency mechanisms in the application of sectoral and competition rules (see eg Dunne (2014), (2020) and (2021)), as an obvious and immediate precedent of the same type of regulatory coordination problems?

The GCSA Report also seems to assume that collaboration through the AI sandbox would be resource neutral for participating regulators, whereas it seems reasonable to presume that this additional layer of regulation (even if not institutionalised) would require further resources. And, in any case, there does not seem to be much consideration as to the viability of asking resource-strapped regulators to create an AI sandbox where they can (easily) be out-skilled and overpowered by industry participants.

In my view, the GCSA Report already points at significant weaknesses in the resistance to creating any new authorities, despite the obvious functional need for centralised regulation, which is one of the main weaknesses, or the single biggest weakness, in the AI WP—as well as in relation to a lack of strategic planning around public sector digital capabilities, despite well-recognised challenges (see eg Committee of Public Accounts (2021)).

The ‘pro-innovation approach’ in the AI WP — a regulatory black hole, privatisation of AI regulation, or both

The AI WP envisages an ‘innovative approach to AI regulation [that] uses a principles-based framework for regulators to interpret and apply to AI within their remits’ (para 36). It expects the framework to be ‘pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative’ (para 37). As will become clear, however, such ‘innovative approach’ solely amounts to the formulation of high-level, broad, open-textured and incommensurable principles to inform a soft law push to the development of regulatory practices aligned with such principles in a highly fragmented and incomplete regulatory landscape.

The regulatory framework would be built on four planks (para 38): [i] an AI definition (paras 39-42); [ii] a context-specific approach (ie a ‘use-based’ approach, rather than a ‘technology-led’ approach, see paras 45-47); [iii] a set of cross-sectoral principles to guide regulator responses to AI risks and opportunities (paras 48-54); and [iv] new central functions to support regulators to deliver the AI regulatory framework (paras 70-73). In reality, though, there will be only two ‘pillars’ of the regulatory framework, and they do not involve any new institutions or rules. The AI WP vision thus largely seems to be that AI can be regulated in the UK in a world-leading manner without doing anything much at all.

AI Definition

The UK’s definition of AI will trigger substantive discussions, especially as it seeks to build it around ‘the two characteristics that generate the need for a bespoke regulatory response’: ‘adaptivity’ and ‘autonomy’ (para 39). Discussing the definitional issue is beyond the scope of this post but, on the specific identification of the ‘autonomy’ of AI, it is worth highlighting that this is an arguably flawed regulatory approach to AI (see Soh (2023)).

No new institutions

The AI WP makes clear that the UK Government has no plans to create any new AI regulator, either with a cross-sectoral (eg general AI authority) or sectoral remit (eg an ‘AI in the public sector authority’, as I advocate for). The Ministerial Foreword to the AI WP already stresses that ‘[t]o ensure our regulatory framework is effective, we will leverage the expertise of our world class regulators. They understand the risks in their sectors and are best placed to take a proportionate approach to regulating AI’ (at p2). The AI WP further stresses that ‘[c]reating a new AI-specific, cross-sector regulator would introduce complexity and confusion, undermining and likely conflicting with the work of our existing expert regulators’ (para 47). This however seems to presume that a new cross-sector AI regulator would be unable to coordinate with existing regulators, despite the institutional architecture of the regulatory framework foreseen in the AI WP entirely relying on inter-regulator collaboration (!).

No new rules

There will also not be new legislation underpinning regulatory activity, although the Government claims that the AI WP, ‘alongside empowering regulators to take a lead, [is] also setting expectations‘ (at p3). The AI WP claims to develop a regulatory framework underpinned by five principles to guide and inform the responsible development and use of AI in all sectors of the economy: [i] Safety, security and robustness; [ii] Appropriate transparency and explainability; [iii] Fairness; [iv] Accountability and governance; and [v] Contestability and redress (para 10). However, they will not be put on a statutory footing (initially); ‘the principles will be issued on a non-statutory basis and implemented by existing regulators’ (para 11). While there is some detail on the intended meaning of these principles (see para 52 and Annex A), the principles necessarily lack precision and, worse, there is a conflation of the principles with other (existing) regulatory requirements.

For example, it is surprising that the AI WP describes fairness as implying that ‘AI systems should (sic) not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes‘ (emphasis added), and stresses the expectation ‘that regulators’ interpretations of fairness will include consideration of compliance with relevant law and regulation’ (para 52). This encapsulates the risks that principles-based AI regulation ends up eroding compliance with and enforcement of current statutory obligations. A principle of AI fairness cannot modify or exclude existing legal obligations, and it should not risk doing so either.

Moreover, the AI WP suggests that, even if the principles are supported by a statutory duty for regulators to have regard to them, ‘while the duty to have due regard would require regulators to demonstrate that they had taken account of the principles, it may be the case that not every regulator will need to introduce measures to implement every principle’ (para 58). This conflates two issues. On the one hand, the need for activity subjected to regulatory supervision to comply with all principles and, on the other, the need for a regulator to take corrective action in relation to any of the principles. It should be clear that regulators have a duty to ensure that all principles are complied with in their regulatory remit, which does not seem to entirely or clearly follow from the weaker duty to have due regard to the principles.

Perpetuating regulatory gaps, in particular regarding public sector digitalisation

As a consequence of the lack of creation of new regulators and the absence of new legislation, it is unclear whether the ‘regulatory strategy’ in the AI WP will have any real world effects within existing regulatory frameworks, especially as the most ambitious intervention is to create ‘a statutory duty on regulators requiring them to have due regard to the principles’ (para 12)—but the Government may decide not to introduce it if ‘monitoring of the effectiveness of the initial, non-statutory framework suggests that a statutory duty is unnecessary‘ (para 59).

However, what is already clear is that there is no new AI regulation on the horizon, despite the fact that the AI WP recognises that ‘some AI risks arise across, or in the gaps between, existing regulatory remits‘ (para 27), that ‘there may be AI-related risks that do not clearly fall within the remits of the UK’s existing regulators’ (para 64), and the obvious and worrying existence of high risks to fundamental rights and values (para 4 and paras 22-25). The AI WP is naïve, to say the least, in setting out that ‘[w]here prioritised risks fall within a gap in the legal landscape, regulators will need to collaborate with government to identify potential actions. This may include identifying iterations to the framework such as changes to regulators’ remits, updates to the Regulators’ Code, or additional legislative intervention’ (para 65).

Hoping that such risk identification and gap analysis will take place without assigning specific responsibility for it—and seeking to exempt the Government from such responsibility—seems a bit too much to ask. In fact, this is at odds with the graphic depiction of how the AI WP expects the system to operate. As noted in (1) in the graph below, it is clear that the identification of risks that are cross-cutting or new (unregulated) risks that warrant intervention is assigned to a ‘central risk function’ (more below), not the regulators. Importantly, the AI WP indicates that such central function ‘will be provided from within government’ (para 15 and below). This then raises two questions: (a) who, if anyone, will have the responsibility to proactively screen for such risks, and (b) why has the Government not already taken action to close the gaps it recognises exist in the current legal landscape?

AI WP Figure 2: Central risks function activities.

This perpetuates the current regulatory gaps, in particular in sectors without a regulator or with regulators with very narrow mandates—such as the public sector and, to a large extent, public services. Importantly, this approach does not create any prohibition of impermissible AI uses, nor does it set any (workable) minimum requirements for the deployment of AI in high-risk uses, especially in the public sector. The contrast with the EU AI Act could not be starker and, in this aspect in particular, UK citizens should be very worried that the UK Government is not committing to any safeguards on the way technology can be used in eg determining access to public services, or by the law enforcement and judicial system. More generally, it is very worrying that the AI WP does not foresee any safeguards in relation to the quickly accelerating digitalisation of the public sector.

Loose central coordination leading to AI regulation privatisation

Remarkably, and in a similar functional disconnect as that of the GCSA Report (above), the decision not to create any new regulator/s (para 15) is taken in the same breath as the AI WP recognises that the small coordination layer within the regulatory architecture proposed in the 2022 AI regulation policy paper (ie, largely, the approach underpinning the DRCF) has been heavily criticised (para 13). The AI WP recognises that ‘the DRCF was not created to support the delivery of all the functions we have identified or the implementation of our proposed regulatory framework for AI’ (para 74).

The AI WP also stresses how ‘[w]hile some regulators already work together to ensure regulatory coherence for AI through formal networks like the AI and digital regulations service in the health sector and the Digital Regulation Cooperation Forum (DRCF), other regulators have limited capacity and access to AI expertise. This creates the risk of inconsistent enforcement across regulators. There is also a risk that some regulators could begin to dominate and interpret the scope of their remit or role more broadly than may have been intended in order to fill perceived gaps in a way that increases incoherence and uncertainty’ (para 29), which points at a strong functional need for a centralised approach to AI regulation.

To try and mitigate those regulatory risks and shortcomings, the AI WP proposes the creation of ‘a number of central support functions’, such as [i] a central function monitoring the overall regulatory framework’s effectiveness and the implementation of the principles; [ii] central risk monitoring and assessment; [iii] horizon scanning; [iv] supporting testbeds and sandboxes; [v] advocacy, education and awareness-raising initiatives; or [vi] promoting interoperability with international regulatory frameworks (para 14, see also para 73). Cryptically, the AI WP indicates that ‘central support functions will initially be provided from within government but will leverage existing activities and expertise from across the broader economy’ (para 15). Quite how this can be effectively done outwith a clearly defined, adequately resourced and durable institutional framework is anybody’s guess. In fact, the AI WP recognises that this approach ‘needs to evolve’ and that Government needs to understand how ‘existing regulatory forums could be expanded to include the full range of regulators‘, what ‘additional expertise government may need’, and the ‘most effective way to convene input from across industry and consumers to ensure a broad range of opinions‘ (para 77).

While the creation of a regulator seems a rather obvious answer to all these questions, the AI WP has rejected it in unequivocal terms. Is the AI WP a U-turn waiting to happen? Is the mention that ‘[a]s we enter a new phase we will review the role of the AI Council and consider how best to engage expertise to support the implementation of the regulatory framework’ (para 78) a placeholder for an imminent project to rejig the AI Council and turn it into an AI regulator? What is the place and role of the Office for AI and the Centre for Data Ethics and Innovation in all this?

Moreover, the AI WP indicates that the ‘proposed framework is aligned with, and supplemented by, a variety of tools for trustworthy AI, such as assurance techniques, voluntary guidance and technical standards. Government will promote the use of such tools’ (para 16). Relatedly, the AI WP relies on those mechanisms to avoid addressing issues of accountability across the AI life cycle, indicating that ‘[t]ools for trustworthy AI like assurance techniques and technical standards can support supply chain risk management. These tools can also drive the uptake and adoption of AI by building justified trust in these systems, giving users confidence that key AI-related risks have been identified, addressed and mitigated across the supply chain’ (para 84). Those tools are discussed in much more detail in part 4 of the AI WP (paras 106 ff). Annex A also creates a backdoor for technical standards to directly become the operationalisation of the general principles on which the regulatory framework is based, by explicitly identifying standards regulators may want to consider ‘to clarify regulatory guidance and support the implementation of risk treatment measures’.

This offloading of tricky regulatory issues to the emergence of private sector-led standards is simply an exercise in the transfer of regulatory power to those setting such standards, guidance and assurance techniques and, ultimately, a privatisation of AI regulation.

A different approach to sandboxes and testbeds?

The Government will take forward the GCSA recommendation to establish a regulatory sandbox for AI, which ‘will bring together regulators to support innovators directly and help them get their products to market. The sandbox will also enable us to understand how regulation interacts with new technologies and refine this interaction where necessary’ (p2). This is thus bound to hardwire some of the issues mentioned above in relation to the GCSA proposal, as well as being reflective of the general pro-industry approach of the AI WP, which is obvious in the framing that the regulators are expected to ‘support innovators directly and help them get their products to market’. Industrial policy seems to be shoehorned and mainstreamed across all areas of regulatory activity, at least in relation to AI (but it can then easily bleed into non-AI-related regulatory activities).

While the AI WP indicates the commitment to implement the AI sandbox recommended in the GCSA Report, it is by no means clear that the implementation will be in the way proposed in the report (ie a multi-regulator sandbox nested under the DRCF, with an expectation that it would develop a crucial coordination and regulatory centralisation effect). The AI WP indicates that the Government still has to explore ‘what service focus would be most useful to industry’ in relation to AI sandboxes (para 96), but it sets out the intention to ‘focus an initial pilot on a single sector, multiple regulator sandbox’ (para 97), which diverges from the GCSA Report’s proposal of a sandbox for ‘multiple sectors, multiple regulators’. While the public consultation intends to gather feedback on which industry sector is the most appropriate, I would bet that the financial services sector will be chosen and that the ‘regulatory innovation’ will simply result in some closer cooperation between the ICO and FCA.

Regulator capabilities — AI regulation on a shoestring?

The AI WP turns to the issue of regulator capabilities and stresses that ‘While our approach does not currently involve or anticipate extending any regulator’s remit, regulating AI uses effectively will require many of our regulators to acquire new skills and expertise’ (para 102), and that the Government has ‘identified potential capability gaps among many, but not all, regulators’ (para 103).

To try to (start to) address this fundamental issue in the context of a devolved and decentralised regulatory framework, the AI WP indicates that the Government will explore, for example, whether it is ‘appropriate to establish a common pool of expertise that could establish best practice for supporting innovation through regulatory approaches and make it easier for regulators to work with each other on common issues. An alternative approach would be to explore and facilitate collaborative initiatives between regulators – including, where appropriate, further supporting existing initiatives such as the DRCF – to share skills and expertise’ (para 105).

While the creation of ‘common regulatory capacity’ has been advocated by the Alan Turing Institute, and while this (or inter-regulator secondments, for example) could be a short-term fix, this approach seems to address the obvious challenge of adequately resourcing regulatory bodies without a medium- and long-term strategy to build up the digital capability of the public sector, and thus to perpetuate the current approach to AI regulation on a shoestring. The governance and organisational implications arising from the creation of a common pool of expertise need careful consideration, in particular as some of the likely dysfunctionalities are only marginally smaller than the current over-reliance on external consultants, or the ‘salami-slicing’ approach to regulatory and policy interventions that seems to bleed from the ‘agile’ management of technological projects into the realm of regulatory activity, which however requires institutional memory and the embedding of knowledge and expertise.

Digital procurement, PPDS and multi-speed datafication — some thoughts on the March 2023 PPDS Communication

The 2020 European strategy for data ear-marked public procurement as a high priority area for the development of common European data spaces for public administrations. The 2020 data strategy stressed that

Public procurement data are essential to improve transparency and accountability of public spending, fighting corruption and improving spending quality. Public procurement data is spread over several systems in the Member States, made available in different formats and is not easily possible to use for policy purposes in real-time. In many cases, the data quality needs to be improved.

To address those issues, the European Commission was planning to ‘Elaborate a data initiative for public procurement data covering both the EU dimension (EU datasets, such as TED) and the national ones’ by the end of 2020, which would be ‘complemented by a procurement data governance framework’ by mid 2021.

With a 2+ year delay, details for the creation of the public procurement data space (PPDS) were disclosed by the European Commission on 16 March 2023 in the PPDS Communication. The procurement data governance framework is now planned to be developed in the second half of 2023.

In this blog post, I offer some thoughts on the PPDS, its functional goals, likely effects, and the quickly closing window of opportunity for Member States to support its feasibility through an ambitious implementation of the new procurement eForms at domestic level (on which see earlier thoughts here).

1. The PPDS Communication and its goals

The PPDS Communication sets some lofty ambitions aligned with those of the closely-related process of procurement digitalisation, which the European Commission in its 2017 Making Procurement Work In and For Europe Communication already saw as not only an opportunity ‘to streamline and simplify the procurement process’, but also ‘to rethink fundamentally the way public procurement, and relevant parts of public administrations, are organised … [to seize] a unique chance to reshape the relevant systems and achieve a digital transformation’ (at 11-12).

Following the same rhetoric of transformation, the PPDS Communication now stresses that ‘Integrated data combined with the use of state-of-the-art and emerging analytics technologies will not only transform public procurement, but also give new and valuable insights to public buyers, policy-makers, businesses and interested citizens alike’ (at 2). It goes further to suggest that ‘given the high number of ecosystems concerned by public procurement and the amount of data to be analysed, the impact of AI in this field has a potential that we can only see a glimpse of so far’ (at 2).

The PPDS Communication claims that this data space ‘will revolutionise the access to and use of public procurement data:

  • It will create a platform at EU level to access for the first time public procurement data scattered so far at EU, national and regional level.

  • It will considerably improve data quality, availability and completeness, through close cooperation between the Commission and Member States and the introduction of the new eForms, which will allow public buyers to provide information in a more structured way.

  • This wealth of data will be combined with an analytics toolset including advanced technologies such as Artificial Intelligence (AI), for example in the form of Machine Learning (ML) and Natural Language Processing (NLP).’

A first comment or observation is that this rhetoric of transformation and revolution not only tends to create excessive expectations on what can realistically be delivered by the PPDS, but can also further fuel the ‘policy irresistibility’ of procurement digitalisation and thus eg generate excessive experimentation or investment into the deployment of digital technologies on the basis of such expectations around data access through PPDS (for discussion, see here). Policy-makers would do well to hold off on any investments and pilot projects seeking to exploit the data presumptively pooled in the PPDS until after its implementation. A closer look at the PPDS and the significant roadblocks towards its full implementation will shed further light on this issue.

2. What is the PPDS?

Put simply, the PPDS is a project to create a single data platform to bring into one place ‘all procurement data’ from across the EU—ie both data on above threshold contracts subject to mandatory EU-wide publication through TED (via eForms from October 2023), and data on below threshold contracts, the publication of which may be required by the domestic laws of the Member States, or be entirely voluntary for contracting authorities.

Given that above threshold procurement data is already (in the process of being) captured at EU level, the PPDS is very much about data on procurement not covered by the EU rules—which represents 80% of all public procurement contracts. As the PPDS Communication stresses

To unlock the full potential of public procurement, access to data and the ability to analyse it are essential. However, data from only 20% of all call for tenders as submitted by public buyers is available and searchable for analysis in one place [ie TED]. The remaining 80% are spread, in different formats, at national or regional level and difficult or impossible to re-use for policy, transparency and better spending purposes. In order (sic) words, public procurement is rich in data, but poor in making it work for taxpayers, policy makers and public buyers.

The PPDS thus intends to develop a ‘technical fix’ to gain a view on the below-threshold reality of procurement across the EU, by ‘pulling and pooling’ data from existing (and to be developed) domestic public contract registers and transparency portals. The PPDS is thus a mechanism for the aggregation of procurement data currently not available in (harmonised) machine-readable and structured formats (or at all).

As the PPDS Communication makes clear, the PPDS consists of four layers: (1) a user interface layer (ie a website and/or app), underpinned by (2) an analytics layer, which in turn is underpinned by (3) an integration layer that brings together and minimally quality-assures (4) the data layer, sourced from TED, Member State public contract registers (including those at sub-national level), and data from other sources (eg data on beneficial ownership).

The two top layers condense all potential advantages of the PPDS, with the analytics layer seeking to develop a ‘toolset including emerging technologies (AI, ML and NLP)‘ to extract data insights for a multiplicity of purposes (see below 3), and the top user interface seeking to facilitate differential data access for different types of users and stakeholders (see below 4). The two bottom layers, and in particular the data layer, are the ones doing all the heavy lifting. Unavoidably, without data, the PPDS risks being little more than an empty shell. As always, ‘no data, no fun’ (see below 5).

Importantly, the top three layers are centralised and the European Commission has responsibility (and funding) for developing them, while the bottom data layer is decentralised, with each Member State retaining responsibility for digitalising its public procurement systems and connecting its data sources to the PPDS. Member States are also expected to bear their own costs, although there is EU funding available through different mechanisms. This allocation of responsibilities follows the limited competence of the EU in this area of inter-administrative cooperation, which unfortunately heightens the risks of the PPDS becoming little more than an empty shell, unless Member States really take the implementation of eForms and the collaborative approach to the construction of the PPDS seriously (see below 6).

The PPDS Communication foresees a progressive implementation of the PPDS, with the goal of having ‘the basic architecture and analytics toolkit in place and procurement data published at EU level available in the system by mid-2023. By the end of 2024, all participating national publication portals would be connected, historic data published at EU level integrated and the analytics toolkit expanded. As of 2025, the system could establish links with additional external data sources’ (at 2). It will most likely be delayed, but that is not very important in the long run—especially as the already accrued delays are the ones that pose a significant limitation on the adequate rollout of the PPDS (see below 6).

3. PPDS’ expected functionality

The PPDS Communication sets expectations around the functionality that could be extracted from the PPDS by different agents and stakeholders.

For public buyers, in addition to reducing the burden of complying with different types of (EU-mandated) reporting, the PPDS Communication expects that ‘insights gained from the PPDS will make it much easier for public buyers to

  • team up and buy in bulk to obtain better prices and higher quality;

  • generate more bids per call for tenders by making calls more attractive for bidders, especially for SMEs and start-ups;

  • fight collusion and corruption, as well as other criminal acts, by detecting suspicious patterns;

  • benchmark themselves more accurately against their peers and exchange knowledge, for instance with the aim of procuring more green, social and innovative products and services;

  • through the further digitalisation and emerging technologies that it brings about, automate tasks, bringing about considerable operational savings’ (at 2).

This largely maps onto my analysis of likely applications of digital technologies for procurement management, assuming the data is there (see here).

The PPDS Communication also expects that policy-makers will ‘gain a wealth of insights that will enable them to predict future trends‘; that economic operators, and SMEs in particular, ‘will have an easy-to-use portal that gives them access to a much greater number of open call for tenders with better data quality‘, and that ‘Citizens, civil society, taxpayers and other interested stakeholders will have access to much more public procurement data than before, thereby improving transparency and accountability of public spending‘ (at 2).

Of all the expected benefits or functionalities, the most important ones are those attributed to public buyers and, in particular, the possibility of developing ‘category management’ insights (eg potential savings or benchmarking), systems of red flags in relation to corruption and collusion risks, and the automation of some tasks. However, unlocking most of these functionalities is not dependent on the PPDS, but rather on the existence of procurement data at the ‘right’ level.

For example, category management or benchmarking may be more relevant or adequate (as well as more feasible) at national than at supra-national level, and the development of systems of red flags can also take place at below-EU level, as can automation. Importantly, the development of such functionalities using pan-EU data, or data concerning more than one Member State, could bias the tools in a way that makes them less suited, or unsuitable, for deployment at national level (eg if the AI is trained on data concerning solely jurisdictions other than the one where it would be deployed).

In that regard, the expected functionalities arising from the PPDS require some further thought, and it can well be that, depending on implementation (in particular in relation to multi-speed datafication, as below 5), Member States are better off solely using domestic data rather than that coming from the PPDS. This is to say that the PPDS is not a solid reality, and that its enabling character will fluctuate with its implementation.

4. Differential procurement data access through PPDS

As mentioned above, the PPDS Communication stresses that ‘Citizens, civil society, taxpayers and other interested stakeholders will have access to much more public procurement data than before, thereby improving transparency and accountability of public spending’ (at 2). However, this does not mean that the PPDS will be (entirely) open data.

The Communication itself makes clear that ‘Different user categories (e.g. Member States, public buyers, businesses, citizens, NGOs, journalists and researchers) will have different access rights, distinguishing between public and non-public data and between participating Member States that share their data with the PPDS (PPDS members, …) and those that need more time to prepare’ (at 8). Relatedly, ‘PPDS members will have access to data which is available within the PPDS. However, even those Member States that are not yet ready to participate in the PPDS stand to benefit from implementing the principles below, due to their value for operational efficiency and preparing for a more evidence-based policy’ (at 9). This raises two issues.

First, and rightly, the Communication makes clear that the PPDS moves away from a model of ‘fully open’ or ‘open by default’ procurement data, and that access to the PPDS will require differential permissioning. This is the correct approach. Regardless of the future procurement data governance framework, it is clear that the emerging thicket of EU data governance rules ‘requires the careful management of a system of multi-tiered access to different types of information at different times, by different stakeholders and under different conditions’ (see here). This will however raise significant issues for the implementation of the PPDS, as it will generate some constraints or disincentives for an ambitious implementation of eForms at national level (see below 6).

Second, and less clearly, the PPDS Communication evidences that not all Member States will automatically have equal access to PPDS data. The design seems to be such that Member States that do not feed data into PPDS will not have access to it. While this could be conceived as an incentive for all Member States to join PPDS, this outcome is by no means guaranteed. As above (3), it is not clear that Member States will be better off—in terms of their ability to extract data insights or to deploy digital technologies—by having access to pan-EU data. The main benefit resulting from pan-EU data only accrues collectively and, primarily, by means of facilitating oversight and enforcement by the European Commission. From that perspective, the incentives for PPDS participation for any given Member State may be quite warped or internally contradictory.

Moreover, given that plugging into PPDS is not cost-free, a Member State that developed a data architecture not immediately compatible with PPDS may well wonder whether it made sense to shoulder the additional costs and risks. From that perspective, it can only be hoped that the existence of EU funding and technical support will be maximised by the European Commission to offload that burden from the (reluctant) Member States. However, even then, full PPDS participation by all Member States will still not dispel the risk of multi-speed datafication.

5. No data, no fun — and multi-speed datafication

Related to the risk that some EU Member States will become PPDS members and others not, there is a risk (or rather, a reality) that not all PPDS members will equally contribute data—thus creating multi-speed datafication, even within the Member States that opt in to the PPDS.

First, the PPDS Communication makes it clear that ‘Member States will remain in control over which data they wish to share with the PPDS (beyond the data that must be published on TED under the Public Procurement Directives)’ (at 7). It further specifies that ‘With the eForms, it will be possible for the first time to provide data in notices that should not be published, or not immediately. This is important to give assurance to public buyers that certain data is not made publicly available or not before a certain point in time (e.g. prices)’ (at 7, fn 17).

This means that each Member State will only have to plug whichever data it captures and decides to share into the PPDS. It seems plain to see that this will result in different approaches to data capture, multiple levels of granularity, and varying approaches to restricting access to the data in the different Member States, especially bearing in mind that ‘eForms are not an “off the shelf” product that can be implemented only by IT developers. Instead, before developers start working, procurement policy decision-makers have to make a wide range of policy decisions on how eForms should be implemented’ in the different Member States (see eForms Implementation Handbook, at 9).

Second, the PPDS Communication is clear (in a footnote) that ‘One of the conditions for a successful establishment of the PPDS is that Member States put in place automatic data capture mechanisms, in a first step transmitting data from their national portals and contract registers’ (at 4, fn 10). This implies that Member States may need to move away from manually inputted information and that those seeking to create new mechanisms for automatic procurement data capture can take an incremental approach, which is very much baked into the PPDS design. This relates, for example, to the distinction between pre- and post-award procurement data, with pre-award data subjected to higher demands under EU law. It also relates to above and below threshold data, as only above threshold data is subjected to mandatory eForms compliance.

In the end, the extent to which a (willing) Member State will contribute data to the PPDS depends on its decisions on eForms implementation, which should be well underway given the October 2023 deadline for mandatory use (for above threshold contracts). Crucially, Member States contributing more data may feel let down when no comparable data is contributed to PPDS by other Member States, which can well operate as a disincentive to contribute any further data, rather than as an incentive for the others to match up that data.

6. Ambitious eForms implementation as the PPDS’ Achilles heel

As the analysis above has shown, the viability of the PPDS and its fitness for purpose (especially for EU-level oversight and enforcement purposes) crucially depend on the Member States deciding to take an ambitious approach to the implementation of eForms, not solely by maximising their flexibility for voluntary uses (as discussed here) but, crucially, by extending their mandatory use (under national law) to all below threshold procurement. It is now also clear that there is a need for as much homogeneity as possible in the implementation of eForms in order to guarantee that the information plugged into the PPDS is comparable—which is an aspect of data quality that the PPDS Communication does not seem to have considered at all.

It seems that, due to competing timings, this poses a bit of a problem for the rollout of the PPDS. While eForms need to be fully implemented domestically by October 2023, the PPDS Communication suggests that the connection of national portals will be a matter for 2024, as the first part of the project will concern the top two layers and data connection will follow (or, at best, be developed in parallel). Somehow, it feels like the PPDS is being built without a strong enough foundation. It would be a shame (to put it mildly) if Member States having completed a transition to eForms by October 2023 were dissuaded from a second transition into a more ambitious eForms implementation in 2024 for the purposes of the PPDS.

Given that the most likely approach to eForms implementation is rather minimalistic, it can well be that the PPDS results in not much more than an empty shell with fancy digital analytics limited to very superficial uses. In that regard, the two-year delay in progressing the PPDS has created a very narrow (and quickly dwindling) window of opportunity for Member States to engage with an ambitious process of eForms implementation.

7. Final thoughts

It seems to me that limited and slow progress will be attained under the PPDS in coming years. Given the undoubted value of harnessing procurement data, I sense that Member States will progress domestically, but primarily in specific settings such as that of their central purchasing bodies (see here). However, whether they will be onboarded into PPDS as enthusiastic members seems less likely.

The scenario seems to resemble limited voluntary cooperation in other areas (eg interoperability; for discussion see here). It may well be that the logic of EU competence allocation required this tentative step as a first move towards a more robust and proactive approach by the Commission in a few years, on grounds that the goal of creating the European data space could not be achieved through this less interventionist approach.

However, given the speed at which digital transformation could take place (and is taking place in some parts of the EU), and the rhetoric of transformation and revolution that keeps being used in this policy area, I can’t but feel let down by the approach in the PPDS Communication, which started with the decision to build the eForms on the existing regulatory framework, rather than more boldly seeking a reform of the EU procurement rules to facilitate their digital fitness.

Should FTAs open and close (or only open) international procurement markets?

I have recently had some interesting discussions on the role of Free Trade Agreements (FTAs) in liberalising procurement-related international trade. The standard analysis is that FTAs serve the purpose of reciprocally opening up procurement markets to the economic operators of the signatory parties, and that the parties negotiate them on the basis of reciprocal concessions so that the market access given by A to economic operators from B roughly equates that given by B to economic operators from A (or is offset by other concessions from B in other chapters of the FTA with an imbalance in A’s favour).

Implicitly, this analysis assumes that A’s and B’s markets are (relatively) closed to third party economic operators, and that they will remain as such. The more interesting question is, though, whether FTAs should also close procurement markets to non-signatory parties in order to (partially) protect the concessions mutually granted, as well as to put pressure for further procurement-related trade liberalisation.

Let’s imagine that A, a party with several existing FTAs with third countries covering procurement, manages to negotiate the first FTA signed by B liberalising the latter’s procurement markets. It could seem that economic operators from A would have preferential access to B’s markets over any other economic operators (other than B’s, of course).

However, it can well be that, in practice, once the protectionist boat has sailed, B decides to entertain tenders coming from economic operators from C, D … Z for, once B’s domestic industries are not protected, B’s public buyers may well want to browse through the entire catalogue offered by the world market—especially if A does not have the most advanced industry for a specific type of good, service or technology (and I have a hunch this may well be a future scenario concerning digital technologies and AI in particular).

A similar issue can well arise where B already has other FTAs covering procurement, and this generates a situation where it is difficult or complex for B’s public buyers to assess whether an economic operator from X does or does not have guaranteed market access under the existing FTAs, which can well result in B’s public buyers granting access to economic operators from any origin to avoid legal risks resulting from an incorrect analysis of the existing legal commitments (once open for some, de facto open for all).

I am sure there are more situations where the apparent preferential access given by B to A in the notional FTA can be quickly eroded despite assumptions on how international trade liberalisation operates under FTAs. This raises the question of whether A should include in its FTAs a clause binding B (and itself!) to unequal treatment (ie exclusion) of economic operators not covered by FTAs (either existing or future) or multilateral agreements. In that way, the concessions given by B to A may be more meaningful and long-lasting, or at least put pressure on third countries to participate in bilateral (and multilateral — looking at the WTO GPA) procurement-related liberalisation efforts.

In the EU’s specific case, the adoption of such requirements in its FTAs covering procurement would be aligned with the policy underlying the 2019 guidelines on third country access to procurement markets, the International Procurement Instrument, and the Foreign Subsidies Regulation.

It may be counter-intuitive that instruments of trade liberalisation should seek to close (or rather keep closed) some markets, but I think this is an issue where FTAs could be used more effectively not only to bilaterally liberalise trade, but also to generate further dynamics of trade liberalisation—or at least to avoid the erosion of bilateral commitments in situations of regulatory complexity or market dynamics pushing for ‘off-the-books’ liberalisation through the practical acceptance of tenders coming from anywhere.

This is an issue I would like to explore further after my digital tech procurement book, so I would be more than interested in thoughts and comments!

Revisiting the Fosen-Linjen Saga on threshold for procurement damages

I had the honour of being invited to contribute to a future publication to celebrate the EFTA Court’s 30th Anniversary in 2024. I was asked to revisit the Fosen-Linjen Saga on the EFTA Court’s interpretation of the threshold for liability in damages arising from breaches of EU/EEA procurement law.

The abstract of my chapter is as follows:

The 2017-2019 Fosen-Linjen Saga saw the EFTA Court issue diametrically opposed views on the threshold for damages liability arising from breaches of EEA/EU public procurement law. Despite the arguably clear position under EU law following the European Court of Justice’s 2010 Judgment in Spijker—ie that liability in damages under the Remedies Directive only arises when the breach is ‘sufficiently serious’—Fosen-Linjen I stated that a ‘simple breach of public procurement law is in itself sufficient to trigger the liability of the contracting authority’. Such an approach would have created divergence between EEA and EU procurement law and generated undesired effects on the administration of procurement procedures and excessive litigation. Moreover, Fosen-Linjen I showed significant internal and external inconsistencies, which rendered it an unsafe interpretation of the existing rules, tainted by judicial activism on the part of the EFTA Court under its then current composition. Taking the opportunity of a rare second referral, and under a different Court composition, Fosen-Linjen II U-turned and stated that the Remedies Directive ‘does not require that any breach of the rules governing public procurement in itself is sufficient to award damages’. This realigned EEA law with EU law in compliance with the uniform interpretation goal to foster legal homogeneity. This chapter revisits the Fosen-Linjen Saga and offers additional reflections on its implications, especially for a long-overdue review of the Remedies Directive.

The full chapter is available as: A Sanchez-Graells, ‘The Fosen-Linjen Saga: not so simple after all?’ in The EFTA Court and the EEA: 30 Years On (Oxford, Hart Publishing, forthcoming): https://ssrn.com/abstract=4388938.

Two roles of procurement in public sector digitalisation: gatekeeping and experimentation

In a new draft chapter for my monograph, I explore how, within the broader process of public sector digitalisation, and embroiled in the general ‘race for AI’ and ‘race for AI regulation’, public procurement has two roles. In this post, I summarise the main arguments (all sources, including for quoted materials, are available in the draft chapter).

This chapter frames the analysis in the rest of the book and will be fundamental in the review of the other drafts, so comments would be most welcome (a.sanchez-graells@bristol.ac.uk).

Public sector digitalisation is accelerating in a regulatory vacuum

Around the world, the public sector is quickly adopting digital technologies in virtually every area of its activity, including the delivery of public services. States are not solely seeking to digitalise their public sector and public services with a view to enhancing their operation (internal goal), but are also increasingly willing to use the public sector and the construction of public infrastructure as sources of funding and spaces for digital experimentation, to promote broader technological development and boost national industries in a new wave of (digital) industrial policy (external goal). For example, the European Commission clearly seeks to make the ‘public sector a trailblazer for using AI’. This mirrors similar strategic efforts around the globe. The process of public sector digitalisation is thus embroiled in the broader race for AI.

Although this dynamic of public sector digitalisation raises significant regulatory risks and challenges, well-known problems in managing uncertainty in technology regulation—ie the Collingridge dilemma or pacing problem (‘cannot effectively regulate early on, so will probably regulate too late’)—and different normative positions interact with industrial policy considerations to create regulatory hesitation and side-line anticipatory approaches. This creates a regulatory gap—or rather a laissez faire environment—whereby the public sector is allowed to experiment with the adoption of digital technologies without clear checks and balances. The current strategy is by and large one of ‘experiment first, regulate later’. And while there is little to no regulation, there is significant experimentation and digital technology adoption by the public sector.

Despite the emergence of a ‘race for AI regulation’, there are very few attempts to regulate AI use in the public sector—with the EU’s proposed EU AI Act offering a (partial) exception—and general mechanisms (such as judicial review) are proving slow to adapt. The regulatory gap is thus likely to remain, at least partially, in the foreseeable future—not least, as the effective functioning of new rules such as the EU AI Act will not be immediate.

Procurement emerges as a regulatory gatekeeper to plug that gap

In this context, proposals have started to emerge to use public procurement as a tool of digital regulation. Or, in other words, to use the acquisition of digital technologies by the public sector as a gateway to the ‘regulation by contract’ of their use and governance. Think tanks, NGOs, and academics alike have stressed that the ‘rules governing the acquisition of algorithmic systems by governments and public agencies are an important point of intervention in ensuring their accountable use’, and that procurement ‘is a central policy tool governments can deploy to catalyse innovation and influence the development of solutions aligned with government policy and society’s underlying values’. Public procurement is thus increasingly expected to play a crucial gatekeeping role in the adoption of digital technologies for public governance and the delivery of public services.

Procurement is thus seen as a mechanism of ‘regulation by contract’ whereby the public buyer can impose requirements seeking to achieve broad goals of digital regulation, such as transparency, trustworthiness, or explainability, or to operationalise more general ‘AI ethics’ frameworks. In more detail, the Council of Europe has recommended using procurement to: (i) embed requirements of data governance to avoid violations of human rights norms and discrimination stemming from faulty datasets used in the design, development, or ongoing deployment of algorithmic systems; (ii) ‘ensure that algorithmic design, development and ongoing deployment processes incorporate safety, privacy, data protection and security safeguards by design’; (iii) require ‘public, consultative and independent evaluations of the lawfulness and legitimacy of the goal that the [procured algorithmic] system intends to achieve or optimise, and its possible effects in respect of human rights’; (iv) require the conduct of human rights impact assessments; or (v) promote transparency of the ‘use, design and basic processing criteria and methods of algorithmic systems’.

Given the absence of generally applicable mandatory requirements in the development and use of digital technologies by the public sector in relation to some or all of the stated regulatory goals, the gatekeeping role of procurement in digital ‘regulation by contract’ would mostly involve the creation of such self-standing obligations—or at least the enforcement of emerging non-binding norms, such as those developed by (voluntary) standardisation bodies or, more generally, by the technology industry. In addition to creating risks of regulatory capture and commercial determination, this approach may overshadow the difficulties in using procurement for the delivery of the expected regulatory goals. A closer look at some selected putative goals of digital regulation by contract sheds light on the issue.

Procurement is not at all suited to deliver incommensurable goals of digital regulation

Some of the putative goals of digital regulation by contract are incommensurable. This is the case in particular of ‘trustworthiness’ or ‘responsibility’ in AI use in the public sector. Trustworthiness or responsibility in the adoption of AI can have several meanings, and defining what is ‘trustworthy AI’ or ‘responsible AI’ is in itself contested. This creates a risk of imprecision or generality, which could turn ‘trustworthiness’ or ‘responsibility’ into mere buzzwords—as well as exacerbate the problem of AI ethics-washing. As the EU approach to ‘trustworthy AI’ evidences, the overarching goals need to be broken down to be made operational. In the EU case, ‘trustworthiness’ is intended to cover three requirements for lawful, ethical, and robust AI. And each of them breaks down into more detailed or operationalisable requirements.

In turn, some of the goals into which ‘trustworthiness’ or ‘responsibility’ breaks down are also incommensurable. This is notably the case of ‘explainability’ or interpretability. There is no such thing as ‘the explanation’ that is required in relation to an algorithmic system, as explanations are (technically and legally) meant to serve different purposes and consequently, the design of the explainability of an AI deployment needs to take into account factors such as the timing of the explanation, its (primary) audience, the level of granularity (eg general or model level, group-based, or individual explanations), or the level of risk generated by the use of the technical solution. Moreover, there are different (and emerging) approaches to AI explainability, and their suitability may well be contingent upon the specific intended use or function of the explanation. And there are attributes or properties influencing the interpretability of a model (eg clarity) for which there are no evaluation metrics (yet?). Similar issues arise with other putative goals, such as the implementation of a principle of AI minimisation in the public sector.

Given the way procurement works, it is ill-suited for the delivery of incommensurable goals of digital regulation.

Procurement is not well suited to deliver other goals of digital regulation

There are other goals of digital regulation by contract that are seemingly better suited to delivery through procurement, such as those relating to ‘technical’ characteristics such as neutrality, interoperability, openness, or cyber security, or in relation to procurement-adjacent algorithmic transparency. However, the operationalisation of such requirements in a procurement context will be dependent on a range of considerations, such as judgements on the need to keep information confidential, judgements on the state of the art or what constitutes a proportionate and economically justified requirement, the generation of systemic effects that are hard to evaluate within the limits of a procurement procedure, or trade-offs between competing considerations. The extent to which procurement will be able to operationalise the desired goals of digital regulation will depend on its institutional embeddedness and on the suitability of procurement tools to impose specific regulatory approaches. Additional analysis conducted elsewhere (see here and here) suggests that, also in relation to these regulatory goals, the emerging approach to AI ‘regulation by contract’ cannot work well.

Procurement digitalisation offers a valuable case study

The theoretical analysis of the use of procurement as a tool of digital ‘regulation by contract’ (above) can be enriched and further developed with an in-depth case study of its practical operation in a discrete area of public sector digitalisation. To that effect, it is important to identify an area of public sector digitalisation which is primarily or solely left to ‘regulation by contract’ through procurement—to isolate it from the interaction with other tools of digital regulation (such as data protection, or sectoral regulation). It is also important for the chosen area to demonstrate a sufficient level of experimentation with digitalisation, so that the analysis is not a mere concretisation of theoretical arguments but rather grounded on empirical insights.

Public procurement is itself an area of public sector activity susceptible to digitalisation. The adoption of digital tools is seen as a potential source of improvement and efficiency in the expenditure of public funds through procurement, especially through the adoption of digital technology solutions developed in the context of supply chain management and other business operations in the private sector (or ‘ProcureTech’), but also through the adoption of digital tools tailored to the specific goals of procurement regulation, such as the prevention of corruption or collusion. There is emerging evidence of experimentation in procurement digitalisation, which is shedding light on regulatory risks and challenges.

In view of its strategic importance and the current pace of procurement digitalisation, it is submitted that procurement is an appropriate site of public sector experimentation in which to explore the shortcomings of the approach to AI ‘regulation by contract’. Procurement is an adequate case study because, being a ‘back-office’ function, it does not concern (likely) high-risk uses of AI or other digital technologies, and it is an area where data protection regulation is unlikely to provide a comprehensive regulatory framework (eg for decision automation) because the primary interactions are between public buyers and corporate institutions.

Procurement therefore currently represents an unregulated digitalisation space in which to test and further explore the effectiveness of the ‘regulation by contract’ approach to governing the transition to a new model of digital public governance.

* * * * * *

The full draft is available on SSRN as: Albert Sanchez-Graells, ‘The two roles of procurement in the transition towards digital public governance: procurement as regulatory gatekeeper and as site for public sector experimentation’ (March 10, 2023): https://ssrn.com/abstract=4384037.

Procurement centralisation, digital technologies and competition (new working paper)


I have just uploaded on SSRN the new working paper ‘Competition Implications of Procurement Digitalisation and the Procurement of Digital Technologies by Central Purchasing Bodies’, which I will present at the conference on ‘Centralization and new trends’ to be held at the University of Copenhagen on 25-26 April 2023 (there is still time to register!).

The paper builds on my ongoing research on digital technologies and procurement governance, and focuses on the interaction between the strategic goals of procurement centralisation and digitalisation set by the European Commission in its 2017 public procurement strategy.

The paper identifies different ways in which current trends of procurement digitalisation and the challenges in procuring digital technologies push for further procurement centralisation. This is in particular to facilitate the extraction of insights from big data held by central purchasing bodies (CPBs); build public sector digital capabilities; and boost procurement’s regulatory gatekeeping potential. The paper then explores the competition implications of this technology-driven push for further procurement centralisation, in both ‘standard’ and digital markets.

The paper concludes by stressing the need to bring CPBs within the remit of competition law (which I had already advocated eg here), the opportunity to consider allocating CPB data management to a separate competent body under the Data Governance Act, and the related need to develop an effective system of mandatory requirements and external oversight of public sector digitalisation processes, especially to constrain CPBs’ (unbridled) digital regulatory power.

The full working paper reference is: A Sanchez-Graells, ‘Competition Implications of Procurement Digitalisation and the Procurement of Digital Technologies by Central Purchasing Bodies’ (March 2, 2023), available at SSRN: https://ssrn.com/abstract=4376037. As always, any feedback most welcome: a.sanchez-graells@bristol.ac.uk.

Procurement tools for AI regulation by contract. Not the sharpest in the shed

I continue exploring the use of public procurement as a tool of digital regulation (or ‘AI regulation by contract’ as shorthand)—ie as a mechanism to promote transparency, explainability, cyber security, ethical and legal compliance leading to trustworthiness, etc in the adoption of digital technologies by the public sector.

After analysing procurement as a regulatory actor, a new draft chapter for my book project focuses on the procedural and substantive procurement tools that could be used for AI regulation by contract, to assess their suitability for the task.

The chapter considers whether procurement could effectively operationalise digital regulation goals without simply transferring regulatory decisions to economic operators. It stresses that preventing a transfer or delegation (ie a privatisation) of regulatory decisions through the operation of the procurement rules is crucial, as technology providers are the primary target in proposals to use procurement for digital regulation by contract. In this post, I summarise the main arguments and insights in the chapter. As always, any feedback will be most warmly received: a.sanchez-graells@bristol.ac.uk.

Background

A first general consideration is that using procurement as a tool of digital regulation requires high levels of digital and commercial skills to understand the technologies being procured and the processes influencing technological design and deployment (as objects of regulation), and the procurement rules themselves (as regulatory tools). Gaps in those capabilities will jeopardise the effectiveness of using procurement as a tool of AI regulation by contract, beyond the limitations and constraints deriving from the relevant legal framework. However, to assess the (abstract) potential of procurement as a regulatory tool, it is worth distinguishing between practical and legal challenges, and to focus on legal challenges that would be present at all levels of public buyer capability.

A second general consideration is that this use of procurement could be seen as either a tool of ‘command and control’ regulation, or a tool of responsive regulation. In that regard, while there can be some space for a ‘command and control’ use of procurement as a tool of digital regulation, in the absence of clear (rules-based) regulatory benchmarks and legally-established mandatory requirements, the responsive approach to the use of procurement as a tool to enforce self-regulatory mechanisms seems likely to be predominant—in the sense that procurement requirements are likely to focus on the tenderers’ commitment to sets of practices and processes seeking to deliver (to the largest possible extent) the relevant regulatory attributes by reference to (technical) standards.

For example, it is hard to imagine the imposition of an absolute requirement for a digital solution to be ‘digitally secure’. It is rather more plausible for the tender and contract to seek to bind the technology provider to practices and procedures seeking to ensure high levels of cyber security (by reference to some relevant metrics, where they are available), as well as protocols and mechanisms to anticipate and react to any (potential) security breaches. The same applies to other desirable regulatory attributes in the procured digital technologies, such as transparency or explainability—which will most likely be describable (or described) by reference to technical standards and procedures—or to general principles, such as ethical or trustworthy AI, also requiring proceduralised implementation. In this context, procurement could be seen as a tool to promote co-regulation or (responsible) self-regulation both at tenderer and industry level, eg in relation to the development of ethical or trustworthy AI.

Against this background, it is relevant to focus on whether procurement tools could effectively operationalise digital regulation goals without simply transferring regulatory decisions to economic operators—ie operating as an effective tool of (responsive) meta-regulation. The analysis below takes a cradle-to-grave approach and focuses on the tools available at the phases of tender preparation and design, tender execution, and contract design and implementation. The analysis is based on EU procurement law, but the functional insights are broadly transferable to other systems.

Tender preparation and design

A public buyer seeking to use procurement as a tool of digital regulation faces an unavoidable information asymmetry. To try to reduce it, the public buyer can engage in a preliminary market consultation to obtain information on eg different technologies or implementation possibilities, or to ‘market-test’ the level of regulatory demand that could be met by existing technology providers. However, safeguards to prevent the use of preliminary market consultations to advantage specific technology providers through eg disclosure of exchanged information, as well as the level of effort required to participate in (detailed) market consultations, raise questions as to their utility to extract information in markets where secrecy is valued (as is notoriously the case of digital technology markets—see discussions on algorithmic secrecy) and where economic operators may be disinclined (or not have the resources) to provide ‘free consultancy’. Moreover, in this setting and given the absence of clear standards or industry practices, there is a heightened risk of capture in the interaction between the public buyer and potential technology providers, with preliminary market consultations not being geared for broader public consultation facilitating the participation of non-market agents (eg NGOs or research institutions). Overall, then, preliminary market consultations may do little to reduce the public buyer’s information asymmetry, while creating significant risks of capture leading to impermissible (discriminatory) procurement practices. They are thus unlikely to operate as an adequate tool to support regulation by contract.

Relatedly, a public buyer facing uncertainty as to the existing off-the-shelf offering and the level of adaptation, innovation or co-production required to otherwise achieve the performance sought in the digital technology procurement, faces a difficult choice of procurement procedure. This is a sort of chicken and egg problem: the less information the public buyer has, the more difficult it is to choose an adequate procedure, but the choice of procedure has implications for the information that the public buyer can extract. While the theoretical expectation could be that the public buyer would opt for a competitive dialogue or innovation partnership, as procedures targeted at this type of procurement, evidence of EU level practice shows that public buyers have a strong preference for competitive procedures with negotiations. The use of this procedure exposes the public buyer to direct risks of commercial capture (especially where the technology provider has more resources or the upper hand in negotiations) and the safeguards foreseen in EU law (ie the setting of non-negotiable minimum requirements and award criteria) are unlikely to be effective, as public buyers have a strong incentive to avoid imposing excessively demanding minima, given the risk of cancellation and retendering if no technology provider is able (or willing) to meet them.

In addition, the above risks of commercial capture can be exacerbated when technology providers make exclusivity claims over the technological solutions offered, which could unlock the use of a negotiated procedure without prior publication—on the basis of absence of competition due to technical reasons, or due to the need to protect exclusive rights, including intellectual property rights. While the legal tests to access this negotiated procedure are in principle strict, the public buyer can have the wrong incentives to push through while at the same time controlling some of the safeguarding mechanisms (eg transparency of the award, or level of detail in the relevant disclosure). Similar issues arise with the possibility to creatively structure remuneration under some of these contracts to keep them below regulatory thresholds (eg by ‘remunerating in data’).

In general, this shows that the phase of tender preparation and design is vulnerable to risks of regulatory capture that are particularly relevant when the public buyer is expected to develop a regulatory role in disciplining the behaviour of the industry it interacts with. This indicates that existing flexible mechanisms of market engagement can be a source of regulatory risk, rather than a useful set of regulatory tools.

Tender execution

A public buyer seeking to use procurement as a tool of digital regulation could do so through the two main decisions of tenderer selection and tender evaluation. The expectation is that these are areas where the public buyer can exercise elements of ‘command and control’, eg through tenderer exclusion decisions as well as by setting demanding qualitative selection thresholds, or through the setting of mandatory technical specifications and the use of award constraints.

Tenderer selection

The public buyer could take a dual approach. First, to exclude technology providers with a previous track record of activity falling short of the relevant regulatory goals. Second, to incentivise or recompense high levels of positive commitment to the regulatory goals. However, both approaches present challenges.

First, the use of exclusion grounds would require clearly setting out in the tender documentation which types of digital-governance activities are considered to amount to ‘grave professional misconduct, which renders [the technology provider’s] integrity questionable’, and to reserve the possibility to exclude on grounds of ‘poor past performance’ linked to digital regulation obligations. In the absence of generally accepted standards of conduct and industry practices, and in a context of technological uncertainty, making this type of determination can be difficult. Especially if the previous instance of ‘untrustworthy’ behaviour is being litigated or could (partially) be attributed to the public buyer under the previous contract. Moreover, a public buyer cannot automatically rely on the findings of another one, as the current EU rules require each contracting authority to come to its own view on the reliability of the economic operator. This raises the burden of engaging with exclusion based on these grounds, which may put some public buyers off, especially if there are complex technical questions in the background. Such judgments may require a level of expertise and available resources exceeding those of the public buyer, which could eg justify seeking to rely on third party certification instead.

Relatedly, it will be difficult to administer such tenderer screening through systems such as lists of approved contractors or third-party certification (or equivalent mechanisms, such as dynamic purchasing systems administered by a central purchasing body, or quality assurance certification). In all cases, the practical difficulty will be that the public buyer will either see its regulatory function conditioned or precluded by the (commercially determined) standards underlying third-party certification, or face a significant burden if it seeks to directly scrutinise economic operators otherwise. The regulatory burden will to some extent be unavoidable because all the above-mentioned mechanisms foresee that (in some circumstances) economic operators that do not have access to the relevant certification or are under no obligation to register in the relevant list must be given the opportunity to demonstrate that they meet the relevant (substantive) qualitative selection criteria by other (equivalent) means.

There will also be additional challenges in ensuring that the relevant vetting of economic operators is properly applied where the digital technology solution relies on a long (technical) supply chain or assemblage, without this necessarily involving any (formal) relationship or subcontracting between the technology provider to be contracted and the developers of parts of the technical assemblage. This points at the significant burden that the public buyer may have to overcome in seeking to use qualitative selection rules to ‘weed out’ technology providers whose (general, or past) behaviour is not aligned with the overarching regulatory goals.

Second, a more proactive approach that sought to go beyond exclusion or third-party certification to eg promote adherence to voluntary codes of conduct, or to require technology providers to justify how they eg generally ‘contribute to the development and deployment of trustworthy digital technologies’, would also face significant difficulties. Such requirements could be seen as unjustified and/or disproportionate, leading to an infringement of EU procurement law. They could also be altogether pre-empted by future legislation, such as the proposed EU AI Act.

Tender evaluation

As mentioned above, the possibility of setting demanding technical specifications and minimum requirements for tender evaluation through award constraints seems, in principle, a suitable tool of digital regulation. The public buyer could focus on embedding the desired regulatory attributes (eg transparency, explainability, cyber security) and regulatory checks (on data and technology governance, eg in relation to open source code or interoperability, as well as in relation to ethical assessments) in the technical specifications. Award criteria could generate (further) incentives for regulatory performance, perhaps beyond the minimum mandatory baseline. However, this is far from uncomplicated.

The primary difficulty in using technical specifications as a regulatory tool relates to the challenge of clearly specifying the desired regulatory attributes. Some or most of the desired technological attributes are difficult to observe or measure; the processes leading to their promotion are not easy to establish; and the outcomes of those processes are not binary, so that determining whether a requirement has been met cannot be subject to strict rules, but rather to (yet to be developed) technical standards with an unavoidable degree of indefinition, which may also be susceptible of iterative application in eg agile methods, and thus difficult to evaluate at tender stage. Moreover, the desired attributes can be in conflict between themselves and/or with the main functional specifications for the digital technology deployment (eg the increasingly clear unavoidable trade-off between explainability and accuracy in some AI technologies). This issue of the definitional difficulties and the incommensurability of some or most of the regulatory goals also relates to the difficulty of establishing minimum technical requirements as an award constraint—eg to require that no contract is awarded unless the tender reaches a specific threshold in the technical evaluation in relation to all or selected requirements (eg explainability). While imposing minimum technical requirements is permitted, it is difficult to design a mechanism to quantify or objectify the evaluation of some of the desired technological attributes, which will necessarily require a complex assessment. Such assessment cannot be conducted in such a way that the public buyer has an unrestricted freedom of choice, which will require clarifying the criteria and the relevant thresholds that would justify rejecting the tender. This could become a significant sticking point.

Designing technical specifications to capture whether a digital technology is ‘ethical’ or ‘trustworthy’ seems particularly challenging. These are meta-attributes or characteristics that refer to a rather broad set of principles in the design of the technology, but also of its specific deployment, and tend to proceduralise the taking into account of relevant considerations (eg which impact the deployment will have on the population affected). Additionally, in some respects, the extent to which a technological deployment will be ethical or trustworthy is out of the hands of the technology provider (eg it may depend on decisions of the entity adopting the technology, such as how it is used), and in some aspects it depends on specific decisions and choices made during contract implementation. This could make it impossible to verify at the point of the tender whether the end result will or will not meet the relevant requirements—while including requirements that cannot be effectively verified prior to award would most likely breach current legal limits.

A final relevant consideration is that technical specifications cannot be imposed in a prescriptive manner, with technology providers having to be allowed to demonstrate compliance by equivalence. This limits the potential prescriptiveness of the technical specifications that can be developed by the public buyer, at least in relation to some of the desired technological attributes, which will always be constrained by their nature of standards rather than rules (or metrics) and the duty to consider equivalent modes of compliance. This erodes the practical scope of using technical specifications as regulatory instruments.

Relatedly, the difficulties in using award criteria to pursue regulatory goals stem from difficulties in the operationalisation of qualitative criteria in practice. First, there is a set of requirements on the formulation of award criteria that seek to avoid situations of unrestricted freedom of choice for the public buyer. These requirements demand a high level of objectivity, including in the structuring of award criteria of a subjective nature. In that regard, in order to guarantee an objective comparison and to eliminate the risk of arbitrary treatment, recent case law has been clear that award criteria intended to measure the quality of the tenders must be accompanied by indications which allow a sufficiently concrete comparative assessment between tenders, especially where quality carries most of the points that may be allocated for the purposes of awarding the tender.

In part, the problem stems from the absence of clear standards or benchmarks to be followed in such an assessment, as well as the need to ensure the possibility of alternative compliance (eg with labels). This can be seen, for example, in relation to explainability. It would not suffice to establish that the solutions need to be explainable or to use explainability as an award criterion without more. It would be necessary to establish sub-criteria, such as eg ‘the solution needs to ensure that an individualised explanation for every output is generated’ (ie requiring local explainability rather than general explainability of the model). This would still need to be further specified, as to what type of explanation, containing which information, etc. The difficulty is that there are multiple approaches to local explainability and that most of them are contested, as is the general approach to post hoc explanations in itself. This puts the public buyer in the position of having to solve complex technical and other principled issues in relation to this award criterion alone. In the absence of standard methodologies, this is a tall order that can well render the procedure unviable or deter its use (with clear parallels to eg the low uptake of life-cycle costing approaches). However, the development of such methodologies parallels the issues concerning the development of technical standards. Once more, when such standards, benchmarks or methodologies emerge, reliance on them can thus (re)introduce risks of commercial determination, depending on how they are set.

Contract design and implementation

Given the difficulties in using qualitative selection, technical specifications and award criteria to embed regulatory requirements, it is possible that they are pushed to the design of the contract and, in particular, to their treatment as contract performance conditions: either to create procedural obligations seeking to maximise attainment of the relevant regulatory goals during contract implementation (eg specific obligations to test, audit or upgrade the technological solution in relation to specific regulatory goals, with cyber security being a relatively straightforward one), or to pass on, ‘back-to-back’, mandatory obligations where they result from legislation (eg to impose transparency obligations, along the lines of the model standard clauses for AI procurement being developed at EU level).

In addition to the difficulty inherent in designing the relevant mechanisms of contractualised governance, a relevant limitation of this approach to embedding (self-standing) regulatory requirements in contract compliance clauses is that recent case law has made clear that ‘compliance with the conditions for the performance of a contract is not to be assessed when a contract is awarded’. Therefore, at award stage, all that can be asked is for technology providers to commit to such requirements as (future) contractual obligations—which creates the risk of awarding the contract to the best liar.

More generally, the effectiveness of contract performance clauses will depend on the contractual remedies attached to them and, in relation to some of the desirable attributes of the technologies, it can well be that there are no adequate contractual remedies or that the potential damages are disproportionate to the value of the contract. There will be difficulties in their use where obligations can be difficult to specify, where negative outputs and effects are difficult to observe or can only be observed with delay, and where contractual remedies are inadequate. It should be stressed that the embedding of regulatory requirements as contract performance clauses can have the effect of converting non-compliance into (mere) money claims against the technology provider. And, additionally, that contractual termination can be complicated or require a significant delay where the technological deployment has created operational dependency that cannot be mitigated in the short or medium term. This does not seem necessarily aligned with the regulatory gatekeeping role expected of procurement, as it can be difficult to create the adequate financial incentives to promote compliance with the overarching regulatory goals in this way—by contrast with, for example, the possibility of sanctions imposed by an independent regulator.

Conclusion

The analysis has stressed those areas where the existing rules prevent the imposition of rigid regulatory requirements or demands for compliance with pre-specified standards (to the exclusion of alternative ones), and those areas where the flexibility of the rules generates heightened risks of regulatory capture and commercial determination of the regulatory standards. Overall, this shows that it is either not easy or not at all possible to use procurement tools to embed regulatory requirements in the tender procedure and in public contracts, or that those tools are highly likely to end up being a conduit for the direct or indirect application of commercially determined standards and industry practices.

This supports the claim that using procurement for digital regulation purposes will either be highly ineffective or, counterintuitively, put the public buyer in a position of rule-taker rather than rule-setter and market-shaper—or perhaps both. In the absence of non-industry led standards and requirements formulated eg by an independent regulator, on which procurement tools could be leveraged, each public buyer would either have to discharge a high (and possibly excessive) regulatory burden, or be exposed to commercial capture. This provides the basis for an alternative approach. The next step in the research project will thus be to focus on such mandatory requirements as part of a broader proposal for external oversight of the adoption of digital technologies by the public sector.

Procurement conferences & webinars: dates for the diary

Before your agenda fills up for the coming Spring and Summer, consider putting the following dates on your diary. These are all events where I will be participating. It would be lovely to have a chance to meet (again).

25-26 April 2023 - Public Procurement Conference – Centralization and new trends. Organised by Prof Carina Risvig Hamer and held at the Law Faculty of the University of Copenhagen. It promises to provide two full days of discussions on emerging and challenging procurement governance issues.

27 April 2023 - PhD Conference in Public Procurement & Competition Law. Also organised by Prof Carina Risvig Hamer and Magdalena Socha, and held at the Law Faculty of the University of Copenhagen. A good opportunity for PhD students to present work-in-progress and receive feedback, and for everyone to get a sense of where emerging research is heading.

23 May 2023 - Can Procurement Be Used to Effectively Regulate AI? [Webinar online] 2pm UK / 3pm CET / 9am EST. This will be a panel discussion co-organised by the University of Bristol Law School and The George Washington University Law School, as part of my current research project on digital technologies and procurement governance [further details to be announced soon].

4 July 2023 - AI and Public Governance Commercialisation: What Role for Public Procurement? [Public lecture, in person]. Bristol, UK 2pm (followed by coffee and cake reception). This will be a lecture to mark the end of my research project, where I will pick out some of the main themes and findings [recording available online thereafter].

Micro-purchases as political football? -- some thoughts on the UK's GPC files and needed regulatory reform

The issue of public micro-purchases has just gained political salience in the UK. The opposition Labour party has launched a dedicated website and an aggressive media campaign calling citizens to scrutinise the use of government procurement cards (GPCs). The analysis revealed so far and the political spin being put on it question the current government’s wastefulness and whether ‘lavish’ GPC expenses are adequate and commensurate with the cost of living crisis and other social pressures. Whether this will yield the political results Labour hopes for is anybody’s guess (I am sceptical), but this is an opportunity to revisit GPC regulation and to action long-standing National Audit Office recommendations on transparency and controls, as well as to reconsider the interaction between GPCs and procurement vehicles based on data analysis. The political football around the frugality expected of a government in times of economic crisis should not obscure the clear need to strengthen GPC regulation in the UK.

Background

GPCs are debit or credit cards that allow government officials to pay vendors directly. In the UK, their issue is facilitated by a framework agreement run by the Crown Commercial Service. These cards are presented as a means to accelerate payment to public vendors (see eg current UK policy). However, their regulatory importance goes beyond their providing an (agile) means of payment, as they generate the risk of public purchases bypassing procurement procedures. If a public official can simply interact with a vendor of their choice and ‘put it on the card’, this can be a way to funnel public funds and engage in direct awards outside procurement procedures. There is thus a clear difference between the use of GPCs within procurement transactions (eg to pay for call-offs within a pre-existing framework agreement) and their use instead of procurement transactions (eg a public official buying something off their preferred online retailer and paying with a card).

Uses within procurement seem rather uncontroversial and the specific mechanism used to pay invoices should be driven by administrative efficiency considerations. There are also good reasons for (some) government officials to hold a GPC to cover the types of expenses that are difficult to procure (eg those linked to foreign travel, or unavoidably ‘spontaneous’ expenses, such as those relating to hospitality). In those cases, GPCs substitute either for the need to provide officials with cash advances (and thus create much sounder mechanisms to control the expenditure, as well as avoiding the circulation of cash with its attendant corruption and other risks), or for forcing them to pay in advance from their own pockets and then claim reimbursement (which can put many a public sector worker in financial difficulties, as eg academics know all too well).

The crucial issue then becomes how to control the expenditure under the GPCs and how to impose limits that prevent the bypassing of procurement rules and existing mechanisms. From this perspective, procurement cards are not a new phenomenon at all, and the challenges they pose from a procurement and government contracting perspective have long been understood and discussed—see eg Steven L Schooner and Neil S Whiteman, ‘Purchase Cards and Micro-Purchases: Sacrificing Traditional United States Procurement Policies at the Altar of Efficiency’ (2000) 9 Public Procurement Law Review 148. The UK’s National Audit Office (NAO) also carried out an in-depth investigation and published a report on the issue in 2012.

The regulatory and academic recommendations seeking to ensure probity and value for money in the use of GPCs as a (procurement) mechanism generally address three issues: (1) limits on expenditure, (2) (internal) expenditure control, and (3) expenditure transparency. I would add a fourth issue, which relates to (4) bypassing existing (or easy to set up) procurement frameworks. It is worth noting that the GPC files report provides useful information on each of these issues, all of which require rethinking in the context of the UK’s current process of reforming procurement law.

Expenditure limits

The GPC files show that there are three relevant value thresholds: the threshold triggering expenditure transparency (currently £500), the maximum single transaction limit (currently £20,000, raised from the pre-pandemic £10,000), and the maximum monthly expenditure (currently £100,000, raised from pre-pandemic limits where they were lower). It is worth assessing these limits from the perspective of their interaction with procurement rules, as well as broader considerations.

The first consideration is that the £500 threshold triggering expenditure transparency has remained fixed since 2011. Given cumulative inflation of close to 30% over the period 2011-2022, the threshold has steadily fallen in real terms. This should make us reconsider the relevance of some of the findings in the GPC files. Eg the fact that, within its scope, there were ‘65,824 transactions above £500 in 2021, compared to 35,335 in 2010-11’ is not very helpful. This raises questions on the adequacy of having a (fixed) threshold below which expenditure is not published. While the NAO was reluctant to recommend full transparency in 2011, the administrative burden of providing such transparency has fallen massively in the intervening period, so this may be the time to scrap the transparency threshold. As discussed below, however, this does not mean that the information should be immediately published as open data.
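The real-terms erosion of the fixed threshold can be illustrated with a quick back-of-the-envelope calculation. This is only a sketch: the ~30% cumulative inflation figure is the one cited in the text, not an exact index value.

```python
# Back-of-the-envelope illustration of the real-terms erosion of a fixed
# £500 transparency threshold, assuming ~30% cumulative inflation over
# 2011-2022 (the approximate figure cited in the post).
NOMINAL_THRESHOLD = 500.00           # GBP, fixed since 2011
CUMULATIVE_INFLATION = 0.30          # approx. 2011-2022

# What £500 of 2022 spending represents in 2011 prices:
real_value_2011 = NOMINAL_THRESHOLD / (1 + CUMULATIVE_INFLATION)

# What the 2022 threshold would need to be to match £500 in 2011 prices:
inflation_adjusted = NOMINAL_THRESHOLD * (1 + CUMULATIVE_INFLATION)

print(f"£500 today is worth about £{real_value_2011:.2f} in 2011 prices")
print(f"A 2022 equivalent of the 2011 threshold would be about £{inflation_adjusted:.2f}")
```

In other words, under these assumptions the fixed threshold now captures transactions down to roughly £385 in 2011 purchasing power, which partly explains the growth in the number of reported transactions.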

The single transaction limit is the one with the most relevance from a procurement perspective. If a public official can use a GPC for a value exceeding the threshold of regulated procurement, then the rules are not well aligned and there is a clear regulatory risk. Under current UK law, central government contracts with a value above £12,000 must be advertised. This would be kept as the general rule in the Procurement Bill (clause 86(4)), unless there are further amendments prior to its entry into force. This evidences a clear regulatory risk of bypassing procurement (advertising) obligations through GPC use. The single transaction limit should be brought back to pre-pandemic levels (£10,000) or, at least, to the value threshold triggering procurement obligations (£12,000).

The maximum monthly expenditure should be reassessed from an (internal) control perspective (as below), but the need to ensure that GPCs cannot be used to fraction (above threshold) direct awards over short periods of time should also be taken into consideration. From that perspective, it would be worth ensuring that a card holder cannot spend more than eg £138,760 in a given category of goods or services per month (which is the relevant threshold under both current rules and the foreseen Procurement Bill). Current data analytics in basic banking applications should facilitate such classification and limitation.
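The kind of per-category monthly check envisaged here is straightforward to implement on the transaction data card issuers already hold. The sketch below is purely illustrative: the £138,760 cap is the services threshold cited in the text, while the function name, category labels and data structure are hypothetical stand-ins for real card transaction feeds.

```python
from collections import defaultdict

# Hypothetical per-category monthly cap, aligned with the threshold cited
# in the text (£138,760 under current rules and the Procurement Bill).
CATEGORY_MONTHLY_CAP = 138_760.00

def flag_over_cap(transactions):
    """Sum one cardholder's monthly spend per category and flag breaches.

    `transactions` is an iterable of (category, amount) tuples for a single
    cardholder in a single month -- a stand-in for real card data.
    Returns a dict of {category: total} for categories exceeding the cap.
    """
    totals = defaultdict(float)
    for category, amount in transactions:
        totals[category] += amount
    return {cat: total for cat, total in totals.items()
            if total > CATEGORY_MONTHLY_CAP}

# Example month: aggregated "IT services" spend breaches the cap even
# though no single transaction does -- the fractioning risk in the text.
month = [("IT services", 90_000.00), ("IT services", 60_000.00),
         ("travel", 4_500.00)]
print(flag_over_cap(month))  # flags IT services at £150,000
```

The point of the example is that the control has to aggregate across transactions and categories, since fractioning by definition keeps each individual payment below the single transaction limit.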

(Internal) expenditure controls

The GPC files raise questions not only on the robustness of internal controls, but also on the accounting underpinning them (see pp 11-12). Most importantly, there seems to be no meaningful internal post-expenditure control to check for accounting problems or suspected fraudulent use, or no willingness to disclose how any such mechanisms operate. This creates expenditure control opacity that can point to a big governance gap. Expenditure controls should not only apply at the point of deciding whom to authorise to hold and use a GPC and up to which expenditure limit, but also (and perhaps more importantly) to how expenditure is being carried out. From a regulatory theory perspective, it is very clear that the use of GPCs is framed under an agency relationship and it is very important to continuously signal to the agent that the principal is monitoring the use of the card and that there are serious (criminal) consequences to misuse. As things stand, it seems that ex post internal controls may operate in some departments (eg those that report recovery of inappropriately used funds) but not (effectively) in others. This requires urgent review of the mechanisms of pre- and post-expenditure control. An update of the 2012 NAO report seems necessary.

Expenditure transparency

The GPC files (pp 10-11) show clear problems in the implementation of the policy of disclosing all expenditure in transactions exceeding £500, which should be published monthly, 2 months in arrears, despite (relatively clear) guidance to that effect. In addition to facilitating the suppression of the transparency threshold, developments in the collection and publication of open data should also facilitate the rollout of a clear plan to ensure effective publication without the gaps identified in the GPC files (and other problems in practice). However, this is also a good time to carefully consider the purpose of these publications and the need to harmonise them with the publication of other procurement information.

There are conflicting issues at hand. First, the current policy of publishing 2 months in arrears does not seem justified in relation to some qualified users of that information, such as those with an oversight role (or fraud investigation powers). Second, in relation to the general public, publication in full of all details may not be adequate within that time period in some cases, and the publication of some information may not be appropriate at all. There are, of course, intermediate situations, such as data access for journalists or research academics. In relation to this data, as well as all procurement data, this is an opportunity to create a sophisticated data-management architecture that can handle multi-tiered access to different types of information at different times, by different stakeholders and under different conditions (see here and here).

Bypassing procurement frameworks

A final consideration is that the GPC files evidence a risk that GPCs may be used in ways that bypass existing procurement frameworks, or in ways that would require setting up new frameworks (or other types of procurement vehicle, such as dynamic purchasing systems). The use of GPCs to buy goods off Amazon is the clearest example (see pp 24-25), as there is nothing in the functioning of Amazon that could not be replicated through pre-procured frameworks supported by electronic catalogues. In that regard, GPC data should be used to establish the (administrative) efficiency of creating (new) frameworks and to inform product (and service) selection for inclusion therein. There should also be a clear prohibition of using GPCs outside existing frameworks unless better value for money for identical products can be documented, in which case this should also be reported to the entity running the relevant framework (presumably, the Crown Commercial Service) for review.

Conclusion

In addition to discussions about the type and level of expenditure that (high-ranking) public officials should be authorised to incur as a political and policy matter, there is clearly a need and opportunity to engage in serious discussions on the tightening of the regulation of GPCs in the UK, and these should be coordinated with the passage of the Procurement Bill through the House of Commons. I have identified the following areas for action:

  • Suppression of the value threshold triggering transparency of specific transactions, so that all transactions are subjected to reporting.

  • Coordination of the single transaction threshold with that triggering procurement obligations for central government (which is to also apply to local and other contracting authorities).

  • Coordination of the maximum monthly spend limit with the threshold for international advertising of contract opportunities, so that no public official can spend more than the relevant amount in a given category of goods or services per month.

  • Launch of a new investigation and report by NAO on the existing mechanisms of pre- and post-expenditure control.

  • Creation of a sophisticated data-management architecture that can handle multi-tiered access to different types of information at different times, by different stakeholders and under different conditions. This needs to happen in parallel or jointly with proposals under the Procurement Bill.

  • There should also be a clear prohibition of using GPCs outside existing frameworks unless better value for money for identical products can be documented. GPC data should be used to inform the creation and management of procurement frameworks and other commercial vehicles.

Regulating public and private interactions in public sector digitalisation through procurement

As discussed in previous entries in this blog (see here, here, here, here or here), public procurement is progressively being erected as the gatekeeper of the public interest in the process of digital technology adoption by the public sector, and thus positioned as digital technology regulator—especially in the EU and UK context.

In this gatekeeping role, procurement is expected to ensure that the public sector only acquires and adopts trustworthy technologies, and that (private) technology providers adhere to adequate technical, legal, and ethical standards to ensure that this is the case. Procurement is also expected to operate as a lever for the propagation of (soft) regulatory tools, such as independently set technical standards or codes of conduct, to promote their adoption and harness market dynamics to generate effects beyond the public sector (ie market-shaping). Even further, where such standards are not readily available or independently set, the procurement function is expected to formulate specific (contractual) requirements to ensure compliance with the overarching regulatory goals identified at higher levels of policymaking. The procurement function is thus expected to leverage the design of public tenders and public contracts as tools of digital technology regulation to plug the regulatory gap resulting from the absence of binding (legal) requirements. This is a tall order.

Analysing this gatekeeping role and whether procurement can adequately perform it is the focus of the last part of my current research project. In this latest draft book chapter, I focus on an analysis of the procurement function as a regulatory actor. The following chapter will focus on an analysis of procurement rules on the design of tender procedures and some elements of contractual design as regulatory tools. Combined, the analyses will shed light on the unsuitability of procurement to carry out this gatekeeping role in the absence of minimum mandatory requirements and external oversight, which will also be explored in detail in later chapters. This draft book chapter is giving me a bit of a hard time and some of the ideas there are still slightly tentative, so I would more than ever welcome any and all feedback.

In ‘Regulating public and private interactions in public sector digitalisation through procurement: the clash between agency and gatekeeping logics’, my main argument is that the proposals to leverage procurement to regulate public sector digitalisation, which seek to use public sector market power and its gatekeeping role to enforce standards of technological regulation by embedding them in public contracts, are bound to generate significant dysfunction due to a break in regulatory logic. That regulatory logic results from an analysis of the procurement function from an agency theory and a gatekeeping theory perspective, which in my view evidence the impossibility for procurement to carry out conflicting roles. To support this claim, I explore: 1) the position of the procurement function amongst the public and private actors involved in public sector digitalisation; 2) the governance implications of the procurement function’s institutional embeddedness; and 3) the likely (in)effectiveness of public contracts in disciplining private and public behaviour, as well as behaviour that is mutually influenced or coproduced by public and private actors during the execution of public contracts.

My analysis finds that, in the regulation of public-private interactions, the regulatory logic underpinning procurement is premised on the existence of a vertical relationship between the public buyer and (potential) technology providers and an expectation of superiority of the public buyer, which is thus (expected to be) able to dictate the terms of the market interaction (through tender requirements), to operate as gatekeeper (eg by excluding potential providers that fall short of pre-specified standards), and to dictate the terms of the future contract (eg through contract performance clauses with a regulatory component). This regulatory logic hits obvious limitations when the public buyer faces potential providers with market power, an insufficient offer of (regulated) goods and services, or significant information asymmetries, which result in a potential ‘weak public buyer’ problem. Attempts to address this problem have generally relied on procurement centralisation and upskilling of the (centralised) procurement workforce, but those measures create additional governance challenges (especially centralisation) and are unlikely to fully re-establish the balance of power required for the effective regulation by contract of public sector digitalisation, as far as the provider side is concerned.

Parking the ‘weak public buyer’ problem, my analysis then focuses on the regulation of public-public interactions between the adopting public sector entity and the procurement function. I separate them for the purposes of the analysis, to point out that, at a theoretical level, there is a tension between the expectations of agency and gatekeeping theories in this context. While both of them conceptualise the relationship as vertical, they operate on an opposite understanding of who holds the predominant position. Under agency theory, the public buyer is the agent and thus subject to the instructions of the public entity that will ultimately adopt the digital technology. Conversely, under gatekeeping theory, the public buyer is the (independent) guarantor of a set of goals or attributes in public sector digitalisation projects and is thus tasked with ensuring compliance therewith. This would place the public buyer in a position of (functional) superiority, in that it would (be expected to) be able to dictate (some of) the terms of the technological adoption. This conflict in regulatory logics creates a structural conflict of interest for the procurement function as both agent and gatekeeper.

The analysis then focuses on how the institutional embeddedness of procurement exacerbates this problem. Where the procurement function is embedded in the same administrative unit or entity that is seeking to adopt the technology, it is subjected to hierarchical governance and thus lacks the independence required to carry out the gatekeeping role. Similarly, where the procurement function is separate (eg in the case of centralised or collaborative procurement), in the absence of mandatory requirements (eg to use the centralised procurement vehicle), the adopting public entity retains discretion whether to subject itself to the (gatekeeper) procurement function or to carry out its own procurement. Moreover, even when it uses centralised procurement vehicles, it tends to retain discretion (eg on the terms of mini-competitions or for the negotiation of some contractual clauses), which also erodes the position of the procurement function to effectively carry out its gatekeeping role.

On the whole, the procurement function is not in a good position to discipline the behaviour of the adopting public entity and this creates another major obstacle to the effectiveness of the proposed approach to the regulation by contract of public sector digitalisation. This is exacerbated by the fact that the adopting public entity will be the principal of the regulatory contract with the (chosen) technology provider, which means that the contractual mechanisms designed to enforce regulatory goals will be left to interpretation and enforcement by those actors whose behaviour it seeks to govern.

In such decentred interactions, procurement lacks any meaningful means to challenge deviations from the contract that are in the mutual interest of both the adopting entity and the technology provider. The emerging approach to regulation by contract cannot properly function where the adopting public entity is not entirely committed to maximising the goals of digital regulation that are meant to be enforced by contract, and where the public contractor has a concurring interest in deviating from those goals by reducing the demands of the relevant contractual clauses. In the setting of digital technology regulation, this seems a likely common case, especially if we consider that the main regulatory goals (eg explainability, trustworthiness) are open-ended, so the question is not whether the goals in themselves are embraced in abstracto by the adopting entity and the technology provider, but the extent to which effective (and costly or limiting) measures are put in place to maximise the realisation of those goals. In this context, (relational) contracts seem inadequate to prevent behaviour (eg shirking) that is in the mutual interest of the contractual parties.

This generates what I label as a ‘two-sided gatekeeping’ challenge. This challenge encapsulates the difficulties for the procurement function to effectively influence regulatory outcomes where it needs to discipline both the behaviour of technology providers and adopting entities, and where contract implementation depends on the decentred interaction of those two agents with the procurement function as a (toothless) bystander.

Overall, then, the analysis shows that agency and gatekeeping theory point towards a dysfunction in the leveraging of procurement to regulate public sector digitalisation by contract. There are two main points of tension or rupture with the regulatory logic. First, the regulatory approach cannot effectively operate in the absence of a clear set of mandatory requirements to bind the discretion of the procurement function during the tendering and contract formation phase, as well as the discretion of the adopting public entity during the contract implementation phase, and which are also enforceable on the technology provider regardless of the terms of the contract. Second, the regulatory approach cannot effectively operate in the absence of an independent actor capable of enforcing those standards and monitoring continuous compliance during the lifecycle of technological adoption and use by the public sector entity. As things stand, the procurement function is affected by structural and irresolvable conflicts between its overlaid roles. Moreover, even if the procurement function were not caught by the conflicting logics and requirements of agency and gatekeeping (eg as a result of the adoption of the mandatory requirements mentioned above), it would still not be in an adequate position to monitor and discipline the behaviour of the adopting public entity—and, relatedly, of the technology provider—after the conclusion of the procurement phase.

The regulatory analysis thus points to the need to discharge the procurement function from its newest gatekeeping role, to realign it with agency theory as appropriate. This would require both the enactment of mandatory requirements and the subjection to external oversight of the process of technological adoption by the public sector. This same conclusion will be further supported by an analysis of the limitations of procurement law to effectively operate as a regulatory tool, which will be the focus of the next chapter in the book.

Some further thoughts on setting procurement up to fail in 'AI regulation by contract'

The next bit of my research project concerns the leveraging of procurement to achieve ‘AI regulation by contract’ (ie to ensure in the use of AI by the public sector: trustworthiness, safety, explainability, human rights compliance, legality especially in data protection terms, ethical use, etc), so I have been thinking about it for the last few weeks to build on my previous views (see here).

In this post, I summarise my further thoughts — which have been prompted by the rich submissions to the House of Commons Science and Technology Committee [ongoing] inquiry on the ‘Governance of Artificial Intelligence’.

Let’s do it via procurement

As a starting point, it is worth stressing that the (perhaps unsurprising) increasingly generalised position is that procurement has a key role to play in regulating the adoption of digital technologies (and AI in particular) by the public sector—which consolidates procurement’s gatekeeping role in this regulatory space (see here).

More precisely, the generalised view is not that procurement ought to play such a role, but that it can do so (effectively and meaningfully). ‘AI regulation by contract’ via procurement is seen as an (easily?) actionable policy and governance mechanism despite the more generalised reluctance and difficulties in regulating AI through general legislative and policy measures, and in creating adequate governance architectures (more below).

This is very clear in several submissions to the ongoing Parliamentary inquiry (above). Without seeking to be exhaustive (I have read most, but not all submissions yet), the following points have been made in written submissions (liberally grouped by topics):

Procurement as (soft) AI regulation by contract & ‘Market leadership’

  • ‘Procurement processes can act as a form of soft regulation … Government should use its purchasing power in the market to set procurement requirements that ensure private companies developing AI for the public sector address public standards’ (Committee on Standards in Public Life, at [25]-[26], emphasis added).

  • ‘For public sector AI projects, two specific strategies could be adopted [to regulate AI use]. The first … is the use of strategic procurement. This approach utilises government funding to drive change in how AI is built and implemented, which can lead to positive spill-over effects in the industry’ (Oxford Internet Institute, at 5, emphasis added).

  • ‘Responsible AI Licences (“RAILs”) utilise the well-established mechanisms of software and technology licensing to promote self-governance within the AI sector. RAILs allow developers, researchers, and companies to publish AI innovations while specifying restrictions on the use of source code, data, and models. These restrictions can refer to high-level restrictions (e.g., prohibiting uses that would discriminate against any individual) as well as application-specific restrictions (e.g., prohibiting the use of a facial recognition system without consent) … The adoption of such licenses for AI systems funded by public procurement and publicly-funded AI research will help support a pro-innovation culture that acknowledges the unique governance challenges posed by emerging AI technologies’ (Trustworthy Autonomous Systems Hub, at 4, emphasis added).

Procurement and AI explainability

  • ‘public bodies will need to consider explainability in the early stages of AI design and development, and during the procurement process, where requirements for transparency could be stipulated in tenders and contracts’ (Committee on Standards in Public Life, at [17], emphasis added).

  • ‘In the absence of strong regulations, the public sector may use strategic procurement to promote equitable and transparent AI … mandating various criteria in procurement announcements and specifying design criteria, including explainability and interpretability requirements. In addition, clear documentation on the function of a proposed AI system, the data used and an explanation of how it works can help. Beyond this, an approved vendor list for AI procurement in the public sector is useful, to which vendors that agree to meet the defined transparency and explainability requirements may be added’ (Oxford Internet Institute, at 2, referring to K McBride et al (2021) ‘Towards a Systematic Understanding on the Challenges of Procuring Artificial Intelligence in the Public Sector’, emphasis added).

Procurement and AI ethics

  • ‘For example, procurement processes should be designed so products and services that facilitate high standards are preferred and companies that prioritise ethical practices are rewarded. As part of the commissioning process, the government should set out the ethical principles expected of companies providing AI services to the public sector. Adherence to ethical standards should be given an appropriate weighting as part of the evaluation process, and companies that show a commitment to them should be scored more highly than those that do not’ (Committee on Standards in Public Life, at [26], emphasis added).

Procurement and algorithmic transparency

  • ‘… unlike public bodies, the private sector is not bound by the same safeguards – such as the Public Sector Equality Duty within the Equality Act 2010 (EA) – and is able to shield itself from criticisms regarding transparency behind the veil of ‘commercial sensitivity’. In addition to considering the private company’s purpose, AI governance itself must cover the private as well as public sphere, and be regulated to the same, if not a higher standard. This could include strict procurement rules – for example that private companies need to release certain information to the end user/public, and independent auditing of AI systems’ (Liberty, at [20]).

  • ‘… it is important that public sector agencies are duly empowered to inspect the technologies they’re procuring and are not prevented from doing so by the intellectual property rights. Public sector buyers should use their purchasing power to demand access to suppliers’ systems to test and prove their claims about, for example, accuracy and bias’ (BILETA, at 6).

Procurement and technical standards

  • ‘Standards hold an important role in any potential regulatory regime for AI. Standards have the potential to improve transparency and explainability of AI systems to detail data provenance and improve procurement requirements’ (Ada Lovelace Institute, at 10).

  • ‘The speed at which the technology can develop poses a challenge as it is often faster than the development of both regulation and standards. Few mature standards for autonomous systems exist and adoption of emerging standards need to be encouraged through mechanisms such as regulation and procurement, for example by including the requirement to meet certain standards in procurement specification’ (Royal Academy of Engineering, at 8).

Can procurement do it, though?

Implicit in most views about the possibility of using procurement to regulate public sector AI adoption (and to generate broader spillover effects through market-based propagation mechanisms) is an assumption that the public buyer does (or can get to) know and can (fully, or sufficiently) specify the required standards of explainability, transparency, ethical governance, and a myriad other technical requirements (on auditability, documentation, etc) for the use of AI to be in the public interest and fully legally compliant. Or, relatedly, that such standards can (and will) be developed and readily available for the public buyer to effectively refer to and incorporate them into its public contracts.

This is a BIG implicit assumption, at least in relation to non-trivial/open-ended proceduralised requirements and in relation to most of the complex issues raised by (advanced) forms of AI deployment. A sobering and persuasive analysis has shown that, at least for some forms of AI (based on neural networks), ‘it appears unlikely that anyone will be able to develop standards to guide development and testing that give us sufficient confidence in the applications’ respect for health and fundamental rights. We can throw risk management systems, monitoring guidelines, and documentation requirements around all we like, but it will not change that simple fact. It may even risk giving us a false sense of confidence’ [H Pouget, ‘The EU’s AI Act Is Barreling Toward AI Standards That Do Not Exist’ (Lawfare.com, 12 Jan 2023)].

Even for less complex AI deployments, the development of standards will be contested and protracted. This not only creates a transient regulatory gap that forces public buyers to ‘figure it out’ by themselves in the meantime, but can well result in a permanent regulatory gap that leaves procurement as the only safeguard (on paper) in the process of AI adoption in the public sector. If more general and specialised processes of standard setting are unlikely to plug that gap quickly, or ever, how can public buyers be expected to do so?

seriously, can procurement do it?

Further, as I wrote in my own submission to the Parliamentary inquiry, ‘to effectively regulate by contract, it is at least necessary to have (i) clarity on the content of the obligations to be imposed, (ii) effective enforcement mechanisms, and (iii) public sector capacity to establish, monitor, and enforce those obligations. Given that the aim of regulation by contract would be to ensure that the public sector only adopts trustworthy AI solutions and deploys them in a way that promotes the public interest in compliance with existing standards of protection of fundamental and individual rights, exercising the expected gatekeeping role in this context requires a level of legal, ethical, and digital capability well beyond the requirements of earlier instances of regulation by contract to eg enforce labour standards’ (at [4]).

Even if we optimistically ignore the issues above and adopt the presumption that standards will emerge or that the public buyer will be able to (eventually) figure it out (so we park requirement (i) for now), and even if we also assume that the public sector will be able to develop the required level of eg digital capability (so we also park (iii), but see here), this does not overcome other obstacles to leveraging procurement for ‘AI regulation by contract’. In particular, it does not address the issue of whether there can be effective enforcement mechanisms within the contractual relationship resulting from a procurement process to impose compliance with the required standards (of explainability, transparency, ethical use, non-discrimination, etc).

I approach this issue as the challenge of enforcing not entirely measurable contractual obligations (ie obligations to comply with a contractual standard rather than a contractual rule), and the closest parallel that comes to my mind is the issue of enforcing quality requirements in public contracts, especially in the provision of outsourced or contracted-out public services. This is an issue on which there is a rich literature (on ‘regulation by contract’ or ‘government by contract’).

Quality-related enforcement problems stem from the difficulty of using contract law remedies to address quality shortcomings: other than perhaps price reductions or contractual penalties (where those are permissible), such remedies can do little to address the quality issues in themselves. Major quality shortcomings could lead to eg contractual termination, but replacing contractors can be costly and difficult (especially in a technological setting affected by several sources of potential vendor and technology lock-in). Other mechanisms, such as leveraging past performance evaluations to eg bar access to future procurements, can also do too little too late to control quality within a specific contract.

An illuminating analysis of the ‘problem of quality’ concluded that the ‘structural problem here is that reliable assurance of quality in performance depends ultimately not on contract terms but on trust and non-legal relations. Relations of trust and powerful non-legal sanctions depend upon the establishment of long-term … relations … The need for a governance structure and detailed monitoring in order to achieve co-operation and quality seems to lead towards the creation of conflictual relations between government and external contractors’ [see H Collins, Regulating Contracts (OUP 1999) 314-15].

To me, this raises important questions about the extent to which procurement and public contracts more generally can effectively deliver the expected safeguards and operate as an adequate system of ‘AI regulation by contract’. It seems to me that price clawbacks or financial penalties, even debarment decisions, are unlikely to provide an acceptable safety net in some (or most) cases — eg high-risk uses of complex AI. Not least because procurement disputes can take a long time to settle and because the incentives will not always be there to ensure strict enforcement anyway.

More thoughts to come

It seems increasingly clear to me that the expectations around the leveraging of procurement to ‘regulate AI by contract’ need reassessing in view of its likely effectiveness. Such effectiveness is constrained by the rules on the design of tenders for the award of public contracts, by the design of those public contracts themselves, and by the mechanisms to resolve disputes emerging from either tenders or contracts. The effectiveness of this approach is, of course, also constrained by public sector (digital) capability and by the broader difficulties in ascertaining the appropriate approach to (standards-based) AI regulation, which cannot so easily be set aside. I will keep thinking about all this in the process of writing my monograph. If this is of interest, keep an eye on this blog for further thoughts and analysis.

Interoperable Europe Act: Quick Procurement Annotation


In November 2022, the European Commission published its proposal for an ‘Interoperable Europe Act’ to strengthen cross-border interoperability and cooperation in the public sector across the EU (the ‘IEA Proposal’, or ‘IEAP’). The IEA Proposal seeks to revamp and strengthen the current European Interoperability Framework, which has seen very limited uptake since its inception in 2004, as detailed in the Communication ‘Linking public services, supporting public policies and delivering public benefits. Towards an “Interoperable Europe”’ (the ‘IEA Communication’).

The IEA Proposal thus seeks to introduce mandatory obligations and support mechanisms to foster the creation of a network of sovereign and interconnected digital public administrations and to accelerate the digital transformation of Europe's public sector, as an attempt to achieve Europe's 2030 digital targets and support trusted data flows. It also seeks to stimulate public sector innovation and public-private GovTech projects.

The IEA Proposal has a few procurement implications, some more evident than others. In this post, I try to map them, and offer some comments.

Some basics of the IEA Proposal

The IEA Proposal seeks to create a toolkit to promote increasing levels of interoperability in the network and information systems that enable public services to be delivered or managed electronically, with a primary focus on cross-border digital public services (Arts 3-14). The toolkit is complemented by institutional mechanisms for the governance of cross-border interoperability (Arts 15-18), as well as some central planning and monitoring instruments (Arts 19-20).

From a procurement perspective, some elements in the toolkit are particularly relevant, including: (i) an obligation to carry out interoperability assessments; (ii) an obligation to exchange information on ‘interoperability solutions’ and to cooperate with other public sector bodies; (iii) innovation measures with a GovTech focus; and (iv) regulatory sandboxes. Other measures, such as the creation of a portal for the publication of information on ‘interoperability solutions’, the possibility to set up Commission-driven policy implementation projects, provisions on training, or peer review mechanisms, are of lesser direct relevance. The rest of this post focuses on the four elements with a more direct procurement link.

Using procurement to trigger interoperability assessments

Interoperability assessments are one of the main elements in the IEAP toolkit. Recital (8) stresses that

To set up cross-border interoperable public services, it is important to focus on … interoperability … as early as possible in the policymaking process. Therefore, the public organisation that intends to set up a new or to modify an existing network and information system that is likely [to] result in high impacts on the cross-border interoperability, should carry out an interoperability assessment. This assessment is necessary to understand the magnitude of impact of the planned action and to propose measures to reap up the benefits and address potential costs.

Recital (10) then adds that

The outcome of that [interoperability] assessment should be taken into account when determining the appropriate measures that need to be taken in order to set up or modify the network and information system.

The minimum content of the interoperability assessment is prescribed and includes specific analysis of the ‘level of alignment of the network and information systems concerned with the European Interoperability Framework, and with the Interoperable Europe solutions [a new form of recommended interoperability standard]’ (Art 3(4)(b) IEAP). The purpose of the assessment is clearly to promote convergence towards European standards, even if there is no strict obligation to do so. The outcome of the interoperability assessment must be published on the public sector body’s website (Art 3(2) IEAP). Such transparency may support convergence towards European standards.

The IEA Proposal uses the likelihood of a procurement process as one of three triggers for the obligation to carry out an interoperability assessment. Article 3(1)(b) IEA Proposal indeed makes it mandatory to carry out such interoperability assessment ‘where the intended set-up or modification [of an existing network and information system that enables public services to be delivered or managed electronically] will most likely result in procurements for network and information systems used for the provision of cross-border services above the threshold set out in Article 4 of Directive 2014/24/EU’.

This trigger raises the question why the same obligation is not imposed when other EU procurement rules may be applicable — notably Directive 2014/23/EU on concessions, but also Directive 2014/25/EU as the infrastructure for digital public services may not be directly procured by an entity covered by Directive 2014/24/EU — although it is possible to carry out interoperability assessments on a voluntary basis.

Be that as it may, as a first procurement implication, the IEA Proposal would create an add-on regulatory obligation to carry out an interoperability assessment for (likely) procurements covered by Directive 2014/24/EU. It may be worth noting that the obligation to carry out an interoperability assessment is also triggered where ‘the intended set-up or modification affects one or more network and information systems used for the provision of cross-border services across several sectors or administrations’ (Art 3(1)(a) IEAP), so the obligation could not be circumvented in eg cases of public-public cooperation or in-house provision, whether they are considered covered and exempted, or excluded, from Directive 2014/24/EU.

The obligation to carry out the interoperability assessment can have a knock-on effect on the setting of technical specifications for the future procurement, to the extent that it promotes the adoption of Interoperable Europe solutions as standards. In that regard, it is worth noting that the IEA Proposal highlights that ‘Interoperability is a condition for avoiding technological lock-in, enabling technical developments, and fostering innovation’ (rec (22)), and also establishes a clear link between its objectives and the standardisation of technical specifications. In Recital (18), it stresses that

Interoperability is directly connected with, and dependent on the use of open specifications and standards. Therefore, the Union public sector should be allowed to agree on cross-cutting open specifications and other solutions to promote interoperability. The new framework should provide for a clear process on the establishment and promotion of such agreed interoperability solutions in the future. This way, the public sector will have a more coordinated voice to channel public sector needs and public values into broader discussions.

Therefore, a secondary procurement implication is that the IEA Proposal can have implications for the setting of technical specifications, in particular to promote the use of Interoperable Europe solutions. These can propagate beyond cross-border digital public services to the extent that such standardisation can also generate functional and financial advantages in a strictly domestic context. Moreover, as Interoperable Europe solutions are developed, they can simply become de facto industry standards.

obligations to exchange information: need for new or additional clauses in public contracts?

Another of the key goals of the IEA Proposal is to facilitate (cross-border) information exchanges between public administrations on the interoperability solutions they have implemented. Such exchange of information is meant to promote sharing and reusing proven tools as a ‘fast and cost-effective approach to designing digital public services’ (IEA Communication, at 2).

In that regard, Recital (12) of the IEA Proposal programmatically stresses that

Public sector bodies or institutions, bodies or agencies of the Union that search for interoperability solutions should be able to request from other public sector bodies or institutions, bodies or agencies of the Union the software code those organisations use, together with the related documentation. Sharing should become a default among public sector bodies, and institutions, bodies and agencies of the Union while not sharing would need a legal justification. In addition, public sector bodies or institutions, bodies, or agencies of the Union should seek to develop new interoperability solutions or to further develop existing interoperability solutions.

Such a maximalist approach would generalise a practice of ‘EU-wide’ ‘software code’ and technical documentation exchange that would likely raise some eyebrows, especially in relation to proprietary software and in relation to algorithmic source code protection. The IEA Proposal justifies this in Recital (13) on grounds that

When public administrations decide to share their solutions with other public administrations or the public, they are acting in the public interest. This is even more relevant for innovative technologies: for instance, open code makes algorithms transparent and allows for independent audits and reproducible building blocks. The sharing of interoperability solutions among public administration should set the conditions for the achievement of an open ecosystem of digital technologies for the public sector that can produce multiple benefits.

However, the IEA Proposal is much more limited than the recitals would suggest. The information exchange regime created by the IEA Proposal is regulated in Article 4. It needs to be read bearing in mind that Article 2(3) defines an ‘interoperability solution’ as a ‘technical specification, including a standard, or another solution, including conceptual frameworks, guidelines and applications, describing legal, organisational, semantic or technical requirements to be fulfilled by a network and information system in order to enhance cross-border interoperability’.

Depending on its interpretation, this definition can severely limit the scope of the information exchange obligations under the IEA Proposal, in particular due to the (functional) requirement that the covered ‘interoperability solutions’ refer to ‘requirements to be fulfilled by a network and information system in order to enhance cross-border interoperability’ (emphasis added). It should be noted that ‘cross-border interoperability’ is defined as ‘the ability of network and information systems to be used by public sector bodies in different Member States and institutions, bodies, and agencies of the Union in order to interact with each other by sharing data by means of electronic communication’. The IEA Communication and several aspects of the IEA Proposal seem to indicate that the purpose is not to restrict the relevant obligations to cases of existing cross-border interaction, but to facilitate potential cross-border interoperability. In that regard, it seems that it would have been preferable to define the scope of application as concerning information on any ‘solutions’ adopted by a public sector institution, so long as the information request was based on the potential interoperability of such solution with that (to be) adopted by the requesting institution. Nonetheless, it also seems functionally necessary for the information exchange mechanism not to be constrained to interoperability solutions already addressing issues of cross-border interoperability.

According to Article 4(1), ‘A public sector body or an institution, body or agency of the Union shall make available to any other such entity that requests it, interoperability solutions that support the public services that it delivers or manages electronically. The shared content shall include the technical documentation and, where applicable, the documented source code.’

Importantly, though, this obligation is excluded in the crucial case of interoperability solutions ‘for which third parties hold intellectual property rights and do not allow sharing’ (Art 4(1)(b)). It is also excluded regarding interoperability solutions that support processes which fall outside the scope of the public task of the public sector bodies or institutions, bodies, or agencies of the Union concerned (Art 4(1)(a)), and those with restricted access due to the protection of critical infrastructure, defence interests, or public security (Art 4(1)(c)).

So, what is left? Primarily, exchanges based on open source interoperability solutions, or exchanges of proprietary information permitted by the IP holder — eg through a licence that allows for the reuse by other public sector bodies or institutions, bodies or agencies of the Union, or other contractual means. In that regard, the obligation to exchange information is much more limited than it may at first seem and does not create significant new technology governance duties on public buyers—other than the primary duty to disclose which solution is being used and to participate in the exchange of open (or permissioned) information, which can be done through a new portal to avoid multiple bilateral interactions (see Art 4(3) IEAP).

It may however be necessary to develop contractual clauses to clarify whether IP protected interoperability solutions can or cannot be shared (and on which terms), along the lines of some of the obligations regulated in the standard contractual clauses for the procurement of artificial intelligence, currently under development. Such a contractual regime is also necessary in relation to software source code in any case, as a result of the CJEU Judgment in Informatikgesellschaft für Software-Entwicklung, C-796/18, EU:C:2020:395 (the ‘ISE case’, see here for discussion).

‘mandatory’ public-public cooperation

To support the reuse of (exchanged) interoperability solutions, Article 4(2) IEA Proposal includes an interesting provision on cooperation between the requesting (reusing) and the disclosing (sharing) public sector bodies:

To enable the reusing entity to manage the interoperability solution autonomously, the sharing entity shall specify the guarantees that will be provided to the reusing entity in terms of cooperation, support and maintenance. Before adopting the interoperability solution, the reusing entity shall provide to the sharing entity an assessment of the solution covering its ability to manage autonomously the cybersecurity and the evolution of the reused interoperability solution.

The sharing and reusing entities can also ‘conclude an agreement on sharing the costs for future developments of the interoperability solution’ (Art 4(5) IEAP). However, this cooperation obligation is excluded if the ‘sharing’ public sector body has published the interoperability solution in the relevant portal (Art 4(3) IEAP), which seems like a clear incentive to publish open source or broadly licensed interoperability solutions.

It is worth noting that, where arranged, such cooperation agreements (especially if they deal with future development costs) can in themselves constitute a public contract and thus be subject to compliance with Directive 2014/24/EU if the (wide) boundaries of public-public cooperation are exceeded—again, by reference to the ISE case. This seems an unlikely scenario given that the remit of the IEA Proposal is primarily concerned with networks for the cross-border (joint or linked) provision of digital public services, but it cannot be excluded if the broader interpretation of (potential) cross-border interoperability is adopted, especially in the context of reuse of a solution for a purpose (slightly) different than that for which the ‘sharing’ public sector entity implemented it.

Importantly, it is also necessary to consider whether the sharing of non-open access interoperability solutions under a cooperation agreement can have the effect of placing the IP holder in a position of advantage vis-à-vis its competitors, in which case the cooperation agreement would be in breach of Directive 2014/24/EU, once again, by reference to the ISE case. It can well be that this is a further disincentive for the sharing of IP protected interoperability solutions, even if a broad licence for public sector re-use is available.

In general, it seems like most of the mechanisms of the IEA Proposal can only really work in relation to open code and software. This is an important, general point. The IEA Communication stresses that interoperability assets ‘need to be open in order to be readily reusable by public administrations at all levels, that create interoperable systems and services, and by private sector and industry partners working with these administrations … This is why the proposed Interoperable Europe Act provides for access to reusable solutions, including code, where appropriate and possible.’ The main issue is that the IEA Proposal does not contain any explicit requirement for Member States’ public sector bodies to use open source solutions. Therefore, the effectiveness of most of its mechanisms ultimately depends on the level of uptake of open source solutions at national level.

innovation measures with a GovTech focus

Another procurement-relevant aspect of the IEA Proposal is its attempt to foster GovTech, (peculiarly) defined as ‘a technology-based cooperation between public and private sector actors supporting public sector digital transformation’ (Art 2(7) IEAP). The IEA Communication stresses that

Public-private ‘GovTech’ or ‘CivicTech’ cooperation stimulates public sector innovation, supports Europe’s technological sovereignty and opens pathways to public procurement. Gaining access to public procurement is a core concern for smaller companies, to be able to scale up and gain recognition and stable operating income (at 8).

Along the same lines, Recitals (24) and (25) of the IEA Proposal stress that

All levels of government should cooperate with innovative organisations, be it companies or non-profit entities, in design, development and operation of public services. Supporting GovTech cooperation between public sector bodies and start-ups and innovative SMEs, or cooperation mainly involving civil society organisations (‘CivicTech’), is an effective means of supporting public sector innovation and promoting use of interoperability tools across private and public sector partners. Supporting an open GovTech ecosystem in the Union that brings together public and private actors across borders and involves different levels of government should allow to develop innovative initiatives aimed at the design and deployment of GovTech interoperability solutions.

Identifying shared innovation needs and priorities and focusing common GovTech and experimentation efforts across borders would help Union public sector bodies to share risks, lessons learnt, and results of innovation support projects. Those activities will tap in particular into the Union’s rich reservoir of technology start-ups and SMEs. Successful GovTech projects and innovation measures piloted by Interoperable Europe innovation measures should help scale up GovTech tools and interoperability solutions for reuse.

However, there is little detail in the IEA Proposal on how GovTech uptake should be promoted. Article 10 indicates that the Interoperable Europe Board may propose that the Commission set up innovation measures to support the development and uptake of innovative interoperability solutions in the EU, and that such measures ‘shall involve GovTech actors’. Such measures can be regulatory sandboxes (below). The Commission is also tasked with monitoring ‘the cooperation with GovTech actors in the field of cross-border interoperable public services to be delivered or managed electronically in the Union’ (Art 20(2)(c) IEAP).

None of this is very precise, and the lack of detail on how GovTech uptake should be promoted leaves many questions unanswered. This is particularly problematic because engaging in GovTech clearly requires rather sophisticated and advanced procurement, commercial and digital skills (see eg this report for the European Parliament), even if only to understand the limits to pre-commercial procurement and other procurement-compliant ways to create a ‘route to market’ for GovTech companies.

It is also clear that existing support mechanisms (eg the Commission’s Guidance on Innovation Procurement) are insufficient. It remains to be seen whether the Commission can develop effective innovation measures under the IEA Proposal, whose implementation will likely require overcoming the non-negligible obstacles to cross-border procurement under Directive 2014/24/EU, as the scope of the IEA Proposal is primarily constrained to cross-border digital public services and, more generally, to facilitating interoperability across Member States.

Regulatory sandboxes and procurement?

As mentioned above in relation to GovTech, the IEA Proposal also includes the creation of regulatory sandboxes in its toolkit. Article 11 establishes that ‘Regulatory sandboxes shall provide a controlled environment for the development, testing and validation of innovative interoperability solutions supporting the cross-border interoperability of network and information systems which are used to provide or manage public services to be delivered or managed electronically for a limited period of time before putting them into service’. The aims of the sandboxes are specified, and include facilitating ‘cross-border cooperation between national competent authorities and synergies in public service delivery’; and facilitating ‘the development of an open European GovTech ecosystem, including cooperation with small and medium enterprises and start-ups’ (Art 11(3)(b) and (c) IEAP).

To me, it is unclear whether there will be much uptake of the possibility to participate in a sandbox to develop interoperability solutions for the public sector that are (tendentially, at least) to be open source. The economic incentives are not the same as those for participating in regulatory sandboxes whose sole purpose is to exempt participants from otherwise applicable regulatory obligations during the development of (otherwise) marketable products and services, eg in relation to FinTech services, or the pilot regulatory sandbox on Artificial Intelligence.

It seems to me more likely that the IEA regulatory sandboxes will be used in conjunction with a procurement process or for the implementation of public (services) contracts. In that case, it is unclear how the two mechanisms will interact. The IEA Proposal’s provisions on sandboxes only have detailed rules on data protection compliance, which clearly is a focus of legal risk. However, more could have been said in relation to coordinating the sandbox with the rules on cross-border procurement in Directive 2014/24/EU. Additional guidance seems necessary.

Final thoughts

The IEA Proposal has clear and not so clear interactions with public procurement. Notably, it forms part of a broader soft approach to fostering the procurement of open source digital solutions. As such, its effectiveness will be mostly constrained by the Member States’ willingness to embrace open source by default in their domestic procurement policies, as well as their proactive participation in the publication and cooperation mechanisms included in the IEA Proposal. It will be interesting to see how far such a change in public sector technology governance goes in coming years.

More Nuanced Procurement Transparency to Protect Competition: Has the Court of Justice Hit the Brakes on Open Procurement Data in Antea Polska (C-54/21)?

** This comment was first published as an Op-Ed for EU Law Live on 8 December 2022 (see formatted version). I am reposting it here in case of broader interest. **

In Antea Polska (C-54/21), the Court of Justice provided further clarification of the duties incumbent on contracting authorities to protect the confidentiality of different types of information disclosed by economic operators during tender procedures for the award of public contracts. Managing access to such information is challenging. On the one hand, some of the information will have commercial value and be sensitive from a market competition perspective, or for other reasons. On the other hand, disappointed tenderers can only scrutinise and challenge procurement decisions reliant on that information if they can access it as part of the duty to give reasons incumbent on the contracting authority. There is thus a clash of private interests that the public buyer needs to mediate as the holder of the information.

However, in recent times, procurement transparency has also gained a governance dimension that far exceeds the narrow confines of tender procedures and related disputes. Open contracting approaches have focused on procurement transparency as a public governance tool, emphasising the public interest in the availability of such information. This creates two overlapping tracks for discussions on procurement transparency and its limitations: a track concerning private interests, and a track concerning the public interest. In this Op-Ed, I examine the judgment of the Court of Justice in Antea Polska from both perspectives. I first consider the implications of the judgment for the public interest track, ie the open data context. I then focus on the specifics of the judgment in the private interest track, ie the narrower regulation of access to remedies in procurement. I conclude with some broader reflections on the need to develop the institutional mechanisms and guidance required by the nuanced approach to procurement transparency demanded by the Court of Justice, which is where both tracks converge.

Procurement Transparency and Public Interest

In the aftermath of the covid-19 pandemic, procurement transparency became a mainstream topic. Irregularities and corruption in the extremely urgent direct award of contracts could only be identified where information was made public, sometimes after extensive litigation to force disclosure. And the evidence that slowly emerged was concerning. The improper allocation of public funds through awards not subjected to most (or any) of the usual checks and balances renewed concerns about corruption and maladministration in procurement. This brought the spotlight back on proactive procurement transparency as a governance tool and sparked new interest in open data approaches. These would generate access to (until then) confidential procurement information without the need for an explicit request by the interested party.

A path towards ‘open by default’ procurement data has been plotted in the Open Data Directive, the Data Governance Act, and the new rules on Procurement eForms. Combined, these measures impose minimum open data requirements and allow for further ‘permissioned’ openness, including the granting of access to information subject to the rights of others—eg on grounds of commercial confidentiality, the protection of intellectual property (IP) or personal data (see here for discussion). In line with broader data strategies (notably, the 2020 Data Strategy), EU digital law seems to gear procurement towards encouraging ‘maximum transparency’—which would thus be expected to become the new norm soon (although I have my doubts, see here).

However, such a ‘maximum transparency’ approach does not fit well with the informational economics of procurement. Procurement is at its core an information- or data-intensive exercise, as public buyers use tenders and negotiations to extract private information from willing economic operators in order to identify the contractor that can best satisfy the relevant needs. Subjecting the private information revealed in procurement procedures to maximum (or full) transparency would thus be problematic, as the risk of disclosure could have chilling and anticompetitive effects. This has long been established in principle in EU procurement law, and more generally in freedom of information law, although the limits to (on-demand and proactive) procurement transparency remain disputed and have generated wide variation across EU jurisdictions (for extensive discussion, see the contributions to Halonen, Caranta & Sanchez-Graells, Transparency in EU Procurements (2019)).

The Court’s Take

The Court of Justice’s case law has progressively made a dent in ‘maximum transparency’ approaches to confidential procurement information. Following its earlier Judgment in Klaipėdos regiono atliekų tvarkymo centras (C-927/19), the Court of Justice has now provided additional clarification on the limits to disclosure of information submitted by tenderers in public procurement procedures in its Judgment in Antea Polska. From the open data perspective, the Court’s approach to the protection of public interests in the opacity of confidential information is relevant.

First, the Court of Justice has clearly endorsed limitations to procurement transparency justified by the informational economics of procurement. The Court has been clear that ‘the principal objective of the EU rules on public procurement is to ensure undistorted competition, and that, in order to achieve that objective, it is important that the contracting authorities do not release information relating to public procurement procedures which could be used to distort competition, whether in an ongoing procurement procedure or in subsequent procedures. Since public procurement procedures are founded on a relationship of trust between the contracting authorities and participating economic operators, those operators must be able to communicate any relevant information to the contracting authorities in such a procedure, without fear that the authorities will communicate to third parties items of information whose disclosure could be damaging to those operators’; Antea Polska (C-54/21, para 49). Without perhaps explicitly saying it, the Court has established the protection of competition and the fostering of trust in procurement procedures as elements inherently placed within the broader public interest in the proper functioning of public procurement mechanisms.

Second, the Court has recognised that ‘it is permissible for each Member State to strike a balance between the confidentiality [of procurement information] and the rules of national law pursuing other legitimate interests, including that … of ensuring “access to information”, in order to ensure the greatest possible transparency in public procurement procedures’; Antea Polska (C-54/21, para 57). However, in that regard, the exercise of such discretion cannot impinge on the effectiveness of the EU procurement rules seeking to align practice with the informational economics of procurement (ie to protect competition and the trust required to facilitate the revelation of private information, as above) to the extent that they also protect public interests (or private interests with a clear impact on the broader public interest, as above). Consequently, the Court stressed that ‘[n]ational legislation which requires publicising of any information which has been communicated to the contracting authority by all tenderers, including the successful tenderer, with the sole exception of information covered by the [narrowly defined] concept of trade secrets [in the Trade Secrets Directive], is liable to prevent the contracting authority … from deciding not to disclose certain information pursuant to interests or objectives [such as the protection of competition or commercial interests, but also the preservation of law enforcement procedures or the public interest], where that information does not fall within that concept of a trade secret’; Antea Polska (C-54/21, para 62).

In my view, the Court is clear that a ‘maximum transparency’ approach is not permissible and has stressed the duties incumbent on contracting authorities to protect public and private interests opposed to transparency. This is very much in line with the nuanced approach it has taken in another notable recent Judgment concerning open beneficial ownership data: Luxembourg Business Registers (C‑37/20 and C‑601/20) (see here for discussion). In Antea Polska, the Court has emphasised the need for case-by-case analysis of the competing interests in the confidentiality or disclosure of certain information.

This could have a significant impact on open data initiatives. First, it comes to severely limit ‘open by default’ approaches. Second, if contracting authorities find themselves unable to engage with nuanced analysis of the implications of information disclosure, they may easily ‘clam up’ and perpetuate (or resort back to) generally opaque approaches to procurement disclosure. Developing adequate institutional mechanisms and guidance will thus be paramount (as below).

Procurement Transparency and Private Interest

In its more detailed analysis of the specific information that contracting authorities need to preserve in order to align their practice with the informational economics of procurement (ie to promote trust and to protect market competition), the Court’s views in Antea Polska are also interesting but more problematic. The starting point is that the contracting authority cannot simply take at face value an economic operator’s claim that a specific piece of information has commercial value or is protected by IP rights and must thus be kept confidential (Antea Polska, C-54/21, para 65), as that could generate excessive opacity and impinge on the procedural rights of competing tenderers. Moving beyond this blanket approach requires case-by-case analysis.

Concerning information over which confidentiality is claimed on the basis of its commercial value, the Court has stressed that ‘[t]he disclosure of information sent to the contracting authority in the context of a public procurement procedure cannot be refused if that information, although relevant to the procurement procedure in question, has no commercial value in the wider context of the activities of those economic operators’; Antea Polska (C-54/21, para 78). This requires the contracting authority to be able to assess the commercial value of the information. In the case, the dispute concerned whether the names of employees and subcontractors of the winning tenderer should be disclosed or not. The Court found that ‘in so far as it is plausible that the tenderer and the experts or subcontractors proposed by it have created a synergy with commercial value, it cannot be ruled out that access to the name-specific data relating to those commitments must be refused on the basis of the prohibition on disclosure’; Antea Polska (C-54/21, para 79). This points to the emergence of a sort of rebuttable presumption of commercial value that will be in practice very difficult to overcome by a contracting authority seeking to disclose information—either motu proprio, or on the request of a disappointed tenderer.

Concerning information over which confidentiality is claimed on the basis that it is protected by an IP right, in particular by copyright, the Court stressed that it is unlikely that copyright protection will apply to ‘technical or methodological solutions’ of procurement relevance (Antea Polska, C-54/21, para 82). Furthermore, ‘irrespective of whether they constitute or contain elements protected by an intellectual property right, the design of the projects planned to be carried out under the public contract and the description of the manner of performance of the relevant works or services may … have a commercial value which would be unduly undermined if that design and that description were disclosed as they stand. Their publication may, in such a case, be liable to distort competition, in particular by reducing the ability of the economic operator concerned to distinguish itself using the same design and description in future public procurement procedures’; Antea Polska (C-54/21, para 83). Again, this points to the emergence of a rebuttable presumption of commercial value and anticompetitive potential that will also be very difficult to rebut in practice.

The Court has also stressed that keeping this type of information confidential does not entirely bar disclosure. To discharge their duty to give reasons and facilitate access to remedies by disappointed tenderers, contracting authorities are under an obligation to disclose, to the extent possible, the ‘essential content’ of the protected information; Antea Polska (C-54/21, paras 80 and 84). Determining such essential content and ensuring that the relevant underlying (competing) rights are adequately protected will also pose a challenge to contracting authorities.

In sum, the Court has stressed that preserving competing interests related to the disclosure of confidential information in procurement requires the contracting authority to ‘assess whether that information has a commercial value outside the scope of the public contract in question, where its disclosure might undermine legitimate commercial concerns or fair competition. The contracting authority may, moreover, refuse to grant access to that information where, even though it does not have such commercial value, its disclosure would impede law enforcement or would be contrary to the public interest. A contracting authority must, where full access to information is refused, grant that tenderer access to the essential content of that information, so that observance of the right to an effective remedy is ensured’; Antea Polska (C-54/21, para 85). Once again, developing adequate institutional mechanisms and guidance will thus be paramount (as below).

Investing in the Way Forward

As I have argued elsewhere, and the Antea Polska Judgment has made abundantly clear, under EU procurement (and digital) law, it is simply not possible to create a system that makes all procurement data open. Conversely, the Judgment also makes clear that it is not possible to operate a system that keeps all procurement data confidential (Antea Polska, C-54/21, para 68).

Procurement data governance therefore requires the careful management of a system of multi-tiered access to different types of information at different times, by different stakeholders and under different conditions. This will require investing in data and analysis capabilities by public buyers, which can no longer treat the regulation of confidentiality in procurement as an afterthought or secondary consideration. In the data economy, public buyers need to create the required institutional mechanisms to discharge their growing data governance obligations.

Moreover, and crucially, creating adequate data governance approaches requires the development of useful guidance by the European Commission and national competition authorities, as well as procurement oversight bodies. The Court of Justice’s growing case law points to the potential emergence of (difficult to challenge) rebuttable presumptions of justified confidentiality that could easily result in high levels of procurement opacity. To promote a better balance of the competing public and private interests, a more nuanced approach needs to be supported by actionable guidance. This will be very important across all EU jurisdictions, as it is not only jurisdictions that had embraced ‘maximum transparency’ that now need to correct course—but also those that continue to lag in the disclosure of procurement information. Ensuring a level playing field in procurement data governance depends on the harmonisation of currently widely diverging practices. Procurement digitalisation thus offers an opportunity that needs to be pursued.

Happy holidays and all the best for 2023

Dear HTCaN friends,

The last few months have required intense work to make progress on the digital technologies and procurement governance research project. And more remains to be done before the final deadline in July 2023.

Knowing that you are there and that the draft chapters and posts are being read is a source of constant motivation. Receiving some useful feedback is always a gift. Thank you for your continued support and engagement with my scholarship during 2022.

I will take a break now, and I hope you will all also be able to disconnect, recharge and enjoy yourselves over the coming weeks. See you in the new year.

Season’s greetings and all best wishes,
Albert

"Tech fixes for procurement problems?" [Recording]

The recording and slides for yesterday’s webinar on ‘Tech fixes for procurement problems?’ co-hosted by the University of Bristol Law School and the GW Law Government Procurement Programme are now available for catch-up if you missed it.

I would like to thank once again Dean Jessica Tillipman (GW Law), Professor Sope Williams (Stellenbosch), and Eliza Niewiadomska (EBRD) for a really interesting discussion, and all participants for their questions. Comments most welcome, as always.