AI regulation by contract: submission to UK Parliament

In October 2022, the Science and Technology Committee of the House of Commons of the UK Parliament (STC Committee) launched an inquiry on the ‘Governance of Artificial Intelligence’. This inquiry follows the publication in July 2022 of the policy paper ‘Establishing a pro-innovation approach to regulating AI’, which outlined the UK Government’s plans for light-touch AI regulation. The inquiry seeks to examine the effectiveness of current AI governance in the UK, and the Government’s proposals that are expected to follow the policy paper and provide more detail. The STC Committee has published 98 pieces of written evidence, including submissions from UK regulators and academics that will make for interesting reading. Below is my submission, focusing on the UK’s approach to ‘AI regulation by contract’.

A. Introduction

01. This submission addresses two of the questions formulated by the House of Commons Science and Technology Committee in its inquiry on the ‘Governance of artificial intelligence (AI)’. In particular:

  • How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

  • To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?

    • Is more legislation or better guidance required?

02. This submission focuses on the process of AI adoption in the public sector and, particularly, on the acquisition of AI solutions. It evidences how the UK is consolidating an inadequate approach to ‘AI regulation by contract’ through public procurement. Given the level of abstraction and generality of the current guidelines for AI procurement, major gaps in public sector digital capabilities, and potential structural conflicts of interest, procurement is currently an inadequate tool to govern the process of AI adoption in the public sector. Flanking initiatives, such as the pilot algorithmic transparency standard, are unable to address and mitigate governance risks. Contrary to the approach in the AI Regulation Policy Paper,[1] plugging the regulatory gap will require (i) new legislation supported by a new mechanism of external oversight and enforcement (an ‘AI in the Public Sector Authority’ (AIPSA)); (ii) a well-funded strategy to boost in-house public sector digital capabilities; and (iii) the introduction of a (temporary) mechanism of authorisation of AI deployment in the public sector. The Procurement Bill would not suffice to address the governance shortcomings identified in this submission.

B. ‘AI Regulation by Contract’ through Procurement

03. Unless the public sector develops AI solutions in-house, which is extremely rare, the adoption of AI technologies in the public sector requires a procurement procedure leading to their acquisition. This places procurement at the frontline of AI governance because the ‘rules governing the acquisition of algorithmic systems by governments and public agencies are an important point of intervention in ensuring their accountable use’.[2] In that vein, the Committee on Standards in Public Life stressed that the ‘Government should use its purchasing power in the market to set procurement requirements that ensure that private companies developing AI solutions for the public sector appropriately address public standards. This should be achieved by ensuring provisions for ethical standards are considered early in the procurement process and explicitly written into tenders and contractual arrangements’.[3] Procurement is thus erected as a public interest gatekeeper in the process of adoption of AI by the public sector.

04. However, to effectively regulate by contract, it is at least necessary to have (i) clarity on the content of the obligations to be imposed, (ii) effective enforcement mechanisms, and (iii) public sector capacity to establish, monitor, and enforce those obligations. Given that the aim of regulation by contract would be to ensure that the public sector only adopts trustworthy AI solutions and deploys them in a way that promotes the public interest in compliance with existing standards of protection of fundamental and individual rights, exercising the expected gatekeeping role in this context requires a level of legal, ethical, and digital capability well beyond the requirements of earlier instances of regulation by contract to eg enforce labour standards.

05. On a superficial reading, it could seem that the National AI Strategy tackled this by highlighting the importance of the public sector’s role as a buyer and stressing that the Government had already taken steps ‘to inform and empower buyers in the public sector, helping them to evaluate suppliers, then confidently and responsibly procure AI technologies for the benefit of citizens’.[4] The National AI Strategy referred, in particular, to the setting up of the Crown Commercial Service’s AI procurement framework (the ‘CCS AI Framework’),[5] and the adoption of the Guidelines for AI procurement (the ‘Guidelines’)[6] as enabling tools. However, a closer look at these instruments shows that they are inadequate to provide clarity on the content of procedural and contractual obligations aimed at ensuring the goals stated above (para 03), and that they may widen the existing public sector digital capability gap. Ultimately, they do not enable procurement to carry out the expected gatekeeping role.

C. Guidelines and Framework for AI procurement

06. Despite setting out to ‘provide a set of guiding principles on how to buy AI technology, as well as insights on tackling challenges that may arise during procurement’, the Guidelines provide high-level recommendations that cannot be directly operationalised by inexperienced public buyers and/or those with limited digital capabilities. For example, the recommendation to ‘Try to address flaws and potential bias within your data before you go to market and/or have a plan for dealing with data issues if you cannot rectify them yourself’ (guideline 3) not only requires a thorough understanding of eg the Data Ethics Framework[7] and the Guide to using Artificial Intelligence in the public sector,[8] but also detailed insights on data hazards.[9] This leads the Guidelines to stress that it may be necessary ‘to seek out specific expertise to support this; data architects and data scientists should lead this process … to understand the complexities, completeness and limitations of the data … available’.

07. Relatedly, some of the recommendations are very open ended in areas without clear standards. For example, the effectiveness of the recommendation to ‘Conduct initial AI impact assessments at the start of the procurement process, and ensure that your interim findings inform the procurement. Be sure to revisit the assessments at key decision points’ (guideline 4) is dependent on the robustness of such impact assessments. However, the Guidelines provide no further detail on how to carry out such assessments, other than a list of some generic areas for consideration (eg ‘potential unintended consequences’) and a passing reference to emerging guidelines in other jurisdictions. This is problematic, as the development of algorithmic impact assessments is still at an experimental stage,[10] and emerging evidence shows vastly diverging approaches, eg to risk identification.[11] In the absence of clear standards, algorithmic impact assessments will lead to inconsistent approaches and varying levels of robustness. The absence of standards will also require access to specialist expertise to design and carry out the assessments.

08. Ultimately, understanding and operationalising the Guidelines requires advanced digital competency, including in areas where best practices and industry standards are still developing.[12] However, most procurement organisations lack such expertise, reflecting broader digital skills shortages across the public sector,[13] with recent reports placing vacancies for data and tech roles across the civil service alone at close to 4,000.[14] This not only reduces the practical value of the Guidelines in facilitating responsible AI procurement by inexperienced buyers with limited capabilities, but also highlights the role of the CCS AI Framework in AI adoption in the public sector.

09. The CCS AI Framework creates a procurement vehicle[15] to facilitate public buyers’ access to digital capabilities. CCS’ description for public buyers stresses that ‘If you are new to AI you will be able to procure services through a discovery phase, to get an understanding of AI and how it can benefit your organisation.’[16] The Framework thus seeks to enable contracting authorities, especially those lacking in-house expertise, to carry out AI procurement with the support of external providers. While this can foster the uptake of AI in the public sector in the short term, it is highly unlikely to result in adequate governance of AI procurement, as this approach focuses at most on the initial stages of AI adoption but can hardly be sustainable throughout the lifecycle of AI use in the public sector—and, crucially, would leave the enforcement of contractualised AI governance obligations in a particularly weak position (thus failing to meet the enforcement requirement at para 04). Moreover, it would generate a series of governance shortcomings, the avoidance of which requires an alternative approach.

D. Governance Shortcomings

10. Despite claims to the contrary in the National AI Strategy (above para 05), the approach currently followed by the Government does not empower public buyers to responsibly procure AI. The Guidelines cannot be operationalised by inexperienced public buyers with limited digital capabilities (above paras 06-08). At the same time, the Guidelines are too generic to support sophisticated approaches by more advanced digital buyers. The Guidelines do not reduce the uncertainty and complexity of procuring AI and do not include any guidance on eg how to design public contracts to perform the regulatory functions expected under the ‘AI regulation by contract’ approach.[17] This is despite existing recommendations on eg the development of ‘model contracts and framework agreements for public sector procurement to incorporate a set of minimum standards around ethical use of AI, with particular focus on expected levels of transparency and explainability, and ongoing testing for fairness’.[18] The Guidelines thus fail to address the first requirement for effective regulation by contract in relation to clarifying the relevant obligations (para 04).

11. The CCS Framework would also fail to ensure the development of public sector capacity to establish, monitor, and enforce AI governance obligations (para 04). Perhaps counterintuitively, the CCS AI Framework can further disempower public buyers seeking to rely on external capabilities to support AI adoption. There is evidence that reliance on outside providers and consultants to cover immediate needs further erodes public sector capability in the long term,[19] as well as creating risks of technical and intellectual debt in the deployment of AI solutions as consultants come and go and there is no capture of institutional knowledge and memory.[20] This can also exacerbate current trends of pilot AI graveyard spirals, where most projects do not reach full deployment, at least in part due to insufficient digital capabilities beyond the (outsourced) pilot phase. This tends to result in self-reinforcing institutional weaknesses that can limit the public sector’s ability to drive digitalisation, not least because technical debt quickly becomes a significant barrier.[21] It also runs counter to best practices towards building public sector digital maturity,[22] and to the growing consensus that public sector digitalisation first and foremost requires a prioritised investment in building up in-house capabilities.[23] On this point, it is important to note the large size of the CCS AI Framework, which was initially pre-advertised with a value of £90 mn,[24] which was then revised to £200 mn over 42 months.[25] Procuring AI consultancy services under the Framework can thus facilitate the funnelling of significant amounts of public funds to the private sector, rather than using those funds to build in-house capabilities. It can also result in multiple public buyers entering into contracts for the same expertise, thus duplicating costs, as well as in a cumulative lack of institutional learning by the public sector because of atomised and uncoordinated contractual relationships.

12. Beyond the issue of institutional dependency on external capabilities, the cumulative effect of the Guidelines and the Framework would be to outsource the role of ‘AI regulation by contract’ to unaccountable private providers, which can then introduce their own biases on the substantive and procedural obligations to be embedded in the relevant contracts—which would ultimately negate the effectiveness of the regulatory approach as a public interest safeguard. The lack of accountability of external providers would result not only from the weakness (or absolute inability) of the public buyer to control their activities and challenge important decisions—eg on data governance, or algorithmic impact assessments, as above (paras 06-07)—but also from the potential absence of effective and timely external checks. Market mechanisms are unlikely to deliver adequate checks, due both to market concentration and to structural conflicts of interest affecting providers that sometimes offer consultancy services and at other times are involved in the development and deployment of AI solutions,[26] as well as to insufficiently effective safeguards against conflicts of interest arising from quickly revolving doors. Equally, broader governance controls are unlikely to be facilitated by flanking initiatives, such as the pilot algorithmic transparency standard.

13. To try to foster accountability in the adoption of AI by the public sector, the UK is currently piloting an algorithmic transparency standard.[27] While the initial six examples of algorithmic disclosures published by the Government provide some details on emerging AI use cases and the data and types of algorithms used by publishing organisations, and while this information could in principle foster accountability, there are two primary shortcomings. First, completing the documentation requires resources and, in some respects, advanced digital capabilities. Organisations participating in the pilot are being supported by the Government, which makes it difficult to assess the extent to which public buyers would generally be able to adequately prepare the documentation on their own. Moreover, the documentation also refers to some underlying requirements, such as algorithmic impact assessments, that are not yet standardised (para 07). In that respect, the pilot standard replicates the same shortcomings discussed above in relation to the Guidelines. Algorithmic disclosure will thus only be done by entities with high capabilities, or it will be outsourced to consultants (thus reducing the scope for the revelation of governance-relevant information).

14. Second, compliance with the standard is not mandatory—at least while the pilot is developed. If compliance with the algorithmic transparency standard remains voluntary, there are clear governance risks. It is easy to see how precisely the most problematic uses may not be the object of adequate disclosures under a voluntary self-reporting mechanism. More generally, even if the standard were made mandatory, it would be necessary to implement an external quality control mechanism to mitigate problems with the quality of self-reported disclosures that are pervasive in other areas of information-based governance.[28] Whether the Central Digital and Data Office (currently in charge of the pilot) would have the capacity (and powers) to do so remains unclear, and it would in any case lack independence.

15. Finally, it should be stressed that the current approach to transparency disclosure following the adoption of AI (ex post) can be problematic where the implementation of the AI is difficult to undo and/or the effects of malicious or risky AI are high-stakes or impossible to reverse. It is also problematic in that the current approach places the burden of scrutiny and accountability outside the public sector, rather than establishing internal, preventative (ex ante) controls on the deployment of AI technologies that could be very harmful to fundamental and individual socio-economic rights—as evidenced by the inclusion of some fields of application of AI in the public sector as ‘high risk’ in the proposed EU AI Act.[29] Given the particular risks that AI deployment in the public sector poses to fundamental and individual rights, the minimalistic and reactive approach outlined in the AI Regulation Policy Paper is inadequate.

E. Conclusion: An Alternative Approach

16. Ensuring that the adoption of AI in the public sector operates in the public interest and for the benefit of all citizens will require new legislation supported by a new mechanism of external oversight and enforcement. New legislation is required to impose specific minimum requirements of eg data governance and algorithmic impact assessment and related transparency across the public sector. Such legislation would then need to be developed in statutory guidance of a much more detailed and actionable nature than the current Guidelines. These developed requirements can then be embedded into public contracts by reference. Without such clarification of the relevant substantive obligations, the approach to ‘AI regulation by contract’ can hardly be effective other than in exceptional cases.

17. Legislation would also be necessary to create an independent authority—eg an ‘AI in the Public Sector Authority’ (AIPSA)—with powers to enforce those minimum requirements across the public sector. AIPSA is necessary, as oversight of the use of AI in the public sector does not currently fall within the scope of any specific sectoral regulator and the general regulators (such as the Information Commissioner’s Office) lack procurement-specific knowledge. Moreover, units within Cabinet Office (such as the Office for AI or the Central Digital and Data Office) lack the required independence.

18. It would also be necessary to develop a clear and sustainably funded strategy to build in-house capability in the public sector, including clear policies on the minimisation of expenditure directed at the engagement of external consultants and the development of guidance on how to ensure the capture and retention of the knowledge developed within outsourced projects (including, but not only, through detailed technical documentation).

19. Until sufficient in-house capability is built to ensure adequate understanding and ability to manage digital procurement governance requirements independently, the current reactive approach should be abandoned, and AIPSA should have to approve all projects to develop, procure and deploy AI in the public sector to ensure that they meet the required legislative safeguards in terms of data governance, impact assessment, etc. This approach could progressively be relaxed through eg block exemption mechanisms, once there is sufficiently detailed understanding and guidance on specific AI use cases and/or in relation to public sector entities that could demonstrate sufficient in-house capability, eg through a mechanism of independent certification.

20. The new legislation and statutory guidance would need to be self-standing, as the Procurement Bill would not provide the required governance improvements. First, the Procurement Bill pays limited to no attention to artificial intelligence and the digitalisation of procurement.[30] An amendment (46) that would have created minimum requirements on automated decision-making and data ethics was not moved at the Lords Committee stage, and it seems unlikely to be taken up again at later stages of the legislative process. Second, even if the Procurement Bill created minimum substantive requirements, it would lack adequate enforcement mechanisms, not least due to the limited powers and lack of independence of the proposed Procurement Review Unit (which would also sit within the Cabinet Office).

_______________________________________
Note: all websites last accessed on 25 October 2022.

[1] Department for Digital, Culture, Media and Sport, Establishing a pro-innovation approach to regulating AI. An overview of the UK’s emerging approach (CP 728, 2022).

[2] Ada Lovelace Institute, AI Now Institute and Open Government Partnership, Algorithmic Accountability for the Public Sector (August 2021) 33.

[3] Committee on Standards in Public Life, Artificial Intelligence and Public Standards (2020) 51.

[4] Department for Digital, Culture, Media and Sport, National AI Strategy (CP 525, 2021) 47.

[5] AI Dynamic Purchasing System < https://www.crowncommercial.gov.uk/agreements/RM6200 >.

[6] Office for Artificial Intelligence, Guidelines for AI Procurement (2020) < https://www.gov.uk/government/publications/guidelines-for-ai-procurement/guidelines-for-ai-procurement >.

[7] Central Digital and Data Office, Data Ethics Framework (Guidance) (2020) < https://www.gov.uk/government/publications/data-ethics-framework >.

[8] Central Digital and Data Office, A guide to using artificial intelligence in the public sector (2019) < https://www.gov.uk/government/collections/a-guide-to-using-artificial-intelligence-in-the-public-sector >.

[9] See eg < https://datahazards.com/index.html >.

[10] Ada Lovelace Institute, Algorithmic impact assessment: a case study in healthcare (2022) < https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ >.

[11] A Sanchez-Graells, ‘Algorithmic Transparency: Some Thoughts On UK's First Four Published Disclosures and the Standards’ Usability’ (2022) < https://www.howtocrackanut.com/blog/2022/7/11/algorithmic-transparency-some-thoughts-on-uk-first-disclosures-and-usability >.

[12] A Sanchez-Graells, ‘“Experimental” WEF/UK Guidelines for AI Procurement: Some Comments’ (2019) < https://www.howtocrackanut.com/blog/2019/9/25/wef-guidelines-for-ai-procurement-and-uk-pilot-some-comments >.

[13] See eg Public Accounts Committee, Challenges in implementing digital change (HC 2021-22, 637).

[14] S Klovig Skelton, ‘Public sector aims to close digital skills gap with private sector’ (Computer Weekly, 4 Oct 2022) < https://www.computerweekly.com/news/252525692/Public-sector-aims-to-close-digital-skills-gap-with-private-sector >.

[15] It is a dynamic purchasing system, or a list of pre-screened potential vendors public buyers can use to carry out their own simplified mini-competitions for the award of AI-related contracts.

[16] Above (n 5).

[17] This contrasts with eg the EU project to develop standard contractual clauses for the procurement of AI by public organisations. See < https://living-in.eu/groups/solutions/ai-procurement >.

[18] Centre for Data Ethics and Innovation, Review into bias in algorithmic decision-making (2020) < https://www.gov.uk/government/publications/cdei-publishes-review-into-bias-in-algorithmic-decision-making/main-report-cdei-review-into-bias-in-algorithmic-decision-making >.

[19] V Weghmann and K Sankey, Hollowed out: The growing impact of consultancies in public administrations (2022) < https://www.epsu.org/sites/default/files/article/files/EPSU%20Report%20Outsourcing%20state_EN.pdf >.

[20] A Sanchez-Graells, ‘Identifying Emerging Risks in Digital Procurement Governance’ in idem, Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming) < https://ssrn.com/abstract=4254931 >.

[21] M E Nielsen and C Østergaard Madsen, ‘Stakeholder influence on technical debt management in the public sector: An embedded case study’ (2022) 39 Government Information Quarterly 101706.

[22] See eg Kevin C Desouza, ‘Artificial Intelligence in the Public Sector: A Maturity Model’ (2021) IBM Centre for the Business of Government < https://www.businessofgovernment.org/report/artificial-intelligence-public-sector-maturity-model >.

[23] A Clarke and S Boots, A Guide to Reforming Information Technology Procurement in the Government of Canada (2022) < https://govcanadacontracts.ca/it-procurement-guide/ >.

[24] < https://ted.europa.eu/udl?uri=TED:NOTICE:600328-2019:HTML:EN:HTML&tabId=1&tabLang=en >.

[25] < https://ted.europa.eu/udl?uri=TED:NOTICE:373610-2020:HTML:EN:HTML&tabId=1&tabLang=en >.

[26] See S Boots, ‘“Charbonneau Loops” and government IT contracting’ (2022) < https://sboots.ca/2022/10/12/charbonneau-loops-and-government-it-contracting/ >.

[27] Central Digital and Data Office, Algorithmic Transparency Standard (2022) < https://www.gov.uk/government/collections/algorithmic-transparency-standard >.

[28] Eg in the context of financial markets, there have been notorious ongoing problems with ensuring adequate quality in corporate and investor disclosures.

[29] < https://artificialintelligenceact.eu/ >.

[30] P Telles, ‘The lack of automation ideas in the UK Gov Green Paper on procurement reform’ (2021) < http://www.telles.eu/blog/2021/1/13/the-lack-of-automation-ideas-in-the-uk-gov-green-paper-on-procurement-reform >.

Wishful legal analysis as a trade strategy? A rebuttal to the Minister for International Trade

In the context of the Parliamentary scrutiny of the procurement chapters of the UK’s Free Trade Agreements with Australia and New Zealand, I submitted several pieces of written evidence, which I then gathered together and reformulated in A Sanchez-Graells, ‘The Growing Thicket of Multi-Layered Procurement Liberalisation between WTO GPA Parties, as Evidenced in Post-Brexit UK’ (2022) 49(3) Legal Issues of Economic Integration 247–268. I was also invited to give oral evidence to the Public Bill Committee for the Trade (Australia and New Zealand) Bill.

In my research, I raised some legal issues concerning the way the UK-AUS and UK-NZ procurement chapters would interact with the World Trade Organization (WTO) Government Procurement Agreement (GPA)—to which the UK, AUS and NZ are all party—and the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP)—to which the UK seeks accession and to which both AUS and NZ are party. I also raised issues with the rules on remedies in particular, both in relation to UK-AUS and the CPTPP.

I have now become aware of a letter from the Minister for International Trade in which the UK Government simply dismisses my legal analysis in an unconvincing manner. In this post, I try to rebut their position—although their lack of arguments makes this rather difficult—and stress some of the misunderstandings that the letter evidences. The letter seems to me to reflect a worrying strategy of ‘wishful legal analysis’ that does not bode well for post-Brexit UK trade realignment.

Interaction between the GPA, FTAs and the CPTPP

In my analysis and submissions, I stressed how deviations in the UK’s FTAs from the substantive obligations set in the GPA generate legal uncertainty and potential problems in ‘dual regulation’ situations, where one of the contracting parties (eg the UK) would find it impossible to comply simultaneously with the obligations owed under the GPA to tenderers from GPA countries and those arising under the FTAs with AUS or NZ towards their tenderers—without either breaching GPA obligations or, more likely, ignoring the deviation in the FTAs to ensure GPA compliance. It would also generate issues where compliance with the more demanding standards in the FTAs would be automatically propagated to the benefit of economic operators from other jurisdictions. I also raised how the deviations can generate legal uncertainty and make it more difficult for UK tenderers to ascertain their legal position in AUS and NZ. And I also raised how this situation could be further complicated if the UK accedes to the CPTPP.

My concerns were discussed in Committee and the Minister had the following to say:

The [GPA] and the [CPTPP] are plurilateral agreements between twenty-one and eleven parties respectively, including in each case Australia and New Zealand. As recognised in Committee, the [GPA] in particular establishes a global baseline for international procurement. Nonetheless, neither prevents its members from entering into bilateral free trade agreements to sit alongside the [GPA] and [CPTPP] while at the same time going further in terms of the procurement commitments between members.

These Agreements with Australia and New Zealand do just that, going beyond both the [GPA] and the [CPTPP] baselines. … Although the texts of the Agreements with Australia and New Zealand are sometimes laid out differently to the way they are in the Agreement on Government Procurement, they in no way dilute or reduce the global baseline established by the [GPA]. (emphases added).

There are two points to note here. The first is that the fact that the GPA and the CPTPP allow for bilateral agreements between their parties does not clarify how the overlapping treaties would operate, which is exactly what I analysed. Of note, under the 1969 Vienna Convention on the Law of Treaties (Art 30), when States conclude successive treaties relating to the same subject matter, the most recent treaty prevails, and the provisions of the earlier treaty/ies apply only to the extent that they are not incompatible with those of the later treaty.

This is crucial here, especially as the Minister indicates that the UK-AUS and UK-NZ FTAs go beyond not only the GPA, but also the CPTPP. This would mean that entering into the CPTPP after UK-AUS and UK-NZ—as the UK is currently in the process of doing—could negate some of the aspects of both those FTAs that go beyond the CPTPP. Moreover, the simple assertion that the FTAs do not dilute the GPA baseline is unconvincing, as detailed analysis shows that there are significant problems with eg the interpretation of national treatment under the different treaties.

Secondly, the explanation provided does not resolve the practical problems arising from ‘dual regulation’ that I have identified, and it leaves open the question of how the obligations under the FTAs will be interpreted and complied with in triangular situations involving tenderers not from AUS or NZ. Either the UK will apply the more demanding obligations—which will then benefit all GPA parties, not only AUS and NZ—or it will stick to the GPA baseline in breach of the FTAs. There is no recognition of this issue in the letter.

The Minister also indicated that:

There was also suggestion in Committee that it would be difficult for suppliers in the United Kingdom to navigate the Agreements with Australia and New Zealand, as well as the [CPTPP] in the future. I would like to reassure the Committee that when bidding for United Kingdom procurements, the only system that British suppliers need to concern themselves with is United Kingdom’s procurement regulations. (emphasis added).

The Minister has either not understood the situation, or is seeking to obscure the analysis here. The concerns about legal uncertainty do not relate to UK businesses tendering for contracts in the UK, but to UK businesses tendering for contracts in AUS or NZ—which are the ones that would be seeking to benefit from the trade liberalisation pursued by those FTAs. Nothing in the Minister’s letter addresses this issue.

Domestic review rights under the Australian procurement chapter

One of the specific deviations from the GPA baseline that I identified in my research concerns the exclusion of access to remedies on grounds of public interest. While the GPA only allows interim measures to be excluded on such grounds, the UK-AUS FTA and the CPTPP seem to allow public interest also to bar access to remedies such as compensation—and, even if this does not limit access to remedies as I submit it does, it at least causes legal uncertainty in that respect.

My submission is met with the following response by the Minister [the mentioned annex is reproduced at the end of this post]:

The Committee also considered the evidence raised by Professor Sánchez-Graells regarding domestic review procedures … The Government respectfully disagrees with the analysis presented at that session that a provision in the government procurement chapter of the [UK-AUS FTA] ‘allows for the exclusion of legal remedies completely on the basis of public interest’.

The public interest exclusion only applies to temporary measures put in place to ensure aggrieved suppliers may continue to participate in a procurement.

The Government also respectfully disagrees with the suggestion in the witness evidence that this public interest exclusion is not similarly reflected in the [GPA] or the [UK-NZ FTA]. The Government acknowledges that the specific position of the exclusion differs between these agreements and is closer to the approach adopted in the [CPTPP]. Nonetheless, the Government do not consider this alters the legal effect or gives rise to legal uncertainty. For the benefit of the Committee, the relevant provisions from each of the [FTAs], the [GPA] and the [CPTPP] are set out in an annex to this letter.

The Minister’s explanations are not supported by any arguments. There is no reasoning to explain why the order of the clauses and subclauses in the relevant provisions does not alter their legal interpretation or effects. There is also no justification whatsoever for the opinion that textual differences do not give rise to legal uncertainty. The Government seems to think that it can simply wish the legal issues away.

The table included in the annex to the letter (below) is revealing of the precise issue that raises legal uncertainty and, potentially, a restriction on access to remedies other than interim measures beyond the GPA (and thus, in breach thereof). Why would treaties that seem to replicate the same rules draft them differently? How can any legal interpreter be of the opinion that the positioning of the exception clause does not have an effect on the interpretation of its scope of application? Is the fact that these agreements post-date the GPA and still deviate from it not of legal relevance?

Of course, there are arguments that could be made to counter my analysis. They could eg focus on the use of different (undefined) terms in different sub-clauses (such as ‘measures’ and ‘corrective action’). They could also focus on any preparatory works to the treaties (especially the CPTPP and UK-AUS FTA, which I have not yet been able to locate). They could even be more creative and attempt functional or customary interpretation arguments. But the letter contains no arguments at all.

Conclusion

It is a sad state of affairs where detailed legal analysis—whether correct or not—is dismissed without offering any arguments to the contrary and simply seeking to leverage the ‘authority’ of a Minister or Department. If this is the generalised approach to assessing the legal implications of the trade agreements negotiated (at speed) by the UK post-Brexit, it does not bode well for the legal certainty required to promote international investments and commercial activities.

The reassurances in the letter are devoid of any weight, in my view. I can only hope that the Committee is not persuaded by the empty explanations contained in the letter either.



A tribute fit for a king -- some personal reflections after Steen Treumer's Mindeskrift

On 2 December 2022, the Faculty of Law at the University of Copenhagen hosted the conference ‘Into the Northern Light — In memory of Steen Treumer’ to celebrate his life and academic legacy on what would have been his 57th birthday. The conference was co-organised by Carina Risvig Hamer and Marta Andhov, who put together a tribute fit for a king. It was an exceptional event. Not only for the academic content of the presentations and the further papers in the tribute book (which you can buy here), but also because it provided an opportunity to learn more about Steen and his approach to academia. I have since been mulling over lots of things I heard on the day. This is a rather personal reflection on what knowing more about Steen’s life means for my aspirations as a senior academic (if you are interested, here are some earlier thoughts).

It is easy to idolize the academics who have been influential in your academic path to knowledge. And it is sometimes a bad idea to ‘meet your idols’, for great ideas are not always formulated or held by great people. However, in Steen’s case, it was not only transformative to know him, but also deeply inspirational. What most struck me at the conference is not only that all the stories and anecdotes that were shared rang true with my own experience of collaborating with Steen, but also that there was so much more that was exceptional in the person than in the academic, and that his personality and private life were an extension of his academic persona.

Steen incarnated exceptional virtues as an academic role model. He was extremely clever, dedicated and curious. This led him to pioneer research and produce a wealth of knowledge that was ahead of the curve and that had clear practical relevance and influence. It led him to have high standards and to always seek to engage in detailed discussions of complicated and controversial topics. It was said he was competitive and always keen on winning the argument. However, he was always approachable, accessible, respectful and never punched down. He was compassionate and kind. He was measured and knew how to be forceful without being aggressive. He was patient and listened twice as much as he spoke (for he never forgot that he had two ears and one mouth, as was stressed in the conference). He sought collaborations and nurtured relationships. He always played the long game. He was an enabler of others and took pride in that. He was extremely resilient and down to earth, and could control what others would have experienced as overwhelming emotions without losing hope or letting them derail his projects, even in the face of the greatest adversity. And this is not an exhaustive list of his virtues.

Sitting there, witnessing the love for Steen and the sadness at his unjustifiably early departure, and reflecting on all this, I realized that I am now roughly the same age Steen was when I first met him in 2009. And I hold a roughly comparable academic position. However, I am so far from having developed the skills and the approach he already had back then that I feel rather inadequate in many aspects of my role. I won’t list my shortcomings (too long a laundry list, best dealt with in private), but the one I keep thinking about is my limited humility (or rather, my egotism and pride) and my conflation of forceful or passionate arguing with aggressive attitudes. I am increasingly aware that over the years I have probably offended more than one fellow academic (at conferences, in this blog) and that some of my views could have been presented more kindly without detracting from the academic judgement underlying them. For that, I can only offer an unreserved apology, and commit to trying my best to change my attitude, be more humble and, dare I say, try to be a little more like Steen.

If I have any chance of success, it is because of the role model Steen offered (which aligns with the core values and attitudes of other role models I still benefit from) and the unwavering support I receive from many colleagues, but especially the core of my academic collaborators and friends at the European Procurement Law Group: Roberto Caranta, Kirsi-Maria Halonen, Carina Risvig Hamer, and Pedro Telles. Seeing them again, after 3 or 4 years apart, made me far happier than I could have anticipated. And this reminded me both of the joys of belonging to a community and the duty to foster the right ways of engagement for such a community to thrive. I won’t forget this again.

New CJEU case law against excessive disclosure: quid de open data? (C‑54/21, and joined C‑37/20 and C‑601/20)

In the last few days, the Court of Justice of the European Union (CJEU) has delivered two judgments imposing significant limitations on the systematic, unlimited disclosure of procurement information with commercial value (such as the identity of experts and subcontractors engaged by tenderers for public contracts) and of beneficial ownership information. In imposing a nuanced approach to the disclosure of such information, the CJEU may have torpedoed ‘full transparency’ approaches to procurement and beneficial ownership open data.

Indeed, these are two classes of information at the core of current open data efforts, and they are relevant for (digital) procurement governance—in particular in relation to the prevention of corruption and collusion, the automated screening for which requires establishing relationships and assessing patterns of interaction reliant on such data [for discussion, see A Sanchez-Graells, ‘Procurement Corruption and Artificial Intelligence: Between the Potential of Enabling Data Architectures and the Constraints of Due Process Requirements’ in S Williams & J Tillipman (eds), Routledge Handbook of Public Procurement Corruption (forthcoming)]. The judgments can thus have important implications.

In Antea Polska, the CJEU held that EU procurement rules preclude national legislation mandating that all information sent by tenderers to the contracting authorities be published in its entirety or communicated to the other tenderers, with the sole exception of trade secrets. The CJEU reiterated that the scope of non-disclosable information is much broader and requires a case-by-case analysis by the contracting authority, in particular with a view to avoiding the release of information that could be used to distort competition. Disclosure of information needs to strike an adequate balance between meeting good administration duties to enable the right to the effective review of procurement decisions, on the one hand, and the protection of information with commercial value or with potential competition implications, on the other.

In a related fashion, in Luxembourg Business Registers, the CJEU declared invalid the provision of the Anti-Money Laundering Directive whereby Member States had to ensure that the information on the beneficial ownership of corporate and other legal entities incorporated within their territory was accessible in all cases to any member of the general public—without the need to demonstrate having a legitimate interest in accessing it. The CJEU considered that the disclosure of the information to undefined members of the public created an excessive interference with the fundamental rights to respect for private life and to the protection of personal data.

In this blog post, I analyse these two cases and reflect on their implications for the management of (big) open data for procurement governance purposes, in particular from an anti-corruption perspective and in relation to the EU law data governance obligations incumbent on public buyers.

There is more to procurement confidentiality than trade secrets

In Antea Polska and Others (C-54/21, EU:C:2022:888), among other questions, the CJEU was asked whether Directive 2014/24/EU precludes national legislation on public procurement which required that, with the sole exception of trade secrets, information sent by the tenderers to the contracting authorities be published in its entirety or communicated to the other tenderers, and a practice on the part of contracting authorities whereby requests for confidential treatment in respect of trade secrets were accepted as a matter of course.

I will concentrate on the first part of the question on full transparency solely constrained by trade secrets—and leave the ‘countervailing’ practice aside for now (though it deserves some comment because it creates a requirement for the contracting authority to assess the commercial value of procurement information in the wider context of the activities of the participating economic operators, at paras 69-85). I will also not deal with the discrepancy between the concept of ‘trade secret’ under the Trade Secrets Directive and the concept of ‘confidential information’ in Directive 2014/24 (which the CJEU clarifies, again, at paras 51-55).

The issue of full transparency of procurement information subject only to trade secret protection raises an interesting question because it concerns the compatibility with EU law of a maximalistic approach to procurement transparency that is not peculiar to Poland (where the case originated) but shared by other Member States with a permissive tradition of access to public documents [for in-depth country-specific analyses and comparative considerations, see the contributions to K-M Halonen, R Caranta & A Sanchez-Graells, Transparency in EU Procurements. Disclosure Within Public Procurement and During Contract Execution (Edward Elgar 2019)].

The question concerns the interpretation of multiple provisions of Directive 2014/24/EU and, in particular, Art 21(1) on confidentiality and Arts 50(4) and 55(3) on the withholding of information [see my comments on Arts 21 and 55 in R Caranta & A Sanchez-Graells, European Public Procurement. Commentary on Directive 2014/24/EU (Edward Elgar 2021)]. All of them are of course to be interpreted in line with the general principle of competition in Art 18(1) [see A Sanchez-Graells, Public Procurement and the EU Competition Rules (2nd edn, Hart 2015) 444-445].

In addressing the question, the CJEU built on its recent judgment in Klaipėdos regiono atliekų tvarkymo centras (C‑927/19, EU:C:2021:700), and reiterated its general approach to the protection of confidential information in procurement procedures:

‘… the principal objective of the EU rules on public procurement is to ensure undistorted competition … to achieve that objective, it is important that the contracting authorities do not release information relating to public procurement procedures which could be used to distort competition, whether in an ongoing procurement procedure or in subsequent procedures. Since public procurement procedures are founded on a relationship of trust between the contracting authorities and participating economic operators, those operators must be able to communicate any relevant information to the contracting authorities in such a procedure, without fear that the authorities will communicate to third parties items of information whose disclosure could be damaging to those operators’ (C-54/21, para 49, reference omitted, emphasis added).

The CJEU linked this interpretation to the prohibition for contracting authorities to disclose information forwarded to them by economic operators which they have designated as confidential [Art 21(1) Dir 2014/24] and stressed that this had to be reconciled with the requirements of effective judicial protection and, in particular, the general principle of good administration, from which the obligation to state reasons stems, because ‘in the absence of sufficient information enabling it to ascertain whether the decision of the contracting authority to award the contract is vitiated by errors or unlawfulness, an unsuccessful tenderer will not, in practice, be able to rely on its right … to an effective review’ (C-54/21, para 50).

The Court also stressed that the Directive allows Member States to modulate the scope of the protection of confidential information in accordance with their national legislation, in particular legislation concerning access to information [Art 21(1) Dir 2014/24, C-54/21, para 56]. In that regard, however, the CJEU went on to stress that

‘… if the effectiveness of EU law is not to be undermined, the Member States, when exercising the discretion conferred on them by Article 21(1) of that directive, must refrain from introducing regimes … which undermine the balancing exercise [with the right to an effective review] or which alter the regime relating to the publicising of awarded contracts and the rules relating to information to candidates and tenderers set out in Article 50 and 55 of that directive … any regime relating to confidentiality must, as Article 21(1) of Directive 2014/24 expressly states, be without prejudice to the abovementioned regime and to those rules laid down in Articles 50 and 55 of that directive’ (C-54/21, para 58-59).

Focusing on Art 50(4) and Art 55(3) of Directive 2014/24/EU, the CJEU stressed that these provisions empower contracting authorities to withhold from general publication and from disclosure to other candidates and tenderers ‘certain information, where its release would impede law enforcement, would otherwise be contrary to the public interest or would prejudice the legitimate commercial interests of an economic operator or might prejudice fair competition’ (para 61 and, almost identically, para 60). This led the Court to the conclusion that

‘National legislation which requires publicising of any information which has been communicated to the contracting authority by all tenderers, including the successful tenderer, with the sole exception of information covered by the concept of trade secrets, is liable to prevent the contracting authority, contrary to what Articles 50(4) and 55(3) of Directive 2014/24 permit, from deciding not to disclose certain information pursuant to interests or objectives mentioned in those provisions, where that information does not fall within that concept of a trade secret.

Consequently, Article 21(1) of Directive 2014/24, read in conjunction with Articles 50 and 55 of that directive … precludes such a regime where it does not contain an adequate set of rules allowing contracting authorities, in circumstances where Articles 50 and 55 apply, exceptionally to refuse to disclose information which, while not covered by the concept of trade secrets, must remain inaccessible pursuant to an interest or objective referred to in Articles 50 and 55’ (paras 62-63).

In my view, this is the correct interpretation and an important application of the rules seeking to minimise the risk of distortions of competition due to excessive procurement transparency, on which I have been writing for a long time [see also K-M Halonen, ‘Disclosure rules in EU public procurement: balancing between competition and transparency’ (2016) 16(4) Journal of Public Procurement 528].

The Antea Polska judgment stresses the importance of developing a nuanced approach to the management, restricted disclosure and broader publication of information submitted to the contracting authority in a procurement procedure. Notably, this will create particular complications for the design and rollout of procurement open data, especially in the context of the new eForms (see here, and below).

Transparency for what? Who really cares about beneficial ownership?

In Luxembourg Business Registers (joined cases C‑37/20 and C‑601/20, EU:C:2022:912, FR only—see EN press release on which I rely to avoid extensive own translations from French) the CJEU was asked to rule on the compatibility with the Charter of Fundamental Rights—and in particular Articles 7 (respect for private and family life) and 8 (protection of personal data)—of Article 30(5)(c) of the consolidated version of the Anti-Money Laundering Directive (AML Directive), which required Member States to ensure that information on the beneficial ownership of corporate and other legal entities incorporated within their territory is accessible in all cases to any member of the general public. In particular, members of the general public had to ‘be permitted to access at least the name, the month and year of birth and the country of residence and nationality of the beneficial owner as well as the nature and extent of the beneficial interest held.’

The CJEU has found that the general public’s access to information on beneficial ownership constitutes a serious interference with the fundamental rights to respect for private life and to the protection of personal data, which is exacerbated by the fact that, once those data have been made available to the general public, they can not only be freely consulted, but also retained and disseminated.

While the CJEU recognised that the AML Directive pursues an objective of general interest and that the general public’s access to information on beneficial ownership is appropriate for contributing to the attainment of that objective, the interference with individual fundamental rights is neither limited to what is strictly necessary nor proportionate to the objective pursued.

The Court paid special attention to the fact that the rules requiring unrestricted public access to the information result from a modification of the previous regime in the original AML Directive, which, in addition to access by the competent authorities and certain entities, provided for access by any person or organisation capable of demonstrating a legitimate interest. The Court considered that the suppression of the requirement to demonstrate a legitimate interest in accessing the information did not generate sufficient benefits from the perspective of combating money laundering and terrorist financing to offset the significantly more serious interference with fundamental rights that open publication of the beneficial ownership data entails.

Here, the Court referred to its judgment in Vyriausioji tarnybinės etikos komisija (C‑184/20, EU:C:2022:601), where it carried out a functional comparison of the anti-corruption effects of a permissioned system of institutional access and control of relevant disclosures, versus public access to that information. The Court was clear that

‘… the publication online of the majority of the personal data contained in the declaration of private interests of any head of an establishment receiving public funds … does not meet the requirements of a proper balance. In comparison with an obligation to declare coupled with a check of the declaration’s content by the Chief Ethics Commission … such publication amounts to a considerably more serious interference with the fundamental rights guaranteed in Articles 7 and 8 of the Charter, without that increased interference being capable of being offset by any benefits which might result from publication of all those data for the purpose of preventing conflicts of interest and combating corruption’ (C-184/20, para 112).

In Luxembourg Business Registers, the CJEU also held that the optional provisions in Art 30 AML Directive that allowed Member States to make information on beneficial ownership available on condition of online registration and to provide, in exceptional circumstances, for an exemption from access to that information by the general public, were not, in themselves, capable of demonstrating either a proper balance between competing interests, or the existence of sufficient safeguards.

The implication of the Luxembourg Business Registers judgment is that a different approach to facilitating access to beneficial ownership data is required, and that an element of case-by-case assessment (or at least of an assessment based on categories of organisations and individuals seeking access) will need to be brought back into the system. In other words, permissioned access to beneficial ownership data seems unavoidable.

Implications for open data and data governance

These recent CJEU judgments seem to me to clearly establish the general principle that unlimited transparency does not equate to the public interest, as there is also an interest in preserving the (relative) confidentiality of some information and data, and an adequate, if difficult, balance needs to be struck. The interests competing with transparency can be either individual (fundamental rights, or commercial value) or collective (avoidance of distortions of competition). A detailed and comprehensive assessment on a case-by-case basis is required.

As I advocated long ago, and recently reiterated in relation to the growing set of data governance obligations incumbent on public buyers, under EU law,

‘It is thus simply not possible to create a system that makes all procurement data open. Data governance requires the careful management of a system of multi-tiered access to different types of information at different times, by different stakeholders and under different conditions. While the need to balance procurement transparency and the protection of data subject to the rights of others and competition-sensitive data is not a new governance challenge, the digital management of this information creates heightened risks to the extent that the implementation of data management solutions is tendentially ‘open access’ (and could eg reverse presumptions of confidentiality), as well as in relation to system integrity risks (ie cybersecurity)’ (at 10, references omitted).

The CJEU judgments have (re)confirmed that unlimited ‘open access’ is not a viable strategy under EU law. It is perhaps clearer than ever that the capture, structuring, retention, and disclosure of governance-relevant procurement and related data (eg beneficial ownership) needs to be decoupled from its proactive publication. This requires a reconsideration of the open data model and, in particular, a careful assessment of the implementation of the new eForms that only just entered into force.
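
To make the idea of multi-tiered access concrete, the sketch below models it as a simple rule table. It is only an illustration of the kind of decision logic such a system would encode, not a description of any existing or proposed system; all data categories, requester roles and rules are hypothetical.

```python
# Minimal sketch (illustrative only): a rule table for multi-tiered access to
# procurement-related data. All categories, roles and rules are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    data_category: str         # e.g. 'beneficial_ownership', 'tender_prices'
    requester_role: str        # e.g. 'competent_authority', 'journalist', 'public'
    legitimate_interest: bool  # has the requester demonstrated a legitimate interest?
    contract_awarded: bool     # has the award procedure concluded?

def access_decision(req: AccessRequest) -> str:
    """Return 'grant', 'grant_redacted' or 'deny' under illustrative tiered rules."""
    # Competent authorities see everything, at any time.
    if req.requester_role == 'competent_authority':
        return 'grant'
    # Beneficial ownership data: only on demonstration of a legitimate interest
    # (the position the CJEU points back to in Luxembourg Business Registers).
    if req.data_category == 'beneficial_ownership':
        return 'grant' if req.legitimate_interest else 'deny'
    # Competition-sensitive pricing data: withheld until award, then redacted.
    if req.data_category == 'tender_prices':
        return 'grant_redacted' if req.contract_awarded else 'deny'
    # Default: open publication for non-sensitive categories.
    return 'grant'

print(access_decision(AccessRequest('beneficial_ownership', 'public', False, True)))  # deny
```

Even this toy version shows that the relevant decisions turn on who is asking, for what category of data, and at what point in time, rather than on a binary choice between full publication and full confidentiality.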

Governing the Assessment and Taking of Risks in Digital Procurement Governance

In a previous blog post, I explored the main governance risks and legal obligations arising from the adoption of digital technologies, which revolve around data governance, algorithmic transparency, technological dependency, technical debt, cybersecurity threats, the risks stemming from the long-term erosion of the skills base in the public sector, and difficult trade-offs due to the uncertainty surrounding immature and still changing technologies within an also evolving regulatory framework. To address such risks and ensure compliance with the relevant governance obligations, I stressed the need to embed a comprehensive mechanism of risk assessment in the process of technological adoption.

In a new draft chapter (num 9) for my book project, I analyse how to embed risk assessments in the initial stages of decision-making processes leading to the adoption of digital solutions for procurement governance, and how to ensure that they are iterated throughout the lifecycle of use of digital technologies. To do so, I critically review the model of AI risk regulation that is emerging in the EU and the UK, which is based on self-regulation and self-assessment. I consider its shortcomings and how to strengthen the model, including the possibility of subjecting the process of technological adoption to external checks. The analysis converges with a broader proposal for institutionalised regulatory checks on the adoption of digital technologies by the public sector that I will develop more fully in another part of the book.

This post provides a summary of my main findings, on which I will welcome any comments: a.sanchez-graells@bristol.ac.uk. The full draft chapter is free to download: A Sanchez-Graells, ‘Governing the Assessment and Taking of Risks in Digital Procurement Governance’, to be included in A Sanchez-Graells, Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming). Available at SSRN: https://ssrn.com/abstract=4282882.

AI Risk Regulation

The emerging (global) model of AI regulation is risk-based—as opposed to a strict precautionary approach. This implies an assumption that ‘a technology will be adopted despite its harms’. This primarily means accepting that technological solutions may (or will) generate (some) negative impacts on public and private interests, even if it is not known when or how those harms will arise, or how extensive they will be. AI harms are unique, as they are ‘long-term, low probability, systemic, and high impact’, and ‘AI both poses “aggregate risks” across systems and low probability but “catastrophic risks to society”’ [for discussion, see Margot E Kaminski, ‘Regulating the risks of AI’ (2023) 103 Boston University Law Review, forthcoming].

This should thus trigger careful consideration of the ultimate implications of AI risk regulation, and militates in favour of taking a robust regulatory approach—including to the governance of the risk regulation mechanisms put in place, which may well require external controls, potentially by an independent authority. By contrast, the emerging model of AI risk regulation in the context of procurement digitalisation in the EU and the UK leaves the adoption of digital technologies by public buyers largely unregulated and only subject to voluntary measures, or to open-ended obligations in areas without clear impact assessment standards (which reduces the prospect of effective mandatory enforcement).

Governance of Procurement Digitalisation in the EU

Despite the emergence of a quickly expanding set of EU digital law instruments imposing a patchwork of governance obligations on public buyers, whether or not they adopt digital technologies (see here), the primary decision whether to adopt digital technologies is not subject to any specific constraints, and the substantive obligations that follow from the diverse EU law instruments tend to refer to open-ended standards that require advanced technical capabilities to operationalise them. This would not be altered by the proposed EU AI Act.

Procurement-related AI uses are classified as minimal risk under the EU AI Act, which leaves them subject only to voluntary self-regulation via codes of conduct—yet to be developed. Such codes of conduct should encourage voluntary compliance with the requirements applicable to high-risk AI uses—such as risk management systems, data and data governance requirements, technical documentation, record-keeping, transparency, or accuracy, robustness and cybersecurity requirements—‘on the basis of technical specifications and solutions that are appropriate means of ensuring compliance with such requirements in light of the intended purpose of the systems.’ This seems to introduce a further element of proportionality or ‘adaptability’ requirement that could well water down the requirements applicable to minimal risk AI uses.

Importantly, while it is possible for Member States to draw up such codes of conduct, the EU AI Act would pre-empt Member States from going further and mandating compliance with specific obligations (eg by imposing a blanket extension of the governance requirements designed for high-risk AI uses) across their public administrations. The emergent EU model is thus clearly limited to the development of voluntary codes of conduct and, while their likely content is yet unknown, it seems unlikely that they will impose the same standards applicable to the adoption of high-risk AI uses.

Governance of Procurement Digitalisation in the UK

Despite its deliberately light-touch approach to AI regulation and its active efforts to deviate from the EU, the UK is relatively advanced in the formulation of voluntary standards to govern procurement digitalisation. Indeed, the UK has adopted guidance for the use of AI in the public sector, and for AI procurement, and is currently piloting an algorithmic transparency standard (see here). The UK has also adopted additional guidance in the Digital, Data and Technology Playbook and the Technology Code of Practice. Remarkably, despite acknowledging the need for risk assessments—and even linking their conduct to spend approvals required for the acquisition of digital technologies by central government organisations—none of these instruments provides clear standards on how to assess (and mitigate) risks related to the adoption of digital technologies.

Thus, despite the proliferation of guidance documents, the substantive assessment of governance risks in digital procurement remains insufficiently addressed and left to undefined risk assessment standards and practices. The only exception concerns cyber security assessments, given the consolidated approach and guidance of the National Cyber Security Centre. This lack of precision in the substantive requirements applicable to data and algorithmic impact assessments clearly constrains the likely effectiveness of the UK’s approach to embedding technology-related impact assessments in the process of adoption of digital technologies for procurement governance (and, more generally, for public governance). In the absence of clear standards, data and algorithmic impact assessments will lead to inconsistent approaches and varying levels of robustness. The absence of standards will also increase the need to access specialist expertise to design and carry out the assessments. Developing such standards and creating an effective institutional mechanism to ensure compliance therewith thus remain a challenge.

The Need for Strengthened Digital Procurement Governance

Both in the EU and the UK, the emerging model of AI risk regulation subjects digital procurement governance only to voluntary measures, such as (future) codes of conduct or transparency standards, or to open-ended obligations in areas without clear standards (which reduces the prospect of effective mandatory enforcement). This follows general trends of AI risk regulation and evidences the emergence of a (sub)model highly dependent on self-regulation and self-assessment. This approach is rather problematic.

Self-Regulation: Outsourcing Impact Assessment Regulation to the Private Sector

The absence of mandatory standards for data and algorithmic impact assessments, as well as the embedded flexibility in the standards for cyber security, are bound to outsource the setting of the substantive requirements for those impact assessments to private vendors offering solutions for digital procurement governance. With limited public sector digital capability preventing a detailed specification of the applicable requirements, it is likely that these will be limited to a general obligation for tenderers to provide an impact assessment plan, perhaps by reference to emerging (international private) standards. This would imply the outsourcing of standard setting for risk assessments to private standard-setting organisations and, in the absence of those standards, to the tenderers themselves. This generates a clear and problematic risk of regulatory capture. Moreover, this process of outsourcing, or excessive reliance on private agents to commercially determine impact assessment requirements, is not sufficiently exposed to scrutiny and contestation.

Self-Assessment: Inadequacy of Mechanisms for Contestability and Accountability

Public buyers will rarely develop the relevant technological solutions but rather acquire them from technological providers. In that case, the duty to carry out the self-assessment will (or should) be cascaded down to the technology provider through contractual obligations. This would place the technology provider as ‘first party’ and the public buyer as ‘second party’ in relation to assuring compliance with the applicable obligations. In a setting of limited public sector digital capability, and in part as a result of a lack of clear standards providing an applicable benchmark (as above), the self-assessment of compliance with risk management requirements will either be de facto outsourced to private vendors (through a lack of challenge of their practices), or carried out by public buyers with limited capabilities (eg during the oversight of contract implementation). Even where public buyers have the required digital capabilities to carry out a more thorough analysis, they lack independence. ‘Second party’ assurance models unavoidably raise questions about their integrity due to the conflicting interests of the assurance provider who wants to use the system (ie the public buyer).

This ‘second party’ assurance model does not include adequate challenge mechanisms despite efforts to disclose (parts of) the relevant self-assessments. Such disclosures are constrained by general problems with ‘comply or explain’ information-based governance mechanisms, with the emerging model showing design features that have proven problematic in other contexts (such as corporate governance and financial market regulation). Moreover, there is no clear mechanism to contest the decisions to adopt digital technologies revealed by the algorithmic disclosures. In many cases, shortcomings in the risk assessments and the related minimisation and mitigation measures will only become observable after the materialisation of the underlying harms. For example, the effects of the adoption of a defective digital solution for decision-making support (eg a recommender system) will only emerge in relation to challengeable decisions in subsequent procurement procedures that rely on such a solution. At that point, undoing the effects of the use of the tool may be impossible or excessively costly. In this context, challenges based on procedure-specific harms, such as the possibility to challenge discrete procurement decisions under the general rules on procurement remedies, are inadequate. Not least because there can be negative systemic harms that are very hard to capture in the challenge to discrete decisions, or for which no agent with active standing has adequate incentives. To avoid potential harms more effectively, ex ante external controls are needed instead.

Creating External Checks on Procurement Digitalisation

It is thus necessary to consider the creation of external ex ante controls applicable to these decisions, to ensure an adequate embedding of effective risk assessments to inform (and constrain) them. Two models are worth considering: certification schemes and independent oversight.

Certification or Conformity Assessments

While not applicable to procurement uses, the model of conformity assessment in the proposed EU AI Act offers a useful blueprint. The main potential shortcoming of conformity assessment systems is that they largely rely on self-assessments by the technology vendors, and thus on first party assurance. Third-party certification (or algorithmic audits) is possible, but voluntary. Whether there would be sufficient (market) incentives to generate a broad (voluntary) use of third-party conformity assessments remains to be seen. While it could be hoped that public buyers could impose the use of certification mechanisms as a condition for participation in tender procedures, this is a less than guaranteed governance strategy given the EU procurement rules’ functional approach to the use of labels and certificates—which systematically require public buyers to accept alternative means of proof of compliance. This thus seems to offer limited potential for (voluntary) certification schemes in this specific context.

Relatedly, the conformity assessment system foreseen in the EU AI Act is also weakened by its reliance on vague concepts with non-obvious translation into verifiable criteria in the context of a third-party assurance audit. This can generate significant limitations in the conformity assessment process. This difficulty is intended to be resolved through the development of harmonised standards by European standardisation organisations and, where those do not exist, through the approval by the European Commission of common specifications. However, such harmonised standards will largely create the same risks of commercial regulatory capture mentioned above.

Overall, the possibility of relying on ‘third-party’ certification schemes offers limited advantages over the self-regulatory approach.

Independent External Oversight

Moving beyond the governance limitations of voluntary third-party certification mechanisms and creating effective external checks on the adoption of digital technologies for procurement governance would require external oversight. An option would be to make the envisaged third-party conformity assessments mandatory, but that would perpetuate the risks of regulatory capture and the outsourcing of the assurance system to private parties. A different, preferable option would be to assign the approval of the decisions to adopt digital technologies and the verification of the relevant risk assessments to a centralised authority also tasked with setting the applicable requirements therefor. The regulator would thus be placed as gatekeeper of the process of transition to digital procurement governance, instead of the atomised imposition of this role on public buyers. This would be reflective of the general features of the system of external controls proposed in the US State of Washington’s Bill SB 5116 (for discussion, see here).

The main goal would be to introduce an element of external verification of the assessment of potential AI harms and the related taking of risks in the adoption of digital technologies. It is submitted that there is a need for the regulator to be independent, so that the system fully encapsulates the advantages of third-party assurance mechanisms. It is also submitted that the data protection regulator may not be best placed to take on the role, as its expertise—even if advanced in some aspects of data-intensive digital technologies—primarily relates to issues concerning individual rights and their enforcement. The more diffuse collective interests at stake in the process of transition to a new model of public digital governance (not only in procurement) would require a different set of analyses. While reforming data protection regulators to become AI mega-regulators could be an option, that is not necessarily desirable, and an easier-to-implement, incremental approach would involve the creation of a new independent authority to control the adoption of AI in the public sector, including in the specific context of procurement digitalisation.

Conclusion

An analysis of emerging regulatory approaches in the EU and the UK shows that the adoption of digital technologies by public buyers is largely unregulated and only subjected to voluntary measures, or to open-ended obligations in areas without clear standards (which reduces the prospect of effective mandatory enforcement). The emerging model of AI risk regulation in the EU and UK follows more general trends and points at the consolidation of a (sub)model of risk-based digital procurement governance that strongly relies on self-regulation and self-assessment.

However, given its limited digital capabilities, the public sector is not best placed to control or influence the process of self-regulation, which results in the outsourcing of crucial regulatory tasks to technology vendors and the consequent risk of regulatory capture and suboptimal design of commercially determined governance mechanisms. These risks are compounded by the emerging ‘second party assurance’ model, as self-assessments by technology vendors would not be adequately scrutinised by public buyers, either due to a lack of digital capabilities or the unavoidable structural conflicts of interest of assurance providers with an interest in the use of the technology, or both. This ‘second party’ assurance model does not include adequate challenge mechanisms despite efforts to disclose (parts of) the relevant self-assessments. Such disclosures are constrained by general problems with ‘comply or explain’ information-based governance mechanisms, with the emerging model showing design features that have proven problematic in other contexts (such as corporate governance and financial market regulation). Moreover, there is no clear mechanism to contest the decisions revealed by the disclosures, including in the context of (delayed) specific uses of the technological solutions.

The analysis also shows how a model of third-party assurance or certification would be affected by the same issues of outsourcing of regulatory decisions to private parties, and ultimately would largely replicate the shortcomings of the self-regulatory and self-assessed model. A certification model would thus only generate a marginal improvement over the emerging model—especially given the functional approach to the use of certification and labels in procurement.

Moving past these shortcomings requires assigning the approval of decisions whether to adopt digital technologies and the verification of the related impact assessments to an independent authority: the ‘AI in the Public Sector Authority’ (AIPSA). I will fully develop a proposal for such authority in coming months.

UK REGULATION AFTER BREXIT REVISITED -- PUBLIC PROCUREMENT

‘Negotiating the Future’ and ‘UK in a Changing Europe’ have published a second edition of their interesting report on ‘UK Regulation after Brexit - Revisited’. I had contributed a procurement chapter to the first edition (which has recently been cited in this interesting report for the European Committee of the Regions on the impact on regions and cities of the new trade and economic relations between the EU and the UK). So I was invited to update the analysis, paying special attention to the (slow) progress of the reform of the UK procurement rulebook through the Procurement Bill.

The procurement analysis is below, but I would recommend reading the report in full, as it gives a rather comprehensive picture of how regulation is moving in the UK. For more targeted analysis on regulatory divergence with the EU, this other UK in a Changing Europe ‘Divergence Tracker’ (v5.0) will be of interest.

Public procurement

Public procurement regulation is the set of rules and policies that controls the award of public contracts for works, supplies, and services. Its main goal is to ensure probity and value for money in the spending of public funds – to prevent corruption, collusion, and wastage of taxpayers’ money. It does so by establishing procedural requirements leading to the award of a public contract, and by constraining discretion through requirements of equal treatment, competition, and proportionality. From a trade perspective, procurement law prevents favouritism and protectionism of domestic businesses by facilitating international competition.

In the UK, procurement rules have long been considered an excessive encumbrance on the discretion and flexibility of the public sector, as well as on its ability to deploy ambitious policies with social value, such as buying British products made by British workers. The EU origin of UK domestic rules, which ‘copied out’ EU Directives before Brexit, has long been blamed for perceived rigidity and constraint in the allocation of public contracts, even though a ‘WTO regime’ would look very similar.

Capitalising on that perception during the Brexit process, public procurement was earmarked for reform. Boris Johnson promised a ‘bonfire of procurement red tape to give small firms a bigger slice of Government contracts’. The Johnson Government proposed to significantly rewrite and simplify the procurement rulebook, and to adopt an ambitious ‘Buy British’ policy, which would reserve some public contracts to British firms. However, although procurement was one of the flagship areas for regulatory reform, not much has changed in practical terms. Reforms are perhaps on the horizon in 2023 or 2024, but the extent to which they will result in material divergence from the pre-Brexit EU regulatory baseline remains to be seen.

Post-Brexit changes so far, plus ça change…

To avoid a regulatory cliff edge and speed up its realignment under international trade law, the UK sought independent membership of the World Trade Organisation Government Procurement Agreement (GPA) from 1 January 2021 on terms that replicate and give continuity to its previously indirect membership as an EU Member State. The UK’s current individual obligations under the GPA are the same as before Brexit. Moreover, to maintain market access, the EU-UK Trade and Cooperation Agreement (TCA) replicates obligations under EU law that go beyond the GPA in substantive and procedural elements (‘GPA+’), with only the exception of some contracts for healthcare services. The Free Trade Agreements (FTAs) with Australia and New Zealand, and the envisioned accession of the UK to the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP) foresee further GPA+ market access obligations and increasingly complicated constraints related to trade.

These commitments prevent the adoption of an expansive ‘Buy British’ policy and could in fact restrict it in some industries, although healthcare is explicitly excluded from procurement-related trade negotiations. Despite misleading claims to the contrary in UK government reports, such as the January 2022 Benefits of Brexit report, which gives the impression that Brexit ‘enabled goods and services contracts below £138,760 (central government), £213,477 (sub-central authorities) and £5.3 million (construction throughout the public sector) to be reserved for UK suppliers’ (art 8), official procurement guidance makes clear that the situation remains unchanged. Contracts above those values – those covered by the GPA, the TCA, and Free Trade Agreements – remain open to international competition. In other words, the government has not achieved its stated Brexit aspiration of reserving ‘a bigger slice’ of procurement to domestic businesses.

A similar picture emerges in relation to procedural requirements under procurement law. While the UK Government declared that its aim was to ‘rewrite the rulebook’ (as discussed below), the pre-Brexit ‘copy out’ of EU procurement rules remains in effect as retained EU law. Brexit required some marginal technical adjustments, such as a change in the digital platform where contract opportunities are advertised (high value contract opportunities are now published in the Find a Tender portal rather than the EU’s Official Journal), or the substitution of the European Single Procurement Document (ESPD) with a near-identical Single Procurement Document (SPD). The main practical change following Brexit is the UK’s disconnection from the e-Certis database. The lack of direct access to documentary evidence makes it more difficult and costly for businesses and public sector entities to complete pre-award checks, especially in cases of cross-border EU-UK tendering. However, TCA provisions seek to minimise these documentary requirements (Art 280) and could mitigate the practical implications of the UK no longer being part of the e-Certis system.

With Brexit, the Minister for the Cabinet Office assumed the powers and functions relating to compliance with procurement rules. Even if the bar was already quite low before Brexit, since virtually no infringement procedures had been opened against the UK for procurement breaches, this change is likely to result in a weakening of enforcement due to the lack of separation between Cabinet Office and other central government departments. The shortcomings of current oversight mechanisms are reflected in the proposed reforms discussed below, which include a proposal to create a dedicated Procurement Review Unit.

Future change

The government has been promoting the reform of the UK’s procurement rulebook. Its key elements were included in the 2020 Green Paper Transforming Public Procurement. The aim was ‘to speed up and simplify [UK] procurement processes, place value for money at their heart, and unleash opportunities for small businesses, charities and social enterprises to innovate in public service delivery’, through greater procedural flexibility, commercial discretion, data transparency, centralisation of a debarment mechanism, and regulatory space for non-economic considerations. The Green Paper envisaged the creation of a new Procurement Review Unit with oversight powers, as well as measures to facilitate the judicial review of procurement decisions. Despite the rhetoric, the proposals did not mark a significant departure from the current rules. They were ‘EU law+’, at best. However, a deregulatory approach that introduces more discretion and fewer procedural limitations carries the potential to significantly complicate procurement practice by reducing procedural standardisation and increasing tendering costs.

The government’s 2021 response to the consultation mostly confirmed the approach in the Green Paper and, on 11 May 2022, the Procurement Bill was introduced in the House of Lords, the day after the Queen’s Speech. The Procurement Bill is hardly an exemplar of legislative drafting and it was soon clear that it would need very significant amending. As of 1 September 2022, the Bill had reached its committee stage in the Lords. Five hundred amendments had been put forward, with over three hundred of those originating from the government itself. The amendments affect the ‘transformative’ elements of the Bill, and sometimes there are competing amendments over the same clause that would result in different outcomes. It is difficult to gauge whether the government’s proposals will result in a legislative text that materially deviates from the current rules. It is also unclear to what extent the new Procurement Review Unit will have effective oversight powers, or enforcement powers.

The Procurement Bill, moreover, contains only the bare bones of a future regime. Secondary legislation and volumes of statutory guidance will be adopted and developed once the final legislation is in place. Given the uncertainty, the government has committed to provide at least six months’ notice of the new system. It is therefore unlikely that the new rules will be in place before mid-2023. The roll-out of the new rules will require a major training exercise, but most of the government’s training programme is directed towards the public sector. Businesses can expect to shoulder significant costs associated with the introduction of the new rules.

These legislative changes will not apply UK-wide. Scotland has decided to keep its own separate (EU-derived) procurement rules in place. Divergence between the rules in Scotland and those that apply in the rest of the UK is governed by the 2022 revised Common Framework for Public Procurement. The Common Framework allows for policy divergence, and has already resulted in different national procurement strategies for England, Wales and Scotland, as well as keeping in place a pre-existing policy for Northern Ireland. It is too early to judge, but different policy approaches may in the medium term fragment the UK internal market for public contracts, especially non-central government procurement.

Conclusion

The process of UK procurement reform may be the ‘perfect Brexit story’. Perceived pre-Brexit problems and dissatisfaction were largely a result of long-lasting underinvestment in public sector capacity and training, and of constraints that mostly derive from international treaties rather than EU law. As an EU member state, the UK could have decided to transpose EU rules rather than simply copying them out, thereby building a more comprehensive set of procurement rules that could address some of the shortcomings in the EU framework. It could have funded a better public sector training programme, implemented open procurement data standards and developed analytical dashboards, or centralised debarment decisions. It decided not to opt for any of these measures but blamed the EU for the issues that arose from that decision.

When Brexit rhetoric had to be translated into legal change, reality proved rather stubborn. International trade commitments were simply rolled over, thereby reducing any prospect of a ‘Buy British’ policy. Moreover, the ongoing reform of procurement law is likely to end up introducing more complexity, while only deviating marginally from EU standards in practice. Despite all the effort expended and resource invested, a Brexit dividend in public procurement remains elusive.

'Britannia II' abandoned. A true Brexit procurement story?


In May 2021, the Johnson government was ‘riding high’ after ‘getting Brexit done’ a few months back. Very much in that mood, they announced a project for a new national flagship to promote British businesses around the world. The official press release stressed that the ship would ‘be the first of its kind constructed in the UK, creating jobs and reinvigorating the shipbuilding industry’.

The news got a mixed reception, not least because of the expected cost, potentially well above £200 mn (and later on estimated at £250 mn plus £30 mn contingency). However, the possibility for it to be commissioned in the UK and for the project to act as a boost for the industry was (reluctantly) embraced by the opposition too.

Quite how it would be (legally) ensured that the ship would be constructed in the UK and that the project would generate jobs to reinvigorate the UK shipbuilding industry was unclear, as the UK had already bound itself to the WTO Government Procurement Agreement (GPA). The UK’s GPA schedules of coverage clearly include tenders for ships, boats and floating structures except warships (annex 4). The UK government however planned to sidestep its international commitments by invoking a national security exemption to restrict competition to UK design and build.

The UK government was indeed trying to pass the flagship off as a defence procurement, as the Defence Secretary confirmed that the ‘capital cost of building the National Flagship will fall to the defence budget as part of the Government's wider commitment to the UK shipbuilding industry‘, and the project was led by a ‘National Flagship Taskforce’ set up within the Ministry of Defence (see eg the March 2022 National Shipbuilding Strategy, at 23). At the time, the Minister for Defence Procurement sought to justify this: ‘Under WTO there is a security exemption. The security of the vessel is incredibly key to how we think about it. Given the nature of what it will be doing, it is important that there are security ramifications around that, which is something we take very seriously. There are legitimate reasons, under WTO, why we can direct this to be a UK build, which it will be’ (Q209).

Legally, this is rather risible.

The security exemption does not relate to procurement objects that will need securing once acquired, but rather to procurement objects to be used for security purposes, or procurement objects that are crucial to security interests (eg critical infrastructure). There was no (public) evidence that the ship would meet those requirements. On the contrary, the declared (primary) role for the ship was ‘to promote British businesses around the world’, in particular by hosting trade events. This is not a defence and security use, even if the boat would of course require protecting. The Commons Defence Committee also stressed that it received ‘no evidence of the advantage to the Royal Navy of acquiring the National Flagship‘ (“We’re going to need a bigger Navy”, at [20]).

A trade dispute might well have been in the making…

Anyway. The project has now been abandoned by the Sunak government, despite the £2.5m of taxpayers’ money already spent on the “vanity project”. The trade dispute, if there was to be one, has been averted. But the ‘Britannia II’ story should serve as a reminder of why Brexit continues to be problematic in the field of procurement regulation — with some of it still permeating the proposals in the Procurement Bill and the National Procurement Policy Statement.

Other than the waste of public funds in yet another unnecessary project rather reminiscent of the ‘lost’ British Empire, the story clearly revolves around an uncashable Brexit dividend: protectionism through procurement. This was a clear goal of the reformist agenda in Brexiteer governments, but one that became simply (legally) unattainable with the UK’s accession to the GPA. And the space for a ‘mini’ Buy British procurement policy keeps shrinking under the growing thicket of international trade agreements the UK is seeking to put in place.

The story also reminds us of the disregard for international law and international trade commitments of recent UK Governments, which one can only hope will now be systematically revisited and complied with by the current administration.

Registration open: TECH FIXES FOR PROCUREMENT PROBLEMS?

As previously announced, on 15 December, I will have the chance to discuss my ongoing research on procurement digitalisation with a stellar panel: Eliza Niewiadomska (EBRD), Jessica Tillipman (GW Law), and Sope Williams (Stellenbosch).

The webinar will provide an opportunity to take a hard look at the promise of tech fixes for procurement problems, focusing on key issues such as:

  • The ‘true’ potential of digital technologies in procurement.

  • The challenges arising from putting key enablers in place, such as an adequate big data architecture and access to digital skills in short supply.

  • The challenges arising from current regulatory frameworks and constraints not applicable to the private sector.

  • New challenges posed by data governance and cybersecurity risks.

The webinar will be held on December 15, 2022 at 9:00 am EST / 2:00 pm GMT / 3:00 pm CET-SAST. Full details and registration at: https://blogs.gwu.edu/law-govpro/tech-fixes-for-procurement-problems/.

Unpacking the logic behind the magic in the use of AI for anticorruption screening (re Pastor Sanz, 2022)

‘Network of public contracts, contracting bodies, and awarded companies in Spain’ in 2020 and 2021; Pastor Sanz (2022: 7).

[Note: please don’t be put off by talk of complex algorithms. The main point is precisely that we need to look past them in this area of digital governance!].

I have read a new working paper on the use of ‘blackbox algorithms’ as anti-corruption screens for public procurement: I Pastor Sanz, ‘A New Approach to Detecting Irregular Behavior in the Network Structure of Public Contracts’. The paper aims to detect corrupt practices by exploiting network relationships among participants in public contracts. The paper implements complex algorithms to support graphical analysis to cluster public contracts with the aim of identifying those at risk of corruption. The approach in the paper would create ‘risk maps’ to eg prioritise the investigation of suspected corrupt awards. Such an approach could be seen to provide a magical* solution to the very complex issue of corruption monitoring in procurement (or more generally). In this post, I unpack what is behind that magic and critically assess whether it follows a sound logic on the workings of corruption (which it really doesn’t).
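
For readers unfamiliar with this type of analysis, the sketch below shows, in a deliberately simplified way, what a contract–buyer–company network looks like and how contracts can be grouped by their position in it. It is not the paper’s method (which relies on far more complex graph-embedding and clustering techniques), and all identifiers are invented for illustration.

```python
# Minimal sketch (not the paper's method): public contracts, contracting bodies
# and awarded companies represented as a network, with contracts grouped by
# their position in it. Node names are invented for illustration only.

import networkx as nx
from networkx.algorithms import community

G = nx.Graph()
# Each contract links the body that awarded it and the company that won it.
contracts = [
    ("contract_1", "body_A", "company_X"),
    ("contract_2", "body_A", "company_X"),
    ("contract_3", "body_B", "company_Y"),
    ("contract_4", "body_C", "company_Y"),
    ("contract_5", "body_C", "company_Z"),
]
for contract, body, company in contracts:
    G.add_edge(contract, body)
    G.add_edge(contract, company)

# Group nodes by modularity-based communities; contracts in the same group share
# a similar position in the award network (a crude analogue of the 'risk map').
groups = community.greedy_modularity_communities(G)
for i, group in enumerate(groups):
    print(f"group {i}:", sorted(n for n in group if n.startswith("contract")))
```

The point to retain is that the groups emerge purely from the structure of award relationships; whether any group signals corruption depends entirely on how the contracts are labelled, which is where the problems discussed below arise.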

The paper is technically very complex and I have to admit to not entirely understanding the specific workings of the graphical analysis algorithms. I think most people with an interest in anti-corruption in procurement would also struggle to understand it, and even data scientists (and even the author of the paper) would be unable to fully understand the reasons why any given contract award is flagged as potentially corrupt by the model, or to provide an adequate explanation. In itself, this lack of explainability would be a major obstacle to the deployment of the solution ‘in the real world’ [for discussion, see A Sanchez-Graells, ‘Procurement Corruption and Artificial Intelligence: Between the Potential of Enabling Data Architectures and the Constraints of Due Process Requirements’]. However, perhaps more interestingly, the difficulty in understanding the model creates a significant additional governance risk in itself: intellectual debt.

Intellectual debt as a fast-growing governance risk

Indeed, this type of very complex algorithmic approach creates a significant risk of intellectual debt. As clearly put by Zittrain,

‘Machine learning at its best gives us answers as succinct and impenetrable as those of a Magic 8-Ball – except they appear to be consistently right. When we accept those answers without independently trying to ascertain the theories that might animate them, we accrue intellectual debt’ (J Zittrain, ‘Intellectual Debt. With Great Power Comes Great Ignorance’, 178).

The point here is that, before relying on AI, we need to understand its workings and, more importantly, the underlying theories. In the case of AI for anti-corruption purposes, we should pay particular attention to the way corruption is conceptualised and embedded in the model.

Feeding the machine a corruption logic

In the paper, the model is developed and trained to translate ‘all the public contracts awarded in Spain in the years 2020 and 2021 into a bi-dimensional map with five different groups. These groups summarize the position of a contract in the network and their interactions with their awarded companies and public contracting bodies’ (at 14). Then, the crucial point from the perspective of a corruption logic comes in:

‘To determine the different profiles of the created groups in terms of corruption risk, news about bad practices or corruption scandals in public procurements in the same period (years 2020 and 2021) has been used as a reference. The news collection process has been manual and the 10 most important general information newspapers in Spain in terms of readership have been analyzed. Collected news about irregularities in public procurements identifies suspicions or ongoing investigations about one public contracting body and an awarded company. In these cases, all the contracts granted by the Public Administration to this company have been identified in the sample and flagged as “doubtful” contracts. The rest of the contracts, which means contracts without apparent irregularities or not uncovered yet, have been flagged as “normal” contracts. A total of 765 contracts are categorized as “doubtful”, representing 0.36% of total contracts … contracts belong to only 25 different companies, where only one company collects 508 granted contracts classified as “doubtful”’ (at 14-15, references omitted and emphasis added).
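
Stripped of the surrounding network machinery, the labelling rule described in that passage boils down to something like the following sketch (my own reading of the paper, not the author’s code; identifiers are invented for illustration):

```python
# Minimal sketch of the labelling logic as I read it in the paper: every
# contract awarded to a company named in a corruption news story is flagged
# 'doubtful'; everything else is 'normal'. Identifiers are invented.

contracts = [
    {"id": "contract_1", "company": "company_X"},
    {"id": "contract_2", "company": "company_X"},
    {"id": "contract_3", "company": "company_Y"},
]
# Companies linked to irregularities in manually collected news reports.
companies_in_corruption_news = {"company_X"}

def label(contract: dict) -> str:
    # Company-level, all-or-nothing rule: one news story taints every contract
    # of that company, regardless of the buyer, branch or specific procedure.
    return "doubtful" if contract["company"] in companies_in_corruption_news else "normal"

for c in contracts:
    print(c["id"], label(c))
```

It is this company-level, all-or-nothing logic that drives the concerns discussed next.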

A sound logic?

This reflects a rather cavalier attitude to the absence of reliable corruption data and to difficulties in labelling datasets for that purpose [for discussion, again, see A Sanchez-Graells, ‘Procurement Corruption and Artificial Intelligence: Between the Potential of Enabling Data Architectures and the Constraints of Due Process Requirements’].

Beyond the data issue, this approach also reflects a questionable understanding of the mechanics of corruption. Even without getting into much detail, or trying to be exhaustive, this seems a rather peculiar approach, perhaps rooted in a simplistic intuition of how tenderer-led corruption (such as bribery) could work. It seems to me to have some obvious shortcomings.

First, it treats companies as either entirely corrupt or not at all corrupt, whereas it seems plausible that corrupt companies will not necessarily engage in corruption in every contract. Second, it treats the public buyer as a passive agent that ‘suffers’ the corruption and never seeks or facilitates it. There does not seem to be any consideration of the idea that a public buyer that has been embroiled in a scandal with a given tenderer may itself be suspected of corruption more generally, and be worth looking into. Third, in both cases, it treats institutions as monolithic. This is particularly problematic when it comes to treating the ‘public administration’ as a single entity, especially in an institutional context of multi-level territorial governance such as the Spanish one—with eg potentially very different propensities to corruption in different regions and in relation to different (local/not) players. Fourth, the approach is also monolithic in failing to incorporate the fact that there can be corrupt individuals within organisations and that the participation of different decision-makers in different procedures can be relevant. This can be particularly important in big, diversified companies, where a corrupt branch may have no influence on the behaviour of other branches (or even keep its corruption secret from other branches for rather obvious reasons).

If AI had been used to establish this approach to the identification of potentially corrupt procurement awards, the discussion would need to go on to scrutinise how a model was constructed to generate this hypothesis or insight (or the related dataset). However, in the paper, this approach to ‘conceptualising’ or ‘labelling corruption’ is not supported by machine learning at all, but rather depends on the manual analysis and categorisation of news pieces that are unavoidably unreliable in terms of establishing the existence of corruption, as eg the generation of the ‘scandals’ and the related news reporting is itself affected by a myriad of issues. At best, the approach would be suitable to identify the types of contracts or procurement agents most likely to attract corruption allegations and to have those reported in the media. And perhaps not even that. Of course, the labelling of ‘normal’ for contracts not having attracted such media attention is also problematic.

Final thoughts

All of this shows that we need to scrutinise ‘new approaches’ to the algorithmic detection of corruption (or any other function in procurement governance and more generally) rather carefully. This not only relates to the algorithms and the related assumptions about how socio-technical processes work, but also to the broader institutional and information setting in which they are developed (for related discussion, see here). Of course, this is in part a call for more collaboration between ‘technologists’ (such as data scientists or machine learning engineers) and domain experts. But it is also a call for all scholars and policy-makers to engage in critical assessment of the logic or assumptions that can be buried in technical analysis or explanations and, as such, be difficult to access. Only robust scrutiny of these issues can avoid incurring massive intellectual debt and, perhaps what could be worse, pinning our hopes of improved digital procurement governance on faulty tools.

_____________

* The reference to magic in the title and the introduction relates to Zittrain’s Magic-8 ball metaphor, but also his reference to the earlier observation by Arthur C. Clarke that any sufficiently advanced technology is indistinguishable from magic.

A hot potato? CJEU faces questions on rules applicable to cross-border procurement litigation (C-480/22)

The Court of Justice has received a very interesting preliminary reference from the Austrian Supreme Administrative Court (Verwaltungsgerichtshof) concerning international conflict of laws issues relating to cross-border public procurement involving contracting entities from different Member States (Case C-480/22, EVN Business Service and Others, hereafter the ‘EVN II’ case). The preliminary reference covers issues of judicial competence and applicable procedural law to cross-border challenges of procurement decisions.

Interestingly, the case concerns a negative conflict of jurisdiction, where neither the Bulgarian nor the (first instance) Austrian courts consider themselves competent. The case thus seems to be a bit of a hot potato—although the referring (higher) Austrian court seems interested in nipping the issue in the bud, presumably to avoid a situation of deprivation of procurement remedies that would ultimately violate EU procurement rules and general requirements of access to justice under the Charter of Fundamental Rights (though this is not explicit in the preliminary reference).

The root of the problem is that the conflict of laws dimension of the administrative review of procurement decisions involving contracting authorities from different Member States is not explicitly addressed in the 2014 Procurement Directives. Although the case concerns the interpretation of Article 57 of Directive 2014/25/EU, it is of direct relevance to the interpretation of Article 39 of Directive 2014/24/EU, as the wording of the provisions is near identical (with the exception of references to contracting entities rather than contracting authorities in Art 57 Dir 2014/25/EU, and the suppression of specific public sector rules on awards under framework contracts that are not relevant to this case).

I have been interested in the regulatory gaps left by Art 39 Dir 2014/24/EU for a while. In this post, I address the first two questions posed to the CJEU, as the proposed answers would make it unnecessary to answer the third question. My analysis is based on my earlier writings on the topic: A Sanchez-Graells, ‘The Emergence of Trans-EU Collaborative Procurement: A “Living Lab” for European Public Law’ (2020) 29(1) PPLR 16-41 (hereafter Sanchez-Graells, ‘Living Lab’); and idem, ‘Article 39 - Procurement involving contracting authorities from different Member States’ in R Caranta and A Sanchez-Graells (eds), European Public Procurement. Commentary on Directive 2014/24/EU (Edward Elgar 2021) 436-447 (hereafter Sanchez-Graells, ‘Art 39’).

The ‘EVN II’ case

Based on the facts of the preliminary reference, the legal dispute originates in a ‘public house’ environment within the Austrian EVN group. The Land of Lower Austria owns 51% of EVN AG, which in turn indirectly wholly owns both (i) EVN Business Service GmbH (‘EBS GmbH’), an Austrian central purchasing body (CPB), and (ii) Elektrorazpredelenie YUG EAD (‘EY EAD’), a Bulgarian utilities company. EBS GmbH had the task of procuring services on behalf of and for the account of EY EAD through a framework agreement on the performance of electrical installation works and related construction and dismantling works divided into 36 lots, the place of performance being located in Bulgaria.

Notably, in the invitation to tender, the Landesverwaltungsgericht Niederösterreich (Regional Administrative Court, Lower Austria) was named as the competent body for appeal proceedings/review procedures. Austrian law is stated as the law applicable to the ‘procurement procedure and all claims arising therefrom’, and Bulgarian law as the law applicable to ‘the performance of the contract’.

Two Bulgarian companies unsuccessfully submitted tenders for several lots and subsequently sought to challenge the relevant award decisions. However, those claims were dismissed by the Austrian Regional Administrative Court on grounds of lack of competence. The Court argued that a decision on whether a Bulgarian undertaking may conclude a contract with a contracting entity located in Bulgaria, which is to be performed in Bulgaria and executed in accordance with Bulgarian law, would interfere massively with Bulgaria’s sovereignty, thereby giving rise to tension with the territoriality principle under international law. Moreover, the Court argued that it is not apparent from the Austrian Federal Law on public procurement which procedural law is to be applied to the review procedure.

The case thus raises both an issue of the competence for judicial review and the applicable procedural law. The conflict of jurisdiction is negative because the Bulgarian Supreme Administrative Court confirmed the lack of competence of the Bulgarian procurement supervisory authority.

An avoidable gap in the 2014 Directives

The issue of cross-border use of CPB services is regulated by Art 57(3) Dir 2014/25/EU, which in identical terms to Art 39(3) Dir 2014/24/EU, establishes that ‘The provision of centralised purchasing activities by a central purchasing body located in another Member State shall be conducted in accordance with the national provisions of the Member State where the central purchasing body is located.’

The main contention in the case is whether Article 57(3) of Directive 2014/25 must be interpreted as covering not only the procurement procedure itself, but also the rules governing the review procedure. The argument put forward by the Bulgarian challengers is that if the CPB is required to apply Austrian law from a substantive point of view, the appeal proceedings before the Austrian review bodies must also be conducted in accordance with Austrian procedural law.

As mentioned above, conflict of laws issues are not regulated in the 2014 Procurement Directives, despite explicit rules having been included by the European Commission in the 2011 proposal for a new utilities procurement directive (COM(2011) 895 final, Art 52) and the 2011 proposal for a new public sector procurement directive (COM(2011) 896 final, Art 38). With identical wording, the proposed rule was that

Several contracting [authorities/entities] may purchase works, supplies and/or services from or through a central purchasing body located in another Member State. In that case, the procurement procedure shall be conducted in accordance with the national provisions of the Member State where the central purchasing body is located [Art 52(2)/Art 38(2) of the respective proposals].

Decisions on the award of public contracts in cross-border public procurement shall be subject to the ordinary review mechanisms available under the national law applicable [Art 52(8)/Art 38(8) of the respective proposals].

The 2011 proposals would thus have resolved the conflict of laws in favour of the jurisdiction where the CPB is based. Reference to subjection ‘to the ordinary review mechanisms available under the national law applicable’ would also have encompassed the issue of applicable procedural law. The 2011 proposals also included explicit rules on the mutual recognition and collaboration in the cross-border execution of procurement review decisions (for discussion, see Sanchez-Graells, ‘Living Lab’, 25-26).

However, the 2014 Directives omit such rules. While there are indications in the recitals that the ‘new rules on cross-border joint procurement should determine the conditions for cross-border utilisation of central purchasing bodies and designate the applicable public procurement legislation, including the applicable legislation on remedies’ (rec (82) Dir 2014/25/EU and, identically, rec (73) Dir 2014/24/EU), this is not reflected in the provisions of the Directives. While the position in the recitals could be seen as an interpretive guide to the effect that the system of conflict of laws rules implicit in the Directives is unitary and the location of the CPB is determinative of the jurisdiction and applicable law for the review of its procurement decisions, this is not necessarily a definitive argument, as the CJEU has made clear that recitals may be insufficient to create rules [see C-215/88, Casa Fleischhandel v BALM, EU:C:1989:331, para 31; Sanchez-Graells, ‘Art 39’, para 39.26. For discussion, see S Treumer and E Werlauff, ‘The leverage principle: Secondary Community law as a lever for the development of primary Community law’ (2003) 28(1) European Law Review 124-133].

Questions before the CJEU — and proposed answers

Given the lack of explicit solution in the 2014 Procurement Directives, the CJEU now faces two relevant questions in the EVN II case. The first question concerns the scope of the rules on the provision of cross-border CPB services, which is slightly complicated by the ‘public house’ background of the case. The second question concerns whether the rules subjecting such procurement to the law of the CPB extend to both the legislation applicable to review procedures and the competence of the review body.

Question 1 - contracting authorities/entities from different Member States

In the EVN II case, the CJEU is first asked to establish whether Art 57(3) Dir 2014/25/EU (and, implicitly Art 39(3) Dir 2014/24/EU) should be interpreted as meaning that the provision of centralised purchasing activities by a CPB located in another Member State exists where the contracting entity – irrespective of the question as to the attribution of the control exercised over that contracting entity – is located in a Member State other than that of the CPB. The issue of attribution of control arises from the fact that, in the case at hand, the ‘client’ Bulgarian contracting entity is financially controlled by an Austrian regional authority—which, incidentally, also controls the CPB providing the centralised purchasing services. This raises the question whether the client entity is ‘truly’ foreign, or whether it needs to be reclassified as Austrian on the basis of the financial control.

While I see the logic of the question in terms of the formal applicability of the Directive, from a functional perspective, the question does not make much sense and an answer other than yes would create significant complications.

The question does not make much sense because the aim of the rule in Art 57(3) does not gravitate around the first part of the article: ‘The provision of centralised purchasing activities by a central purchasing body located in another Member State shall be conducted in accordance with the national provisions of the Member State where the central purchasing body is located.’ Rather, the relevance of the rule is in the extension of the law of the CPB to ‘(a) the award of a contract under a dynamic purchasing system; [and] (b) the conduct of a reopening of competition under a framework agreement’ by the ‘client’ (foreign) contracting authority or entity. The purpose of Art 57(3) Dir 2014/25/EU is thus the avoidance of potentially conflicting rules in the creation of cross-border CPB procurement vehicles and in the call-offs from within those vehicles (Sanchez-Graells, ‘Art 39’, paras 39.13-39.15).

Functionally, then, the logic of the entirety of Art 57(3) (and Art 39(3)) rests on the avoidance of a risk of conflicting procurement rules applicable to the cross-border use of CPB services, presumably for the benefit of participating economic operators, as well as in search of broader consistency of the substantive legal framework. Either such a risk exists, because the ‘client’ contracting entity or authority would otherwise be subjected to a different procurement legislation than that applicable to the CPB, or it doesn’t. That is in my view the crucial functional aspect.

If this approach is correct, the issue of (potential) Austrian control over the Bulgarian contracting entity is irrelevant, as the crucial issue is whether it is generally subjected to Bulgarian utilities procurement law or not when conducting covered procurement. There is no information in the preliminary reference, but I would assume it is, primarily because of the formal criteria determining subjection to the domestic implementation of the EU Directives, which tend to be (implicitly) based on the place of location of the relevant entity or authority.

More fundamentally, if this approach is correct, the impingement on Bulgarian sovereignty feared by the Austrian first instance court is a result of EU procurement law. There is no question that the 2014 Directives generate the legal effect that contracting authorities of a given Member State (A) are bound to comply with the procurement legislation of a different Member State (B) when they resort to the services of that State (B) CPB and then implement their own call-off procedures, potentially leading to the award of a contract to an undertaking in their own Member State (A). This potentially puts the legislation of State B in the position of determining whether an undertaking of State A may conclude a contract with a contracting entity located in State A, which is to be performed in State A and executed in accordance with the law of State A. It is thus not easily tenable under EU law that this represents a massive interference with State A’s sovereignty—unless one is willing to challenge the EU’s legal competence for the adoption of the 2014 Directives (see Sanchez-Graells, ‘Living Lab’, 31-33).

A further functional consideration is that the cross-border provision of CPB services does not need to be limited to a two-country setting. If the CPB of country B is eg creating a framework agreement that can be used by contracting authorities and entities from countries A, C, D, and E, the applicability of Art 57(3) Dir 2014/25/EU (and Art 39(3) Dir 2014/24/EU) could not vary for entities from those different countries, or from within a country, depending on a case-by-case analysis of the location of the entities controlling the ‘client’ authorities and entities. In other words, Art 57(3) Dir 2014/25/EU (and Art 39(3) Dir 2014/24/EU) cannot reasonably be of variable application within a single procurement.

Taking the facts of the EVN II case, imagine that in addition to EY EAD, other Bulgarian utilities were also able to draw from the (same lots of the) framework agreement put in place by EBS GmbH. How could it be that Art 57(3) controlled the procurement for the ‘clearly’ Bulgarian utilities, whereas it may not be applicable for the Bulgarian utility controlled by an Austrian authority?

In my view, all of this provides convincing argumentation for the CJEU to answer the first question by clarifying that, from a functional perspective, the need to create a unitary legal regime applicable to procurement tenders led by CPBs where there is a risk of conflicting substantive procurement rules requires interpreting Art 57(3) Dir 2014/25/EU (and Art 39(3) Dir 2014/24/EU) as applicable where the location of ‘client’ contracting authorities or entities is in one or more Member States other than that where the CPB is itself located.

Question 2 - presumption of jurisdiction and applicable law

The second question put to the CJEU builds on the applicability of Art 57(3) Dir 2014/25/EU and asks whether its ‘conflict-of-law rule … according to which the “provision of centralised purchasing activities” by a [CPB] located in another Member State is to be conducted in accordance with the national provisions of the Member State where the [CPB] is located, also cover[s] both the legislation applicable to review procedures and the competence of the review body’. Beyond the interpretive guidance included in the recitals of Dir 2014/25/EU (and Dir 2014/24/EU), as discussed above, I think there are good reasons to answer this question in the affirmative.

The first line of argument is systematic and considers the treatment of conflict of laws situations within Art 57 Dir 2014/25/EU (and 39 Dir 2014/24/EU; see Sanchez-Graells, ‘Living Lab’, 21-24). In that regard, while there is a hard conflict of laws rule in Art 57(3) (and 39(3)) that selects the law of the CPB for the entirety of the procurement procedure, including ‘foreign’ call-offs, the situation is very different in the remainder of the provision. Indeed, when it comes to occasional cross-border joint procurement, in the absence of a binding international agreement, the choice of the applicable substantive procurement legislation is left to the agreement of the participating contracting authorities or entities (Art 57(4) Dir 2014/25/EU, and Art 39(4) Dir 2014/24/EU). Similarly, where the cross-border procurement is carried out through a joint entity, including European Groupings of territorial cooperation, the participating contracting authorities have a choice between the law of the Member State where the joint entity has its registered office, or that of the Member State where the joint entity is carrying out its activities (Art 57(5) Dir 2014/25/EU, and Art 39(5) Dir 2014/24/EU). This indicates that the choice of law rule applicable to the cross-border provision of CPB services leaves much less space (indeed, no space) for the application of a substantive procurement law other than that of the CPB. An extension of this argument supports answering the question in the affirmative and extending the choice of law rule to both the legislation applicable to review procedures and the competence of the review body.

A second line of argument concerns the effectiveness of the available procurement remedies. Such effectiveness would, on the one hand, be increased by a reduced judicial burden of considering foreign procurement law where the location of the CPB determines jurisdiction and procedural applicable law, which can also be expected to be coordinated with substantive procurement law. On the other hand, answering the question in the affirmative would require economic operators to challenge decisions concerning potential contracts with a domestic contracting authority or entity in a foreign court. However, given that the substantive rules are those of the foreign jurisdiction and that they were expected to tender (or tendered) in that jurisdiction, the effect may be relatively limited where the CPB decisions are being challenged—as compared to a challenge of the call-off decision carried out by their domestic contracting authority or entity, but subject to foreign procurement law. In my view, the last set of circumstances is very unlikely, as the applicability of the ‘foreign’ law of the CPB generates a very strong incentive for the CPBs to also carry out the call-off phase on behalf of the client authority or entity (Sanchez-Graells, ‘Art 39’, 39.14).

Overall, in my view, the CJEU should answer the second question by clarifying that the reference to the national provisions of the Member State where the CPB is located in Art 57(3) Dir 2014/25/EU (and Art 39(3) Dir 2014/24/EU) also covers both the legislation applicable to review procedures and the competence of the review body.

Some further thoughts

Beyond the specific issues before the CJEU, the EVN II case raises broader concerns around the flexible contractualised approach (not to say the absence of an approach) to conflict of laws issues in the 2014 Procurement Directives—which leave significant leeway to participating contracting authorities and entities to craft the applicable legal regime.

While the situation can be relatively easy to sort out with an expansive interpretation of Art 57(3) Dir 2014/25/EU and Art 39(3) Dir 2014/24/EU in the relatively simple case of the cross-border provision of CPB services (as above), these issues will be much more complex in other types of procurement involving contracting authorities from (multiple) different Member States. The approach followed by the first instance Austrian court in EVN II seems to me reflective of more generalised judicial attitudes towards unregulated conflict of laws situations, where courts can be reluctant to simply abide by whatever is published in the relevant procurement notices—as was the case in EVN II, where the invitation to tender was explicit about the allocation of jurisdiction and the selection of applicable procedural law and, that notwithstanding, the first instance court found issues on both grounds.

This can potentially be a major blow to the ‘contractualised’ approach underpinning the 2014 Procurement Directives, especially where situations arise that require domestic courts of a Member State to make decisions imposing liability on contracting authorities of another Member State, and the subsequent need to enforce that decision. The issue of the conflict of laws dimension of the administrative review of procurement decisions involving contracting authorities from different Member States will thus not be entirely addressed by the Judgment of the CJEU in EVN II, although the CJEU could hint at potential solutions, depending on how much it decides to rely on the 2011 proposals as a stepping stone towards an expansive interpretation of the current provisions—which is by no means guaranteed, as the suppression of explicit rules could as easily be interpreted as a presumption or as a rejection of those rules by the CJEU.

It seems clearer than ever that the procurement remedies Directives need to be reformed to create a workable and transparent system for the conflict of laws dimension of the administrative review of procurement decisions involving contracting authorities from different Member States, as well as explicit rules on cross-border enforcement of those decisions (Sanchez-Graells, ‘Living Lab’, 39-40).

Save the date: 15 Dec, Tech fixes for procurement problems?

If you are interested in procurement digitalisation, please save the date for an online workshop on ‘Tech fixes for procurement problems?’ on 15 December 2022, 2pm GMT. I will have the chance to discuss my ongoing research (scroll down for a few samples) with a stellar panel: Eliza Niewiadomska (EBRD), Jessica Tillipman (GW Law), and Sope Williams (Stellenbosch). We will also have plenty of time for a conversation with participants. Do not let other commitments get in the way of joining the discussion!

More details and registration coming soon. For any questions, please email me: a.sanchez-graells@bristol.ac.uk.

Emerging risks in digital procurement governance

In a previous blog post, I drew a technology-informed feasibility boundary to assess the realistic potential of digital technologies in the specific context of procurement governance. I suggested that the potential benefits from the adoption of digital technologies within that feasibility boundary had to be assessed against new governance risks and requirements for their mitigation.

In a new draft chapter (num 8) for my book project, I now explore the main governance risks and legal obligations arising from the adoption of digital technologies, which revolve around data governance, algorithmic transparency, technological dependency, technical debt, cybersecurity threats, the risks stemming from the long-term erosion of the skills base in the public sector, and difficult trade-offs due to the uncertainty surrounding immature and still changing technologies within an also evolving regulatory framework.

The analysis is not carried out in a vacuum, but in relation to the increasingly complex framework of EU digital law, including: the Open Data Directive; the Data Governance Act; the proposed Data Act; the NIS 2 Directive on cybersecurity measures, including its interaction with the Cybersecurity Act, and the proposed Directive on the resilience of critical entities and Cyber Resilience Act; as well as some aspects of the proposed EU AI Act.

This post provides a summary of my main findings, on which I will welcome any comments: a.sanchez-graells@bristol.ac.uk. The full draft chapter is free to download: A Sanchez-Graells, ‘Identifying Emerging Risks in Digital Procurement Governance’ to be included in A Sanchez-Graells, Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming). Available at SSRN: https://ssrn.com/abstract=4254931.

Current and imminent digital governance obligations for public buyers

Public buyers already shoulder digital governance obligations, and will very soon face further ones, even if they do not directly engage with digital technologies. These concern both data governance and cybersecurity obligations.

Data governance obligations

The Open Data Directive imposes an obligation to facilitate access to and re-use of procurement data for commercial or non-commercial purposes, and generates the starting position that data held by public buyers needs to be made accessible. Access is however excluded in relation to data subject to third-party rights, such as data protected by intellectual property rights (IPR), or data subject to commercial confidentiality (including business, professional, or company secrets). Moreover, in order to ensure compliance with the EU procurement rules, access should also be excluded to data subject to procurement-related confidentiality (Art 21 Dir 2014/24/EU), and data the disclosure of which should be withheld because ‘the release of such information would impede law enforcement or would otherwise be contrary to the public interest … or might prejudice fair competition between economic operators’ (Art 55 Dir 2014/24/EU). Compliance with the Open Data Directive thus cannot result in a system where all procurement data becomes accessible.

The Open Data Directive also falls short of requiring that access is facilitated through open data, as public buyers are under no active obligation to digitalise their information and can simply allow access to the information they hold ‘in any pre-existing format or language’. However, this will change with the entry into force of the rules on eForms (see here). eForms will require public buyers to hold (some) procurement information in digital format. This will trigger the obligation under the Open Data Directive to make that information available for re-use ‘by electronic means, in formats that are open, machine-readable, accessible, findable and re-usable, together with their metadata’. Moreover, procurement data that is not captured by the eForms but in other ways (eg within the relevant e-procurement platform) will also be subject to this regime and, where making that information available for re-use by electronic means involves no ‘disproportionate effort, going beyond a simple operation’, it is plausible that the obligation of publication by electronic means will extend to such data too. This will potentially significantly expand the scope of open procurement data obligations, but it will be important to ensure that it does not result in excessive disclosure of third-party data or competition-sensitive data.
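
To make the practical implication more concrete, the sketch below illustrates, in Python, the kind of machine-readable, metadata-enriched record that publication for re-use ‘by electronic means’ points towards. The field names, values and licence are my own illustrative assumptions; they do not reproduce the eForms schema or any official open data standard.

    import json
    from datetime import date

    # Hypothetical, simplified record of a contract award notice held by a
    # public buyer. All field names are illustrative only.
    notice = {
        "notice_id": "2023-XYZ-001",            # assumed internal identifier
        "buyer": "Example Contracting Authority",
        "procedure_type": "open",
        "cpv_code": "48000000",                  # software and information systems
        "award_value_eur": 250000.00,
        "award_date": date(2023, 10, 25).isoformat(),
        "metadata": {
            "format": "application/json",        # open, machine-readable format
            "licence": "CC-BY-4.0",              # re-use licence (illustrative)
            "last_updated": date(2023, 10, 26).isoformat(),
            "redactions": ["trade_secret_annex"],  # third-party data withheld
        },
    }

    # Publication 'in formats that are open, machine-readable, accessible,
    # findable and re-usable, together with their metadata' could then be as
    # simple as exposing such records electronically.
    print(json.dumps(notice, indent=2))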

Some public buyers may want to go further in facilitating (controlled) access to procurement data not susceptible of publication as open data. In that case, they will have to comply with the requirements of the Data Governance Act (and the Data Act, if adopted). In particular, they will need to ensure that, despite authorising access to the data, ‘the protected nature of data is preserved’. In the case of commercially confidential information, including trade secrets or content protected by IPR, this can require ensuring that the data has been ‘modified, aggregated or treated by any other method of disclosure control’. Where ‘anonymising’ information is not possible, access can only be given with the permission of the third party, and in compliance with the applicable IPR, if any. The Data Governance Act explicitly imposes liability on the public buyer if it breaches the duty not to disclose third-party data, and it also explicitly requires that data access complies with EU competition law.

This shows that public buyers have an inescapable data governance role that generates tensions in the design of open procurement data mechanisms. It is simply not possible to create a system that makes all procurement data open. Data governance requires the careful management of a system of multi-tiered access to different types of information at different times, by different stakeholders and under different conditions (as I already proposed a few years ago, see here). While the need to balance procurement transparency and the protection of data subject to the rights of others and competition-sensitive data is not a new governance challenge, the digital management of this information creates heightened risks to the extent that data management solutions tend to default towards open access. Moreover, the assessment of the potential competition impact of data disclosure can be a moving target. The risk of distortions of competition is heightened by the possibility that the availability of data allows for the deployment of technology-supported forms of collusive behaviour (as well as corrupt behaviour).
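
By way of a purely illustrative sketch, multi-tiered access could be operationalised along the following lines. The tiers, stakeholder roles and field classifications are my own assumptions, not a prescribed model, and a real system would need far more granular rules (and a legal basis for each tier).

    # Minimal sketch of multi-tiered access to procurement data.
    FIELD_TIERS = {
        "award_value": "public",                  # open data once the award is published
        "tender_evaluation_notes": "oversight",   # auditors and review bodies only
        "trade_secret_annex": "restricted",       # only with third-party permission
    }

    ROLE_CLEARANCE = {
        "citizen": {"public"},
        "auditor": {"public", "oversight"},
        "data_protection_officer": {"public", "oversight", "restricted"},
    }

    def can_access(role: str, field: str, award_published: bool) -> bool:
        """Return True if the given role may access the field at this point in time."""
        tier = FIELD_TIERS.get(field, "restricted")  # default to the most protective tier
        if tier == "public" and not award_published:
            # even 'public' data may only be released at the right procedural moment
            return role != "citizen"
        return tier in ROLE_CLEARANCE.get(role, set())

    # Example: a citizen can see the award value after publication, but not the
    # evaluation notes; an auditor can see both.
    print(can_access("citizen", "award_value", award_published=True))              # True
    print(can_access("citizen", "tender_evaluation_notes", award_published=True))  # False
    print(can_access("auditor", "tender_evaluation_notes", award_published=True))  # True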

Cybersecurity obligations

Most public buyers will face increased cybersecurity obligations once the NIS 2 Directive enters into force. The core substantive obligation will be a mandate to ‘take appropriate and proportionate technical, operational and organisational measures to manage the risks posed to the security of network and information systems which those entities use for their operations or for the provision of their services, and to prevent or minimise the impact of incidents on recipients of their services and on other services’. This will require a detailed assessment of what is proportionate to the cybersecurity exposure of a public buyer.

In that analysis, the public buyer will be able to take into account ‘the state of the art and, where applicable, relevant European and international standards, as well as the cost of implementation’, and in ‘assessing the proportionality of those measures, due account shall be taken of the degree of the entity’s exposure to risks, its size, the likelihood of occurrence of incidents and their severity, including their societal and economic impact’.

Public buyers may not have the ability to carry out such an assessment with internal capabilities, which immediately creates a risk of outsourcing of the cybersecurity risk assessment, as well as other measures to comply with the related substantive obligations. This can generate further organisational dependency on outside capability, which can itself be a cybersecurity risk. As discussed below, imminent cybersecurity obligations heighten the need to close the current gaps in digital capability.

Increased governance obligations for public buyers ‘going digital’

Public buyers that are ‘going digital’ and experimenting with or deploying digital solutions face increased digital governance obligations. Given the proportionality of the cybersecurity requirements under the NIS 2 Directive (above), public buyers that use digital technologies can expect to face more stringent substantive obligations. Moreover, the adoption of digital solutions generates new or increased risks of technological dependency, of two main types. The first type refers to vendor lock-in and interoperability, and primarily concerns the increasing need to develop advanced strategies to manage IPR, algorithmic transparency, and technical debt—which could largely be side-stepped by an ‘open source by default’ approach. The second concerns the erosion of the skills base of the public buyer as technology replaces the current workforce, which generates intellectual debt and operational dependency.

Open Source by Default?

The problem of technological lock-in is well understood, even if generally inadequately or insufficiently managed. However, the deployment of Artificial Intelligence (AI), and Machine Learning (ML) in particular, raises the additional issue of managing algorithmic transparency in the context of technological dependency. This generates specific challenges in relation to the administration of public contracts and the obligation to create competition in their (re)tendering. Without access to the algorithm’s source code, it is nigh impossible to ensure a level playing field in the tender of related services, as well as in the re-tendering of the original contract for the specific ML or AI solution. This was recognised by the CJEU in a software procurement case (see here), which implies that, under EU law, public buyers are under an obligation to ensure that they have access and dissemination rights over the source code. This goes beyond emerging standards on algorithmic transparency, such as the UK’s, or what would be required if the EU AI Act were applicable, as reflected in the draft contract clauses for AI procurement. This creates a significant governance risk that requires explicit and careful consideration by public buyers, and which points to the need to embed algorithmic transparency requirements as a pillar of technological governance related to the digitalisation of procurement.

Moreover, the development of digital technologies also creates a new wave of lock-in risks, as digital solutions are hardly off-the-shelf and can require a high level of customisation or co-creation between the technology provider and the public buyer. This creates the need for careful consideration of the governance of IPR allocation—with some of the guidance seeking to promote leaving IPR with the vendor needing careful reconsideration. A nuanced approach is required, as well as coordination with other legal regimes (eg State aid) where IPR is left with the contractor. Following some recent initiatives by the European Commission, an ‘open source by default’ approach would be suitable, as there can be high value derived from using and reusing common solutions, not only in terms of interoperability and a reduction of total development costs—but also in terms of enabling the emergence of communities of practice that can contribute to the ongoing improvement of the solutions on the basis of pooled resources, which can in turn mitigate some of the problems arising from limited access to digital skills.

Finally, it should be stressed that most of these technologies are still emergent or immature, which generates additional governance risks. The adoption of such emergent technologies generates technical debt. Technical debt is not solely a financial issue, but a structural barrier to digitalisation. Technical debt risks stress the importance of the adoption of the open source by default approach mentioned above, as open source can facilitate the progressive collective repayment of technical debt in relation to widely adopted solutions.

(Absolute) technological dependency

As mentioned, a second source of technological dependency concerns the erosion of the skills base of the public buyer as technology replaces the current workforce. This is different from dependence on a given technology (as above), and concerns dependence on any technological solution to carry out functions previously undertaken by human operators. This can generate two specific risks: intellectual debt and operational dependency.

In this context, intellectual debt refers to the loss of institutional knowledge and memory resulting from eg the participation in the development and deployment of the technological solutions by agents no longer involved with the technology (eg external providers). There can be many forms of intellectual debt risk, and some can be mitigated or excluded through eg detailed technical documentation. Other forms of intellectual debt risk, however, are more difficult to mitigate. For example, situations where reliance on a technological solution (eg robotic process automation, RPA) erases institutional knowledge of the reason why a specific process is carried out, as well as how that process is carried out (eg why a specific source of information is checked for the purposes of integrity screening and how that is done). Mitigating against this requires keeping additional capability and institutional knowledge (and memory) to be able to explain in full detail what specific function the technology is carrying out, why, how that is done, and how that would be done in the absence of the technology (if it could be done at all). To put it plainly, it requires keeping the ability to ‘do it by hand’—or at the very least to be able to explain how that would be done.

Where it would be impossible or unfeasible to carry out the digitised task without using technology, digitalisation creates absolute operational dependency. Mitigating against such operational dependency requires an assessment of ‘system critical’ technological deployments without which it is not possible to carry out the relevant procurement function and, most likely, to deploy measures to ensure system resilience (including redundancy if appropriate) and system integrity (eg in relation to cybersecurity, as above). It is however important to acknowledge that there will always be limits to ensuring system resilience and integrity, which should raise questions about the desirability of generating situations of absolute operational dependency. While this may be less relevant in the context of procurement governance than in other contexts, it can still be an important consideration to factor into decision-making as technological practice can fuel a bias towards (further) technological practice that can then help support unquestioned technological expansion. In other words, it will be important to consider what are the limits of absolute technological delegation.

The crucial need to boost in-house digital skills in the public sector

The importance of digital capabilities to manage technological governance risks emerges as a running theme. The specific governance risks identified in relation to data and systems integrity, including cybersecurity risks, as well as the need to engage in sophisticated management of data and IPR, show that skills shortages are problematic in the ongoing use and maintenance of digital solutions, as their implementation does not diminish, but rather expands the scope of technology-related governance challenges.

There is an added difficulty in the fact that the likelihood of materialisation of those data, systems integrity, and cybersecurity risks grows with reduced digital capabilities, as the organisation using digital solutions may be unable to identify and mitigate them. It is not only that the technology carries risks that are either known knowns or known unknowns (as above), but also that the organisation may experience them as unknown unknowns due to its limited digital capability. Limited digital skills compound those governance risks.

There is a further risk that digitalisation and the related increase in digital capability requirements can embed an element of (unacknowledged) organisational exposure that mirrors the potential benefits of the technologies. While technology adoption can augment the organisation’s capability (eg by reducing administrative burdens through automation), this also makes the entire organisation dependent on its (disproportionately small) digital capabilities. This makes the organisation particularly vulnerable to the loss of limited capabilities. From a governance perspective, this places sustainable access to digital skills as a crucial element of the critical vulnerabilities and resilience assessment that should accompany all decisions to deploy a digital technology solution.

A plausible approach would be to seek to mitigate the risk of insufficient access to in-house skills through eg the creation of additional, standby or redundant contracted capability, but this would come with its own costs and governance challenges. Moreover, the added complication is that the digital skills gap that exposes the organisation to these risks in the first place, can also fuel a dynamic of further reliance on outside capabilities (from consultancy firms) beyond the development and adoption of those digital solutions. This has the potential to exacerbate the long-term erosion of the skills base in the public sector. Digitalisation heightens the need for the public sector to build up its expertise and skills, as the only way of slowing down or reducing the widening digital skills gap and ensuring organisational resilience and a sustainable digital transition.

Conclusion

Public buyers already face significant digital governance obligations, and those and the underlying risks can only increase (potentially, very significantly) with further progress in the path of procurement digitalisation. Ultimately, to ensure adequate digital procurement governance, it is not only necessary to take a realistic look at the potential of the technology and the required enabling factors (see here), but also to embed a comprehensive mechanism of risk assessment in the process of technological adoption, which requires enhanced public sector digital capabilities, as stressed here. Such an approach can mitigate against the policy irresistibility that surrounds these technologies (see here) and contribute to a gradual and sustainable process of procurement digitalisation. The ways in which such risk assessment should be carried out require further exploration, including consideration of whether to subject the adoption of digital technologies for procurement governance to external checks (see here). This will be the object of forthcoming analysis.

Will public buyers be covered by new EU cybersecurity requirements? (Spoiler alert: some will, all should)

EU legislators have reached provisional agreement on a significant revamp of cybersecurity rules, likely to enter into force at some point in late 2024 or 2025. The future Directive (EU) 2022/... of the European Parliament and of the Council of .... on measures for a high common level of cybersecurity across the Union, repealing Directive (EU) 2016/1148 (NIS 2 Directive) will significantly expand the obligations imposed on Member States and on ‘essential’ and ‘important’ entities.

Given the importance of managing cybersecurity as public buyers complete their (late) transition to e-procurement, or further progress down the procurement digitalisation road, the question arises whether the NIS 2 Directive will apply to public buyers. I address that issue in this blog post.

Conflicting definitions?

Unlike other recent legislative instruments, which adopt the definitions under the EU procurement rules to establish the scope of the ‘public sector bodies’ to which they apply (such as the Open Data Directive, Art 2(1) and (2); or the Data Governance Act, Art 2(17) and (18)), the NIS 2 Directive establishes its own approach. Art 4(23)* defines ‘public administration entities’ as:

an entity recognised as such in a Member State in accordance with national law, that complies with the following criteria:

(a) it is established for the purpose of meeting needs in the general interest and does not have an industrial or commercial character;

(b) it has legal personality or it is entitled by law to act on behalf of another entity with legal personality;

(c) it is financed, for the most part, by the State, regional authority, or by other bodies governed by public law; or it is subject to management supervision by those authorities or bodies; or it has an administrative, managerial or supervisory board, more than half of whose members are appointed by the State, regional authorities, or by other bodies governed by public law;

(d) it has the power to address to natural or legal persons administrative or regulatory decisions affecting their rights in the cross-border movement of persons, goods, services or capital.

Procurement lawyers will immediately raise their eyebrows. Does the definition capture all contracting authorities covered by the EU procurement rules?

Some gaps

Let’s take Directive 2014/24/EU for comparison [see A Sanchez-Graells, ‘Art 2’ in R Caranta and idem (eds), European Public Procurement. Commentary on Directive 2014/24/EU (Edward Elgar 2021) 2.06-2.18].

Under Arts 1(1) and 2(1)(2), it is clear that Directive 2014/24/EU applies to ‘contracting authorities’, defined as ‘the State, regional or local authorities, bodies governed by public law or associations formed by one or more such authorities or one or more such bodies governed by public law’.

Regarding the ‘State, regional or local authorities’, it seems clear that the NIS 2 Directive in principle covers them (more below), to the extent that they are recognised as a ‘public administration entity’ under national law. This does not seem problematic, although it will of course depend on the peculiarities of each Member State (not least because Directive 2014/24/EU operates a list system and refers to Annex I to establish what are central government authorities).

‘Bodies governed by public law’ are also largely covered by the definition of the NIS 2 Directive, as the material requirements of the definition map on to those under Art 2(1)(4) of Directive 2014/24/EU. However, there are two key deviations.

The first one concerns the addition of the requirement (d) that the body must have ‘the power to address to natural or legal persons administrative or regulatory decisions affecting their rights in the cross-border movement of persons, goods, services or capital’. In my view, this is unproblematic, as all decisions concerning a procurement process covered by the EU rules have the potential to affect free movement rights and, to the extent that the body governed by public law can make those decisions, it meets the requirement.

The second deviation is that, under the ‘financing and control’ criterion (c), the NIS 2 Directive does not include finance or control by local authorities. This leaves out local-level bodies governed by public law, but only those that are not financed or influenced by other (local-level) bodies governed by public law (which is odd). However, this is aligned with the fact that the NIS 2 Directive does not cover local public administration entities (Art 2(2a)* NIS 2 Directive), although it foresees that Member States can extend its regime to local authorities. In such a case, the definitions would have to be carefully reworked in the process of domestic transposition.

A final issue is then whether the definition in the NIS 2 Directive covers ‘associations formed by one or more [central or sub-central] authorities or one or more such bodies governed by public law’. Here the position is much less clear, and it seems to depend on a case-by-case assessment of whether a given association meets all requirements under the definition, which can prove problematic and raise difficult interpretive questions—despite eg having extended the legal personality criterion (b) to the possibility of being ‘entitled by law to act on behalf of another entity with legal personality’. It is thus possible that some associations will not be covered by the NIS 2 Directive, eg if their status under domestic law is unclear.

More gaps

Although the NIS 2 Directive definition in principle covers the State and regional authorities (as above), it should be stressed that the scope of application of the Directive only extends to public administration entities of central governments, and to those at regional level ‘which following a risk based assessment, provide services the disruption of which could have a significant impact on critical economic or societal activities’ (Art 2(2a)* NIS 2 Directive).

In relation to regional procurement authorities, then, the question arises whether Member States will consider that the disruption of their activities ‘could have a significant impact on [other] critical economic or societal activities’. I submit that this will necessarily be the case, as the procurement function enables the performance of the general activities of the public administration and the provision of public services. However, there seems to be some undesirable legal wriggle room that could create legal uncertainty.

Moreover, the NIS 2 Directive does not apply ‘to public administration entities that carry out their activities in the areas of defence, national security, public security, or law enforcement, including the investigation, detection and prosecution of criminal offences’ (Art 2(3a)* NIS 2 Directive). This is another marked deviation from the treatment of entities in the defence and security sectors under the procurement rules [see B Heuninckx, ‘Art 15’ in Caranta and Sanchez-Graells, Commentary, above].

At a minimum, the reference to entities carrying out ‘the investigation, detection and prosecution of criminal offences’ raises questions on the applicability of the NIS 2 Directive to public buyers formally inserted in eg the Ministry of Justice and/or the judiciary, at Member State level. Whether this is a relevant practical issue will depend on the relevant national context, but it would have been preferable to take an approach that directly mapped onto the scope of Directive 2009/81/EC in determining the relevant activities.

Why is this a problem?

The potential inconsistencies between the scope of application of the NIS 2 Directive and the EU procurement rules are relevant in the context of the broader digitalisation of procurement, but also in the narrow context of the entry into force of the new rules on eForms (see here) and the related obligations under the Open Data Directive, which will require public buyers to make data collected by eForms available in electronic format.

Cutting a long story short, it has been stressed by eg the OECD that opening information systems to make data accessible may ‘expose parts of an organisation to digital security threats that can lead to incidents that disrupt the availability, integrity or confidentiality of data and information systems on which economic and social activities rely’. Moreover, given that the primary purpose of making procurement data open is to enable the development of AI solutions, such risks need to be considered in that context and cybersecurity of data sources has been raised as a key issue by eg the European Union Agency for Cybersecurity (ENISA).

Given that all procurement data systems will be interconnected (via APIs), and that they can provide the data architecture for other AI solutions, cybersecurity risks are a systemic issue that would benefit from a systemic approach. Having some (or most) but not all public buyers comply with high standards of cybersecurity may not eliminate significant vulnerabilities if the remaining points of access generate relevant cybersecurity risks.

How to fix it?

In my view, Member States should extend the obligations under the NIS 2 Directive not only to their local ‘public administration entities’, as envisaged by the Directive, but to all entities covered by significant data governance rules, such as the Open Data Directive. This would ensure high levels of cybersecurity to protect the integrity of the new procurement open data systems. It would also have the added benefit of ensuring alignment with the EU procurement rules and, in that regard, it would contribute to a clear regulatory framework for the governance of digital procurement across the EU.

_________________________

* Please note that Articles in the provisional text of the NIS 2 Directive will have to be renumbered.

Digital procurement governance: drawing a feasibility boundary

In the current context of generalised quick adoption of digital technologies across the public sector and strategic steers to accelerate the digitalisation of public procurement, decision-makers can be captured by techno hype and the ‘policy irresistibility’ that can ensue from it (as discussed in detail here, as well as here).

To moderate those pressures and guide experimentation towards the successful deployment of digital solutions, decision-makers must reassess the realistic potential of those technologies in the specific context of procurement governance. They must also consider which enabling factors must be put in place to harness the potential of the digital technologies—which primarily relate to an enabling big data architecture (see here). Combined, the data requirements and the contextualised potential of the technologies will help decision-makers draw a feasibility boundary for digital procurement governance, which should inform their decisions.

In a new draft chapter (num 7) for my book project, I draw such a technology-informed feasibility boundary for digital procurement governance. This post provides a summary of my main findings, on which I will welcome any comments: a.sanchez-graells@bristol.ac.uk. The full draft chapter is free to download: A Sanchez-Graells, ‘Revisiting the promise: A feasibility boundary for digital procurement governance’ to be included in A Sanchez-Graells, Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming). Available at SSRN: https://ssrn.com/abstract=4232973.

Data as the main constraint

It will hardly be surprising to stress again that high quality big data is a pre-requisite for the development and deployment of digital technologies. All digital technologies of potential adoption in procurement governance are data-dependent. Therefore, without adequate data, there is no prospect of successful adoption of the technologies. The difficulties in generating an enabling procurement data architecture are detailed here.

Moreover, new data rules only regulate the capture of data for the future. This means that it will take time for big data to accumulate. Accessing historical data would be a way of building up (big) data and speeding up the development of digital solutions. Moreover, in some contexts, such as in relation to very infrequent types of procurement, or in relation to decisions concerning previous investments and acquisitions, historical data will be particularly relevant (eg to deploy green policies seeking to extend the use life of current assets through programmes of enhanced maintenance or refurbishment; see here). However, there are significant challenges linked to the creation of backward-looking digital databases, not only relating to the cost of digitisation of the information, but also to technical difficulties in ensuring the representativeness and adequate labelling of pre-existing information.

An additional issue to consider is that a number of governance-relevant insights can only be extracted from a combination of procurement and other types of data. This can include sources of data on potential conflict of interest (eg family relations, or financial circumstances of individuals involved in decision-making), information on corporate activities and offerings, including detailed information on products, services and means of production (eg in relation with licensing or testing schemes), or information on levels of utilisation of public contracts and satisfaction with the outcomes by those meant to benefit from their implementation (eg users of a public service, or ‘internal’ users within the public administration).

To the extent that the outside sources of information are not digitised, or not in a way that is (easily) compatible or linkable with procurement information, some data-based procurement governance solutions will remain undeliverable. Some developments in digital procurement governance will thus be determined by progress in other policy areas. While there are initiatives to promote the availability of data in those settings (eg the EU’s Data Governance Act, the Guidelines on private sector data sharing, or the Open Data Directive), the voluntariness of many of those mechanisms raises important questions on the likely availability of data required to develop digital solutions.

Overall, there is no guarantee that the data required for the development of some (advanced) digital solutions will be available. A careful analysis of data requirements must thus be a point of concentration for any decision-maker from the very early stages of considering digitalisation projects.

Revised potential of selected digital technologies

Once (or rather, if) that major data hurdle is cleared, the possibilities realistically brought by the functionality of digital technologies need to be embedded in the procurement governance context, which results in the following feasibility boundary for the adoption of those technologies.

Robotic Process Automation (RPA)

RPA can reduce the administrative costs of managing pre-existing digitised and highly structured information in the context of entirely standardised and repetitive phases of the procurement process. RPA can reduce the time invested in gathering and cross-checking information and can thus serve as a basic element of decision-making support. However, RPA cannot increase the volume and type of information being considered (other than in cases where some available information was not being taken into consideration due to eg administrative capacity constraints), and it can hardly be successfully deployed in relation to open-ended or potentially contradictory information points. RPA will also not change or improve the processes themselves (unless they are redesigned with a view to deploying RPA).
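
To illustrate the point, the sketch below shows the sort of mechanical cross-check an RPA deployment might automate. The registers, identifiers and field names are invented for illustration; a real implementation would depend on the specific systems and data sources in place, and the tool could only flag issues for a human caseworker rather than weigh open-ended or contradictory information.

    # Minimal sketch of a rule-based cross-check over structured registers.
    company_register = {
        "BG123456789": {"name": "Example Ltd", "active": True},
        "BG987654321": {"name": "Another OOD", "active": False},
    }

    tax_debt_register = {"BG987654321"}  # identifiers with outstanding tax debt (hypothetical)

    def basic_exclusion_check(tenderer_id: str) -> list[str]:
        """Return a list of issues found by mechanically cross-checking registers."""
        issues = []
        record = company_register.get(tenderer_id)
        if record is None:
            issues.append("not found in company register")
        elif not record["active"]:
            issues.append("company registered as inactive")
        if tenderer_id in tax_debt_register:
            issues.append("outstanding tax debt recorded")
        return issues

    for tenderer in ("BG123456789", "BG987654321", "BG000000000"):
        print(tenderer, basic_exclusion_check(tenderer) or "no issues flagged")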

This generates a clear feasibility boundary for RPA deployment, which will generally have as its purpose the optimisation of the time available to the procurement workforce to engage in information analysis rather than information sourcing and basic checks. While this can clearly bring operational advantages, it will hardly transform procurement governance.

Machine Learning (ML)

Developing ML solutions will pose major challenges, not only in relation to the underlying data architecture (as above), but also in relation to regulatory and governance requirements specific to public procurement. Where the operational management of procurement does not diverge from the equivalent function in the (less regulated) private sector, it will be possible to see the adoption or adaptation of similar ML solutions (eg in relation to category spend management). However, where there are regulatory constraints on the conduct of procurement, the development of ML solutions will be challenging.

For example, the need to ensure the openness and technical neutrality of procurement procedures will limit the possibilities of developing recommender systems other than in pre-procured closed lists or environments based on framework agreements or dynamic purchasing systems underpinned by electronic catalogues. Similarly, the intended use of the recommender system may raise significant legal issues concerning eg the exercise of discretion, which can limit its deployment to areas of information exchange or to merely suggestion-based tasks that could hardly replace current processes and procedures. Given the limited utility (or acceptability) of collaborative filtering recommender solutions (which is the predominant type in consumer-facing private sector uses, such as Netflix or Amazon), there are also constraints on the generality of content-based recommender systems for procurement applications, both at tenderer and at product/service level. This raises a further feasibility issue, as the functional need to develop a multiplicity of different recommenders not only reopens the issue of data sufficiency and adequacy, but also raises questions of (economic and technical) viability. Recommender systems would mostly only be feasible in highly centralised procurement settings. This could create a push for further procurement centralisation that is not neutral from a governance perspective, and that can certainly generate significant competition issues of a similar nature to, but perhaps of a different order of magnitude than, procurement centralisation in a less digitally advanced setting. This should be carefully considered, as the knock-on effects of the implementation of some ML solutions may only emerge down the line.
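
For illustration only, a content-based recommender over a (hypothetical) electronic catalogue could be as simple as ranking catalogue items by similarity to a profile built from the buyer’s previous call-offs. The items, features and profile below are invented; no real catalogue or scoring method is implied.

    # Minimal sketch of a content-based recommender over a framework catalogue.
    from math import sqrt

    catalogue = {
        "laptop_basic":  {"price_band": 1, "energy_label": 3, "warranty_years": 1},
        "laptop_green":  {"price_band": 2, "energy_label": 5, "warranty_years": 3},
        "laptop_rugged": {"price_band": 3, "energy_label": 2, "warranty_years": 5},
    }

    # Profile built from the buyer's previous call-offs (content-based, not
    # collaborative filtering across buyers).
    buyer_profile = {"price_band": 2, "energy_label": 5, "warranty_years": 3}

    def cosine_similarity(a: dict, b: dict) -> float:
        keys = sorted(set(a) | set(b))
        dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
        norm_a = sqrt(sum(v * v for v in a.values()))
        norm_b = sqrt(sum(v * v for v in b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    ranked = sorted(catalogue, key=lambda item: cosine_similarity(catalogue[item], buyer_profile), reverse=True)
    print(ranked)  # items ordered by similarity to the buyer's profile

Even this trivial sketch presupposes a structured, pre-procured catalogue and buyer-specific purchase histories, which underlines why feasible deployments point towards centralised settings.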

Similarly, the development and deployment of chatbots is constrained by specific regulatory issues, such as the need to deploy closed domain chatbots (as opposed to open domain chatbots, ie chatbots connected to the Internet, such as virtual assistants built into smartphones), so that the information they draw from can be controlled and quality assured in line with duties of good administration and other legal requirements concerning the provision of information within tender procedures. Chatbots are only suited to high-volume, information-based queries. They would have limited applicability in relation to the specific characteristics of any given procurement procedure, as preparing the specific information to be used by the chatbot would be a challenge—with the added functionality of the chatbot being marginal. Chatbots could facilitate access to pre-existing and curated simple information, but their functionality would quickly hit a ceiling as the complexity of the information progressed. Chatbots would only be able to perform at a higher level if they were plugged into a knowledge base created as an expert system. But then, again, in that case their added functionality would be marginal. Ultimately, the practical space for the development of chatbots is limited to low added value information access tasks. Again, while this can clearly bring operational advantages, it will hardly transform procurement governance.
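
A closed domain chatbot is, in essence, a retrieval layer over a curated and quality-assured knowledge base. The sketch below illustrates that logic, and why the added functionality is marginal; the entries, keywords and fallback message are invented for illustration.

    # Minimal sketch of a closed-domain chatbot: answers are retrieved only from
    # a curated knowledge base (here a hard-coded dictionary), never from the
    # open Internet.
    knowledge_base = {
        "deadline": "Tenders must be submitted via the e-procurement portal by the date in section IV.2.2 of the notice.",
        "clarifications": "Questions may be submitted through the portal up to ten days before the deadline.",
        "signature": "Tenders must be signed with a qualified electronic signature.",
    }

    def answer(query: str) -> str:
        query_lower = query.lower()
        for keyword, curated_answer in knowledge_base.items():
            if keyword in query_lower:
                return curated_answer
        # Outside the curated domain, the bot defers to a human rather than guess.
        return "I cannot answer that; please contact the contracting authority."

    print(answer("What is the deadline for tenders?"))
    print(answer("Can you evaluate my tender?"))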

ML could facilitate the development and deployment of ‘advanced’ automated screens, or red flags, which could identify patterns of suspicious behaviour to then be assessed against the applicable rules (eg administrative and criminal law in case of corruption, or competition law, potentially including criminal law, in case of bid rigging) or policies (eg in relation to policy requirements to comply with specific targets in relation to a broad variety of goals). The trade off in this type of implementation is between the potential (accuracy) of the algorithmic screening and legal requirements on the explainability of decision-making (as discussed in detail here). Where the screens were not used solely for policy analysis, but acting on the red flag carried legal consequences (eg fines, or even criminal sanctions), the suitability of specific types of ML solutions (eg unsupervised learning solutions tantamount to a ‘black box’) would be doubtful, challenging, or altogether excluded. In any case, the development of ML screens capable of significantly improving over RPA-based automation of current screens is particularly dependent on the existence of adequate data, which is still proving an insurmountable hurdle in many an intended implementation (as above).
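
Purely for illustration, and assuming labelled data that would rarely be available in practice, a red flag screen could be prototyped with a deliberately interpretable model along the following lines. The features, synthetic labels and example tender are all invented, and the tiny dataset is only there to make the sketch runnable.

    # Minimal, illustrative sketch of a 'red flag' screen on synthetic data.
    # Logistic regression is chosen because its coefficients remain inspectable,
    # which matters given the explainability requirements discussed above.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Features per tender: [number_of_bidders, winner_price_vs_estimate_ratio,
    # share_of_contracts_won_by_winner_in_last_year] -- invented for illustration.
    X = np.array([
        [5, 0.85, 0.10],
        [6, 0.90, 0.15],
        [1, 0.99, 0.70],
        [2, 1.00, 0.65],
        [4, 0.80, 0.20],
        [1, 0.98, 0.80],
    ])
    y = np.array([0, 0, 1, 1, 0, 1])  # 1 = previously flagged as suspicious (synthetic labels)

    model = LogisticRegression().fit(X, y)

    new_tender = np.array([[2, 0.97, 0.60]])
    probability = model.predict_proba(new_tender)[0, 1]
    print(f"red flag score: {probability:.2f}")

    # The learned weights can be inspected, supporting (some) explainability.
    print(dict(zip(["bidders", "price_ratio", "winner_share"], model.coef_[0].round(2))))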

Distributed ledger technology (DLT) systems and smart contracts

Other procurement governance constraints limit the prospects of wholesale adoption of DLT (or blockchain) technologies, other than for relatively limited information management purposes. The public sector can hardly be expected to adopt DLT solutions that are not heavily permissioned, and that do not include significant safeguards to protect sensitive, commercially valuable, and other types of information that cannot be simply put in the public domain. This means that the public sector is only likely to implement highly centralised DLT solutions, with the public sector granting permissions to access and amend the relevant information. While this can still generate some (degrees of) tamper-evidence and permanence of the information management system, the net advantage is likely to be modest when compared to other types of secure information management systems. This can have an important bearing on decisions whether DLT solutions meet cost effectiveness or similar criteria of value for money controlling their piloting and deployment.

The value proposition of DLT solutions could increase if they enabled significant procurement automation through smart contracts. However, there are massive challenges in translating procurement procedures to a strict ‘if/when ... then’ programmable logic, smart contracts have limited capability that is not commensurate with the volumes and complexity of procurement information, and their development would only be justified in contexts where a given smart contract (ie specific programme) could be used in a high number of procurement procedures. This limits their scope of applicability to standardised and simple procurement exercises, which creates a functional overlap with some RPA solutions. Even in those settings, smart contracts would pose structural problems in terms of their irrevocability or automaticity. Moreover, they would be unable to generate off-chain effects, and this would not be easily sorted out even with the inclusion of internet of things (IoT) solutions or software oracles. This largely restricts smart contracts to an information exchange mechanism, which does not significantly increase the value added by DLT plus smart contract solutions for procurement governance.
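
The rigidity of that programmable logic can be illustrated with a minimal sketch, written in Python for readability rather than in an on-chain language such as Solidity. The call-off rules, supplier names and ceiling are invented; the point is simply that only fully mechanical conditions can be encoded.

    # Minimal sketch of 'if/when ... then' call-off logic (hypothetical rules).
    FRAMEWORK_CEILING = 100_000  # maximum value callable under the framework
    approved_suppliers = {"supplier_a", "supplier_b"}

    def call_off(supplier: str, value: int) -> str:
        # Conditions that can be expressed as rigid, self-executing rules:
        if supplier not in approved_suppliers:
            return "rejected: supplier not on framework"
        if value > FRAMEWORK_CEILING:
            return "rejected: exceeds framework ceiling"
        # What cannot be encoded: assessing abnormally low tenders, weighing
        # quality against price, or verifying real-world delivery (off-chain
        # effects), all of which require discretion or external information.
        return "call-off recorded"

    print(call_off("supplier_a", 50_000))   # call-off recorded
    print(call_off("supplier_c", 50_000))   # rejected: supplier not on framework
    print(call_off("supplier_a", 150_000))  # rejected: exceeds framework ceiling

Anything beyond such mechanical checks would have to be pushed off-chain, which is precisely where the limitations discussed above bite.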

Conclusion

To conclude, there are significant and difficult-to-solve hurdles in generating an enabling data architecture, especially for digital technologies that require multiple sources of information or data points regarding several phases of the procurement process. Moreover, the realistic potential of most technologies primarily concerns the automation of tasks not involving data analysis or the exercise of procurement discretion, but rather relatively simple information cross-checks or exchanges. Linking back to the discussion in the earlier broader chapter (see here), the analysis above shows that a feasibility boundary emerges whereby the adoption of digital technologies for procurement governance can make contributions in relation to its information intensity, but not easily in relation to its information complexity, at least not in the short to medium term and not in the absence of a significant improvement of the required enabling data architecture. Perhaps in more direct terms, in the absence of a significant expansion in the collection and curation of data, digital technologies can allow procurement governance to do more of the same or to do it quicker, but they cannot enable better procurement driven by data insights, except in relatively narrow settings. Such settings are characterised by centralisation. Therefore, the deployment of digital technologies can be a further source of pressure towards procurement centralisation, which is not a neutral development in governance terms.

This feasibility boundary should be taken into account in considering potential use cases, as well as serve to moderate the expectations that come with the technologies and that can fuel ‘policy irresistibility’. Further, it should be stressed that those potential advantages do not come without their own additional complexities in terms of new governance risks (eg data and data systems integrity, cybersecurity, skills gaps) and requirements for their mitigation. These will be explored in the next stage of my research project.

Urgent: 'no eForms, no fun' -- getting serious about building a procurement data architecture in the EU

‘EU Member States only have about one year to make crucial decisions that will affect the procurement data architecture of the EU and the likelihood of successful adoption of digital technologies for procurement governance for years or decades to come’. Put like that, the relevance of the approaching deadline for the national implementation of new procurement eForms may grab more attention than the alternative statement that ‘in just about a year, new eForms will be mandatory for publication of procurement notices in TED’.

This latter more technical (obscure, and uninspiring?) understanding of the new eForms seems to have been dominating the approach to eForms implementation, which does not seem to have generally gained a high profile in domestic policy-making at EU Member State level despite the Publications Office’s efforts.

In this post, I reflect on the strategic importance of the eForms implementation for the digitalisation of procurement, the limited incentives for an ambitious implementation that stem from the voluntary approach to the most innovative aspects of the new eForms, and the opportunity that would be lost with a minimalistic approach to compliance with the new rules. I argue that it is urgent for EU Member States to get serious about building a procurement data architecture that facilitates the uptake of digital technologies for procurement governance across the EU, which requires an ambitious implementation of eForms beyond their minimum mandatory requirements.

eForms: some background

The EU is in the process of reforming the exchange of information about procurement procedures. This information exchange is mandated by the EU procurement rules, which regulate a variety of procurement notices with the two-fold objective of (i) fostering cross-border competition for public contracts and (ii) facilitating the oversight of procurement practices by the Member States, both in relation to the specific procedure (eg to enable access to remedies) and from a broad policy perspective (eg through the Single Market Scoreboard). In other words, this information exchange underpins the EU’s approach to procurement transparency, which mainly translates into publication of notices in the Tenders Electronic Daily (TED).

A 2019 Implementing Regulation established new standard forms for the publication of notices in the field of public procurement (eForms). The Implementing Regulation is accompanied by a detailed Implementation Handbook. The transition to eForms is about to hit a crucial milestone with the authorisation for their voluntary use from 14 November 2022, in parallel with the continued use of current forms. Following that, eForms will be mandatory and the only accepted format for publication of TED notices from 25 October 2023. There will thus have been a very long implementation period (of over four years), including a lengthy (11-month) experimentation period that is about to start. This contrasts with previous revisions of the TED templates, which had given under six months’ notice (eg in 2015) or even just a 20-day implementation period (eg in 2011). This extended implementation period reflects the fact that the transition to eForms is not merely a matter of replacing one set of forms with another.

Indeed, eForms are not solely the new templates for the collection of information to be published in TED. eForms represent the EU’s open standard for publishing public procurement data — or, in other words, the ‘EU OCDS’ (which goes much beyond the OCDS mapping of the current TED forms). The importance of the implementation of a new data standard has been highlighted at strategic level, as this is the cornerstone of the EU’s efforts to improve the availability and quality of procurement data, which remain suboptimal (to say the least) despite continued efforts to improve the quality and (re)usability of TED data.

In that regard, the 2020 European strategy for data emphasised that ‘Public procurement data are essential to improve transparency and accountability of public spending, fighting corruption and improving spending quality. Public procurement data is spread over several systems in the Member States, made available in different formats and is not easily possible to use for policy purposes in real-time. In many cases, the data quality needs to be improved.’ The European Commission now stresses how ‘eForms are at the core of the digital transformation of public procurement in the EU. Through the use of a common standard and terminology, they can significantly improve the quality and analysis of data’ (emphasis added).

It should thus be clear that the eForms implementation is not only about low-level form-filling, but also (or primarily) about building a procurement data architecture that facilitates the uptake of digital technologies for procurement governance across the EU. Therefore, the implementation of eForms and the related data standard seeks to achieve two goals: first, to ensure the data quality (eg standardisation, machine-readability) required to facilitate the automated treatment of the procurement notices whose publication is mandated by EU law (ie their primary use); and, second, to build a data architecture that can facilitate the accumulation of big data so that advanced data analytics can be deployed by re-users of procurement data. This second(ary) goal is particularly relevant to our discussion. This requires some unpacking.
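
To make the distinction between these two goals more tangible, the following minimal sketch (in Python) is purely illustrative: the record structure and field names are hypothetical simplifications and do not reproduce the actual eForms schema, business terms or code lists. The point is simply that the same standardised, machine-readable record can support both an automated pre-publication check (the primary use) and aggregation across notices by re-users (the secondary use).

from dataclasses import dataclass
from collections import defaultdict

# Illustrative only: a drastically simplified, hypothetical notice record;
# this is not the eForms schema.
@dataclass
class Notice:
    notice_id: str
    buyer: str
    cpv_code: str                     # subject-matter classification (dummy values below)
    procedure_type: str
    award_value_eur: float | None = None

MANDATORY_FIELDS = ("notice_id", "buyer", "cpv_code", "procedure_type")

def fit_for_automated_publication(notice: Notice) -> bool:
    # Primary use: standardised, machine-readable fields can be checked
    # automatically before a notice is published.
    return all(getattr(notice, field) not in (None, "") for field in MANDATORY_FIELDS)

def spend_by_market(notices: list[Notice]) -> dict[str, float]:
    # Second(ary) use: once notices accumulate as structured data, re-users
    # can aggregate them, eg total recorded award value per market.
    totals: dict[str, float] = defaultdict(float)
    for n in notices:
        if n.award_value_eur is not None:
            totals[n.cpv_code] += n.award_value_eur
    return dict(totals)

notices = [
    Notice("notice-001", "Ministry A", "30200000", "open", 120_000.0),
    Notice("notice-002", "City B", "30200000", "open"),
]
print([fit_for_automated_publication(n) for n in notices])  # [True, True]
print(spend_by_market(notices))                             # {'30200000': 120000.0}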

The importance of data for the deployment of digital technologies

It is generally accepted that quality (big) data is the primary requirement for the deployment of digital technologies to extract data-driven insights, as well as to automate menial back-office tasks. In a detailed analysis of these technologies, I stress the relevance of procurement data across technological solutions that could be deployed to improve procurement governance. In short, the outcome of robotic process automation (RPA) can only be as good as its sources of information, and adequate machine learning (ML) solutions can only be trained on high-quality big data—which thus conditions the possibility of developing recommender systems, chatbots, or algorithmic screens for procurement monitoring and oversight. Distributed Ledger Technology (DLT) systems (aka blockchain) can manage data, but cannot verify its content, accuracy, or reliability. Internet of Things (IoT) applications and software oracles can automatically capture data, which can alleviate some of the difficulties in generating an adequate data infrastructure. But this is only in relation to the observation of the ‘real world’ or to digitally available information, the quality of which raises the same issues as other sources of data. In short, all digital technologies are data-centric or, more clearly, data-dependent.

Given the crucial relevance of data across digital technologies, it is hard to overstate how any shortcomings in the enabling data architecture curtail the likelihood of successful adoption of digital technologies for procurement governance. With inadequate data, it may simply be impossible to develop digital solutions at all. And the development and adoption of digital solutions developed on poor or inadequate data can generate further problems—eg skewing decision-making on the basis of inadequately derived ‘data insights’. Ultimately, then, ensuring that adequate data is available to develop digital governance solutions is a challenging but unavoidable requirement in the process of procurement digitalisation. Success, or lack of it, in the creation of an enabling data architecture will determine the viability of the deployment of digital technologies more generally. From this perspective, the implementation of eForms gains clear strategic importance.

eForms Implementation: a flexible model

Implementing eForms is not an easy task. The migration towards eForms requires a complete redesign of information exchange mechanisms. eForms are designed around the Universal Business Language (UBL) and involve the use of a much more structured information schema, compatible with the EU’s eProcurement Ontology, than the current TED forms. eForms are also meant to collect a larger amount of information than current TED forms, especially in relation to sub-units within a tender, such as lots, or in relation to framework agreements. eForms are meant to be flexible and regularly revised, in particular to add new fields to facilitate data capture in relation to specific EU-mandated requirements in procurement, such as the clean vehicles rules (with some changes already coming up, likely in November 2022).

From an informational point of view, the main constraint that remains despite the adoption of eForms is that their mandatory content is determined by existing obligations to report and publish tender-specific information under the current EU procurement rules, as well as to meet broader reporting requirements under international and EU law (eg the WTO GPA). This mandatory content is thus rather limited. Ultimately, eForms’ main focus is on disseminating details of contract opportunities and capturing different aspects of decision-making by the contracting authorities. Given the process-orientedness and transactional focus of the procurement rules, most of the information to be mandatorily captured by the eForms concerns the scope and design of the tender procedure, some aspects concerning the award and formal implementation of the contract, as well as some minimal data points concerning its material outcome—primarily limited to the winning tender. As the Director-General of the Publications Office put it at an eForms workshop yesterday, the new eForms will provide information on ‘who buys what, from whom and for what price’. While some of that information (especially in relation to the winning tender) will be reflective of broader market conditions, and while the accumulation of information across procurement procedures can progressively generate a broader view of (some of) the relevant markets, it is worth stressing that eForms are not designed as a tool of market intelligence.

Indeed, eForms do not capture the entirety of information generated by a procurement process and, as mentioned, their mandatory content is rather limited. eForms do include several voluntary or optional fields, and they could be adapted for some voluntary uses, such as the detection of collusion in procurement, or the capture of information on the beneficial ownership of tenderers and subcontractors. Extensive use of voluntary fields and the development of additional fields and uses could contribute to generating data that would enable the deployment of digital technologies for the purposes of eg market intelligence, integrity checks, or other sorts of (policy-related) analysis. For example, there are voluntary fields in relation to green, social or innovation procurement, which could serve as the basis for data-driven insights into how to maximise the effects of such policy interventions. There are also voluntary fields concerning procurement challenges and disputes, which could facilitate the monitoring of eg areas requiring guidance or training. However, while the eForms are flexible and include voluntary fields, and the schema facilitates the development of additional fields, it is unclear whether adequate incentives exist for adoption beyond their mandatory minimum content.
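
As a hedged illustration of what such voluntary data points could enable, the short Python sketch below computes two simple metrics over invented records: an integrity screen (single-bidding rate) and a policy-monitoring indicator (uptake of green criteria). The field names are hypothetical and do not correspond to actual eForms business terms. The point is that both metrics silently shrink to whatever subset of notices actually filled in the voluntary fields, which previews the fragmentation problem discussed below.

from statistics import mean

# Invented records: 'tenders_received' and 'green_criteria' stand in for
# voluntary data points; they are not the actual eForms business terms.
notices = [
    {"cpv": "45000000", "tenders_received": 1, "green_criteria": True},
    {"cpv": "45000000", "tenders_received": 4, "green_criteria": False},
    {"cpv": "72000000", "tenders_received": None, "green_criteria": None},  # voluntary fields left empty
]

def single_bidding_rate(records):
    # A simple integrity screen: share of awards that attracted only one tender.
    usable = [r for r in records if r["tenders_received"] is not None]
    return mean(r["tenders_received"] == 1 for r in usable) if usable else None

def green_uptake(records):
    # A simple policy-monitoring metric: share of notices flagging green criteria.
    usable = [r for r in records if r["green_criteria"] is not None]
    return mean(r["green_criteria"] for r in usable) if usable else None

print(single_bidding_rate(notices), green_uptake(notices))  # 0.5 0.5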

Implementation in two tiers

The fact that eForms are in part mandatory and in part voluntary will most likely result in two separate tiers of eForms implementation across the EU. Tier 1 will solely concern the collection and exchange of information mandated by EU law, that is the minimum mandatory eForm content. Tier 2 will concern the optional collection and exchange of a much larger volume of information concerning eg the entirety of tenders received, as well as qualitative information on eg specific policy goals embedded in a tender process. Of course, in the absence of coordination, a (large) degree of variation within Tier 2 can be expected. Tier 2 is potentially very important for (digital) procurement governance, but there is no guarantee that Member States will decide to implement eForms covering it.

One of the major obstacles to the broad adoption of a procurement data model so far, at least in the European Union, relates to the slow uptake of e-procurement (as discussed eg here). Without an underlying highly automated e-procurement system, the generation and capture of procurement data is a major challenge, as it is a labour-intensive process prone to input error. The entry into force of the eForms rules could serve as a further push for the completion of the transition to e-procurement—at least in relation to procurement covered by EU law (as below-threshold procurement is a voluntary potential use of eForms). However, it is also possible that low e-procurement uptake and generalised unsophisticated approaches to e-procurement (eg reduced automation) will limit the future functionality of eForms, with Member States that have so far lagged behind restricting the use of eForms to tier 1. Non-life-cycle (automated) e-procurement systems may require manual inputs into the new eForms (or the databases from which they can draw information), which implies that there is a direct cost to the implementation of each additional (voluntary) data field. Contracting authorities may not perceive the (potential) advantages of incurring those costs, or may more simply be constrained by their available budget. A collective action problem arises here, as the cost of adding more data to the eForms is to be shouldered by each public buyer, while the ensuing big data would potentially benefit everyone (especially as it will be published—although there are also possibilities to capture but not publish information that should be explored, at least to prevent excessive market transparency; but let’s park that issue for now), and perhaps in particular data re-users offering added-value services for a fee.

In direct relation to this, the (dis)incentives problem, and with it the likelihood of minimal implementation, is compounded by the fact that, in many Member States, the operational adaptation to eForms does not directly concern public sector entities, but rather their service providers. e-procurement service providers compete for the provision of large-volume, entirely standardised platform services, in markets characterised by small operational margins. This creates incentives for a minimal adaptation of current e-sending systems and disincentives for the inclusion of added-value (data) services that public buyers may be unlikely to use. Some (or most) optional aspects of the eForms implementation will thus remain unused due to this market structure and these dynamics, which do not clearly incentivise a race to the top (unless there is a clear demand pull for it).

To add some nuance, it should be stressed that the adoption of eForms may also be uneven within a given jurisdiction where the voluntary character of parts of the eForms is kept (rather than made mandatory across the board through domestic legislation), with advanced procurement entities (eg central purchasing bodies or large buyers) adopting tier 2 eForms, and (most) other public buyers limiting themselves to tier 1.

Ensuing data fragmentation

While this variety of approaches across the EU and within Member States would not pose legal challenges, it would have a major effect on the utility of the eForms-generated data for the purposes of eg developing ML solutions, as the data would be fragmented, hardly representative of important aspects of procurement (markets), and hardly generalisable. The only consistent data would be that covered by tier 1 (ie mandatory and standardised implementation), which would limit the potential use cases for the deployment of digital technologies—with some possibly limited to the procurement remit of the specific institutions with tier 2 implementations.
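
To make that limitation concrete, a coverage check of the kind sketched below (again purely illustrative, with invented records and hypothetical field names) shows how the feature set available for an EU-wide model collapses to the consistently populated, tier 1-like fields.

from collections import defaultdict

# Invented records from different jurisdictions; field names are hypothetical.
records = [
    {"country": "AT", "procedure_type": "open", "tenders_received": 3, "green_criteria": True},
    {"country": "AT", "procedure_type": "open", "tenders_received": 1, "green_criteria": None},
    {"country": "PT", "procedure_type": "restricted", "tenders_received": None, "green_criteria": None},
]

def usable_features(records, min_coverage=0.9):
    # Keep only the fields populated often enough to serve as model features.
    counts, filled = defaultdict(int), defaultdict(int)
    for r in records:
        for field, value in r.items():
            if field == "country":
                continue
            counts[field] += 1
            filled[field] += value is not None
    return [f for f in counts if filled[f] / counts[f] >= min_coverage]

print(usable_features(records))  # ['procedure_type'] -- only the consistently filled field survives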

Relatedly, it should be stressed that, despite the effort to harmonise the underlying data architecture and link it to the Procurement Ontology, the Implementation Handbook makes clear that ‘eForms are not an “off the shelf” product that can be implemented only by IT developers. Instead, before developers start working, procurement policy decision-makers have to make a wide range of policy decisions on how eForms should be implemented’ in the different Member States.

This poses an additional challenge from the perspective of data quality (and consistency), as there are many fields to be tailored in the eForms implementation process that can result in significant discrepancies in the underlying understanding or methodology to determine them, in addition to the risk of potential further divergence stemming from the domestic interpretation of very similar requirements. This simply extends to the digital data world the current situation, eg in relation to diverging understandings of what is ‘recyclable’ or what is ‘social value’ and how to measure them. Whenever open-ended concepts are used, the data may be a poor source for comparative and aggregate analysis. Where there are other sources of standardisation or methodology, this issue may be minimised—eg in relation to the green public procurement criteria developed in the EU, if they are properly used. However, where there are no outside or additional sources of harmonisation, it seems that there is scope for quite a few difficult issues in trying to develop digital solutions on top of eForms data, except in relation to quantitative issues or in relation to information structured in clearly defined categories—which will mainly link back to the design of the procurement.

An opportunity about to be lost?

Overall, while the implementation of eForms could in theory build a big data architecture and facilitate the development of ML solutions, there are many challenges ahead and the generalised adoption of tier 2 eForms implementations seems unlikely, unless Member States make a positive decision in the process of national adoption. The importance of an ambitious tier 2 implementation of eForms should be assessed in light of its downstream importance for the potential deployment of digital technologies to extract data-driven insights and to automate parts of the procurement process. A minimalistic implementation of eForms would significantly constrain future possibilities of procurement digitalisation, primarily in the specific jurisdiction, but also with spillover effects across the EU.

Therefore, a minimalistic eForms implementation approach would perpetuate (most of) the data deficit that prevents effective procurement digitalisation. It would be a short-sighted saving. Moreover, the effects of a ‘middle of the road’ approach should also be considered. A minimalistic implementation with a view to a more ambitious extension down the line could have short-term gains, but would delay the possibility of deploying digital technologies because the gains resulting from the data architecture are not immediate. In most cases, it will be necessary to wait for the accumulation of sufficiently big data. In some cases of infrequent procurement, missing data points will generate further time lags in the extraction of valuable insights. It is no exaggeration to say that every data point not captured carries an opportunity cost.

If Member States are serious about the digitalisation of public procurement, they will make the most of the coming year to develop tier 2 eForms implementations in their jurisdiction. They should also keep an eye on cross-border coordination. And the European Commission, both DG GROW and the Publications Office, would do well to put as much pressure on Member States as possible.

Public procurement governance as an information-intensive exercise, and the allure of digital technologies

I have just started a 12-month Mid-Career Fellowship funded by the British Academy with the purpose of writing up the monograph Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming).

In the process of writing up, I will be sharing some draft chapters and other thought pieces. I would warmly welcome feedback that can help me polish the final version. As always, please feel free to reach out: a.sanchez-graells@bristol.ac.uk.

In this first draft chapter (num 6), I explore the technological promise of digital governance and use public procurement as a case study of ‘policy irresistibility’. The main ideas in the chapter are as follows:

This Chapter takes a governance perspective to reflect on the process of horizon scanning and experimentation with digital technologies. The Chapter stresses how aspirations of digital transformation can drive policy agendas and make them vulnerable to technological hype, despite technological immaturity and in the face of evidence of the difficulty of rolling out such transformation programmes—eg regarding the still ongoing wave of transition to e-procurement. Delivering on procurement’s goals of integrity, efficiency and transparency requires facing challenges derived from the information intensity and complexity of procurement governance. Digital technologies promise to bring solutions to such an informational burden and thus augment decision-makers’ ability to deal with that complexity and with related uncertainty. The allure of the potential benefits of deploying digital technologies generates ‘policy irresistibility’ that can capture decision-making by policymakers overly exposed to the promise of technological fixes to recalcitrant governance challenges. This can in turn result in excessive experimentation with digital technologies for procurement governance in the name of transformation. The Chapter largely focuses on the EU policy framework, but the insights derived from this analysis are easily exportable.

Another draft chapter (num 7) will follow soon with more detailed analysis of the feasibility boundary for the adoption of digital technologies for procurement governance purposes. The full details of this draft chapter are as follows: A Sanchez-Graells, ‘The technological promise of digital governance: procurement as a case study of “policy irresistibility”’ to be included in A Sanchez-Graells, Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming). Available at SSRN: https://ssrn.com/abstract=4216825.

Interesting legislative proposal to make procurement of AI conditional on external checks

Procurement is progressively being put in the position of regulating what types of artificial intelligence (AI) are deployed by the public sector (ie taking on a gatekeeping function; see here and here). This implies that the procurement function should be able to verify that the intended AI (and its use/foreseeable misuse) will not cause harms—or, where harms are unavoidable, come up with a system to weigh, and if appropriate/possible manage, that risk. I am currently trying to understand the governance implications of this emerging gatekeeping role to assess whether procurement is best placed to carry it out.

In the context of this reflection, I found a very useful recent paper: M E Kaminski, ‘Regulating the Risks of AI’ (2023) 103 Boston University Law Review forthcoming. In addition to providing a useful critique of the treatment of AI harms as risk and of the implications in terms of the regulatory baggage that (different types of) risk regulation implies, Kaminski provides an overview of a very interesting legislative proposal: Washington State’s Bill SB 5116.

Bill SB 5116 is a proposal for new legislation ‘establishing guidelines for government procurement and use of automated decision systems in order to protect consumers, improve transparency, and create more market predictability'. The governance approach underpinning the Bill is interesting in two respects.

First, the Bill includes a ban on certain uses of AI in the public sector. As Kaminski summarises: ‘Sec. 4 of SB 5116 bans public agencies from engaging in (1) the use of an automated decision system that discriminates, (2) the use of an “automated final decision system” to “make a decision impacting the constitutional or legal rights… of any Washington resident” (3) the use of an “automated final decision system…to deploy or trigger any weapon;” (4) the installation in certain public places of equipment that enables AI-enabled profiling, (5) the use of AI-enabled profiling “to make decisions that produce legal effects or similarly significant effects concerning individuals’ (at 66, fn 398).

Second, the Bill subjects the procurement of the AI to approval by the director of the office of the chief information officer. As Kaminski clarifies: ‘The bill’s assessment process is thus more like a licensing scheme than many proposed impact assessments in that it envisions a central regulator serving a gatekeeping function (albeit probably not an intensive one, and not over private companies, which aren’t covered by the bill at all). In fact, the bill is more protective than the GDPR in that the state CIO must make the algorithmic accountability report public and invite public comment before approving it’ (at 66, references omitted).

What the Bill does, then, is to displace the gatekeeping role from the procurement function itself to an external authority within the state administration (the office of the chief information officer). It also sets the specific substantive criteria the regulator has to apply in deciding whether to authorise the procurement of the AI.

Without getting into the detail of the Washington Bill, this governance approach seems to have two main strengths over the current emerging model of procurement self-regulation of the gatekeeping role (in the EU).

First, it facilitates a standardisation of the substantive criteria to be applied in assessing the potential harms resulting from AI adoption in the public sector, with a focus on the specific characteristics of decision-making in this context. Importantly, it creates a clear area of illegality. Some of it is in line with eg the prohibition of certain AI uses in the Draft EU AI Act (profiling), or in the GDPR (prohibition of solely automated individual decision-making, including profiling — although it may go beyond it). Moreover, such an approach would allow for an expansion of prohibited uses in the specific context of the public sector, which the EU AI Act mostly fails to tackle (see here). It would also allow for the specification of constraints applicable to the use of AI by the public sector, such as a heightened obligation to provide reasons (see M Fink & M Finck, ‘Reasoned A(I)dministration: Explanation Requirements in EU Law and the Automation of Public Administration’ (2022) 47(3) European Law Review 376-392).

Second, it introduces an element of external (independent) verification of the assessment of potential AI harms. I think this is a crucial governance point because most proposals relying on the internal (self-)assessment by the procurement team fail to consider the extent to which such an approach ensures (a) adequate resourcing (eg specialism and experience in the type of assessment) and (b) sufficient objectivity in the assessment. On the second point, with procurement teams often being told to ‘just go and procure what is needed’, moving to a position of gatekeeper or controller could be too big an ask (depending on institutional aspects that require closer consideration). Moreover, this would be different from other aspects of gatekeeping that procurement has progressively been asked to carry out (also excessively, in my view: see here).

When the procurement function is asked to screen for eg potential contractors’ social or environmental compliance track record, it is usually at arm’s length from those being reviewed (and the rules on conflict of interest are there to strengthen that position). Conversely, when the procurement function is asked to screen for the likely impact on citizens and/or users of public services of an initiative promoted by the operational part of the organisation to which it belongs, things are much more complicated.

That is why some systems (like the US FAR) create elements of separation between the procurement team and those in charge of reviewing eg competition issues (by means of the competition advocate). This is a model reflected in the Washington Bill’s approach to requiring external (even if within the public administration) verification and approval of the AI impact assessment. If procurement is to become a properly functioning gatekeeper of the adoption of AI by the public sector, this regulatory approach (ie having an ‘AI Harms Controller’) seems promising. Definitely a model worth thinking about for a little longer.

Happy summer and holidays

Dear HTCaN friends,

As I break for some summer holidays, I wanted to wish you a good period of rest and fun.

I hope to see you again in the blog in September or October. During academic year 2022/23, I will be mainly blogging about draft chapters of my forthcoming monograph on “Digital technologies and procurement governance. Gatekeeping and experimentation in digital public governance”, and related topics. I hope we will have interesting exchanges about the ideas for the book.

Until then, all best wishes for the rest of the summer,
Albert


Digital technologies, hype, and public sector capability


By Albert Sanchez-Graells (@How2CrackANut) and Michael Lewis (@OpsProf).*

The public sector’s reaction to digital technologies and the associated regulatory and governance challenges is difficult to map, but there are some general trends that seem worrisome. In this blog post, we reflect on the problematic compound effects of technology hype cycles and diminished public sector digital technology capability, paying particular attention to their impact on public procurement.

Digital technologies, smoke, and mirrors

There is a generalised over-optimism about the potential of digital technologies, as well as their likely impact on economic growth and international competitiveness. There is also a rush to ‘look digitally advanced’ eg through the formulation of ‘AI strategies’ that are unlikely to generate significant practical impacts (more on that below). However, there seems to be a big (and growing?) gap between what countries report (or pretend) to be doing (eg in reports to the OECD AI observatory, or in relation to any other AI readiness ranking) and what they are practically doing. A relatively recent analysis showed that European countries (including the UK) underperform particularly in relation to strategic aspects that require detailed work (see graph). In other words, there are very few countries ready to move past signalling a willingness to jump onto the digital tech bandwagon.

Some of that over-optimism stems from limited public sector capability to understand the technologies themselves (as well as their implications), which leads to naïve or captured approaches to policymaking (on capture, see the eye-watering account emerging from the #Uberfiles). Given the closer alignment (or political meddling?) of policymakers with eg research funding programmes, including but not limited to academic institutions, naïve or captured approaches impact other areas of ‘support’ for the development of digital technologies. This also trickles down to procurement, as the ‘purchasing’ of digital technologies with public money is seen as a (not very subtle) way of subsidising their development (nb. there are many proponents of that approach, such as Mazzucato, as discussed here). However, this can also generate further space for capture, as the same lack of capability that affects high(er) level policymaking also affects funding organisations and ‘street level’ procurement teams. This creates a situation where procurement best practices, such as market engagement, result in the ‘art of the possible’ being determined by private industry. There is rarely co-creation of solutions, but too often a capture of procurement expenditure by entrepreneurs.

Limited capability, difficult assessments, and dependency risk

Perhaps the universalist techno-utopian framing (cost savings and efficiency and economic growth and better health and new service offerings, etc.) means it is increasingly hard to distinguish the specific merits of different digitalisation options – and the commercial interests that actively hype them. It is also increasingly difficult to carry out effective impact assessments where the (overstressed) benefits are relatively narrow and short-termist, while the downsides of technological adoption are diffuse and likely to only emerge after a significant time lag. Ironically, this limited ability to diagnose ‘relative’ risks and rewards is further exacerbated by the diminishing technical capability of the state: a negative mirror to Amazon’s flywheel model for amplifying capability. Indeed, as stressed by Bharosa (2022): “The perceptions of benefits and risks can be blurred by the information asymmetry between the public agencies and GovTech providers. In the case of GovTech solutions using new technologies like AI, Blockchain and IoT, the principal-agent problem can surface”.

As Colington (2021) points out, despite the “innumerable papers in organisation and management studies” on digitalisation, there is much less understanding of how interests of the digital economy might “reconfigure” public sector capacity. In studying Denmark’s policy of public sector digitalisation – which had the explicit intent of stimulating nascent digital technology industries – she observes the loss of the very capabilities necessary “for welfare states to develop competences for adapting and learning”. In the UK, where it might be argued there have been attempts, such as the Government Digital Services (GDS) and NHS Digital, to cultivate some digital skills ‘in-house’, the enduring legacy has been more limited in the face of endless demands for ‘cost saving’. Kattel and Takala (2021) for example studied GDS and noted that, despite early successes, they faced the challenge of continual (re)legitimization and squeezed investment; especially given the persistent cross-subsidised ‘land grab’ of platforms, like Amazon and Google, that offer ‘lower cost and higher quality’ services to governments. The early evidence emerging from the pilot algorithmic transparency standard seems to confirm this trend of (over)reliance on external providers, including Big Tech providers such as Microsoft (see here).

This is reflective of Milward and Provan’s (2003) ‘hollow state’ metaphor, used to describe “the nature of the devolution of power and decentralization of services from central government to subnational government and, by extension, to third parties – nonprofit agencies and private firms – who increasingly manage programs in the name of the state.” Two decades after its formulation, the metaphor is all the more applicable, as the hollowing out of the State is arguably a few orders of magnitude larger due to the techno-centricity of reforms in the race towards a new model of digital public governance. It seems as if the role of the State is currently understood as being limited to that of enabler (and funder) of public governance reforms, not solely implemented, but driven by third parties—and primarily highly concentrated digital tech giants; so that “some GovTech providers can become the next Big Tech providers that could further exploit the limited technical knowledge available at public agencies [and] this dependency risk can become even more significant once modern GovTech solutions replace older government components” (Bharosa, 2022). This is a worrying trend, as once dominance is established, the expected anticompetitive effects of any market can be further multiplied and propagated in a setting of low public sector capability that fuels risk aversion, where the adage “Nobody ever gets fired for buying IBM” has been around since the 70s with limited variation (as to the tech platform it is ‘safe to engage’).

Ultimately, the more the State takes a back seat, the more its ability to steer developments fades away. The rise of a GovTech industry seeking to support governments in their digital transformation generates “concerns that GovTech solutions are a Trojan horse, exploiting the lack of technical knowledge at public agencies and shifting decision-making power from public agencies to market parties, thereby undermining digital sovereignty and public values” (Bharosa, 2022). Therefore, continuing to simply allow experimentation in the GovTech market without a clear strategy on how to rein the industry in—and, relatedly, how to build the public sector capacity needed to do so as a precondition—is a strategy with (exponentially) increasing reversal costs and an unclear tipping point past which meaningful change may simply not be possible.

Public sector and hype cycle

Being more pragmatic, the widely cited, if impressionistic, “hype cycle model” developed by Gartner Inc. provides additional insights. The model presents a generalized expectations path that new technologies follow over time, which suggests that new industrial technologies progress through different stages up to a peak that is followed by disappointment and, later, a recovery of expectations.

Although the model is intended to describe aggregate, technology-level dynamics, it can be useful to consider the hype cycle for public digital technologies. In the early phases of the curve, vendors and potential users are actively looking for ways to create value from new technology and will claim endless potential use cases. If these are subsequently piloted or demonstrated – even if ‘free’ – they are exciting and visible, and vendors are keen to share them, all of which contributes to creating hype. Limited public sector capacity can also underpin excitement for use cases that are so far removed from their likely practical implementation, or so heavily curated, that they do not provide an accurate representation of how the technology would operate at production phase in the generally messy settings of public sector activity and public sector delivery. In phases such as the peak of inflated expectations, only organisations with sufficient digital technology and commercial capabilities can see through sophisticated marketing and sales efforts to separate the hype from the true potential of immature technologies. The emperor is likely to be naked, but who’s to say?

Moreover, as mentioned above, international organisations one step (upwards) removed from the State create additional fuel for the hype through mapping exercises and rankings, which generate a vicious circle of “public sector FOMO”, as entrepreneurial bureaucrats and politicians are unlikely to want to be listed at the bottom of the table and can thus be particularly receptive to hyped pitches. This can create incentives to support *almost any* sort of tech pilot and implementation just to be seen to do something ‘innovative’, or to rush through high-risk implementations seeking to ‘cash in’ on the political and other rents they can (be spun to) generate.

However, as emerging evidence shows (AI Watch, 2022), there is a big attrition rate between announced and piloted adoptions, and those that are ultimately embedded in the functioning of the public sector in a value-adding manner (ie those that reach the plateau of productivity stage in the cycle). Crucially, the AI literacy and skills in the staff involved in the use of the technology post-pilot are one of the critical challenges to the AI implementation phase in the EU public sector (AI Watch, 2021). Thus, early moves in the hype curve are unlikely to translate into sustainable and expectations-matching deployments in the absence of a significant boost of public sector digital technology capabilities. Without committed long-term investment in that capability, piloting and experimentation will rarely translate into anything but expensive pet projects (and lucrative contracts).

Locking the hype in: IP, data, and acquisitions markets

Relatedly, the lack of public sector capacity is a foundation for eg policy recommendations seeking to avoid the public buyer acquiring (and having to manage) IP rights over the digital technologies it funds through procurement of innovation (see eg the European Commission’s policy approach: “There is also a need to improve the conditions for companies to protect and use IP in public procurement with a view to stimulating innovation and boosting the economy. Member States should consider leaving IP ownership to the contractors where appropriate, unless there are overriding public interests at stake or incompatible open licensing strategies in place” at 10).

This is clear as mud (eg what does overriding public interest mean here?) but fails to establish an adequate balance between public funding and public access to the technology, as well as generating (unavoidable?) risks of lock-in and exacerbating issues of lack of capacity in the medium and long-term. Not only in terms of re-procuring the technology (see related discussion here), but also in terms of the broader impact this can have if the technology is propagated to the private sector as a result of or in relation to public sector adoption.

Linking this recommendation to the hype curve, such an approach to relying on proprietary tech with all rights reserved to the third-party developer means that first mover advantages secured by private firms at the early stages of the emergence of a new technology are likely to be very profitable in the long term. This creates further incentives for hype and for investment in being the first to capture decision-makers, which results in an overexposure of policymakers and politicians to tech entrepreneurs pushing hard for (too early) adoption of technologies.

The exact same dynamic emerges in relation to access to data held by public sector entities without which GovTech (and other types of) innovation cannot take place. The value of data is still to be properly understood, as are the mechanisms that can ensure that the public sector obtains and retains the value that data uses can generate. Schemes to eg obtain value options through shares in companies seeking to monetise patient data are not bullet-proof, as some NHS Trusts recently found out (see here, and here paywalled). Contractual regulation of data access, data ownership and data retention rights and obligations pose a significant challenge to institutions with limited digital technology capabilities and can compound IP-related lock-in problems.

A final further complication is that the market for acquisitions of GovTech and other digital technologies start-ups and scale-ups is very active and unpredictable. Even with standard levels of due diligence, public sector institutions that had carefully sought to foster a diverse innovation ecosystem and to avoid contracting (solely) with big players may end up in their hands anyway, once their selected provider leverages their public sector success to deliver an ‘exit strategy’ for their founders and other (venture capital) investors. Change of control clauses clearly have a role to play, but the outside alternatives for public sector institutions engulfed in this process of market consolidation can be limited and difficult to assess, and particularly challenging for organisations with limited digital technology and associated commercial capabilities.

Procurement at the sharp end

Going back to the ongoing difficulty (and unwillingness?) in regulating some digital technologies, there is a (dominant) general narrative that imposes a ‘balanced’ approach between ensuring adequate safeguards and not stifling innovation (with some countries clearly erring much more on the side of caution, such as the UK, than others, such as the EU with the proposed EU AI Act, although the scope of application of its regulatory requirements is narrower than it may seem). This increasingly means that the tall order task of imposing regulatory constraints on the digital technologies and the private sector companies that develop (and own them) is passed on to procurement teams, as the procurement function is seen as a useful regulatory mechanism (see eg Select Committee on Public Standards, Ada Lovelace Institute, Coglianese and Lampmann (2021), Ben Dor and Coglianese (2022), etc but also the approach favoured by the European Commission through the standard clauses for the procurement of AI).

However, this approach completely ignores issues of (lack of) readiness and capability that indicate that the procurement function is being set up to fail in this gatekeeping role (in the absence of massive investment in upskilling). Not only because it lacks the (technical) ability to figure out the relevant checks and balances, and because the levels of required due diligence far exceed standard practices in more mature markets and lower risk procurements, but also because the procurement function can be at the sharp end of the hype cycle and (pragmatically) unable to stop the implementation of technological deployments that are either wasteful or problematic from a governance perspective, as public buyers are rarely in a position of independent decision-making that could enable them to do so. Institutional dynamics can be difficult to navigate even with good insights into problematic decisions, and can be intractable in a context of low capability to understand potential problems and push back against naïve or captured decisions to procure specific technologies and/or from specific providers.

Final thoughts

So, as a generalisation, lack of public sector capability seems to be skewing high-level policy and limiting the development of effective plans to roll it out, filtering through to incentive systems that will have major repercussions on what technologies are developed and procured, with risks of lock-in and centralisation of power (away from the public sector), as well as generating a false sense of comfort in the ability of the public procurement function to provide an effective route to tech regulation. The answer to these problems is evident and simple, but politically intractable in view of the permeating hype around new technologies: more investment in capacity building across the public sector.

This regulatory answer is further complicated by the difficulty in implementing it in an employment market where the public sector, its reward schemes and social esteem are dwarfed by the high salaries, flexible work conditions and allure of the (Big) Tech sector and the GovTech start-up scene. Some strategies aimed at alleviating the generalised lack of public sector capability, e.g. through a GovTech platform at the EU level, can generate further risks of reduction of (in-house) public sector capability at State (and regional, local) level as well as bottlenecks in the access of tech to the public sector that could magnify issues of market dominance, lock-in and over-reliance on GovTech providers (as discussed in Hoekstra et al, 2022).

Ultimately, it is imperative to build more digital technology capability in the public sector, and to recognise that there are no quick (or cheap) fixes to do so. Otherwise, much like with climate change, despite the existence of clear interventions that can mitigate the problem, the hollowing out of the State and the increasing overdependency on Big Tech providers will be a self-fulfilling prophecy for which governments will have no one to blame but themselves.

 ___________________________________

* We are grateful to Rob Knott (@Procure4Health) for comments on an earlier draft. Any remaining errors and all opinions are solely ours.

Algorithmic transparency: some thoughts on the UK's first four published disclosures and the standard's usability


The Algorithmic Transparency Standard (ATS) is one of the UK’s flagship initiatives for the regulation of public sector use of artificial intelligence (AI). The ATS encourages (but does not mandate) public sector entities to fill in a template to provide information about the algorithmic tools they use, and why they use them [see e.g. Kingsman et al (2022) for an accessible overview].

The ATS is currently being piloted, and has so far resulted in the publication of four disclosures relating to the use of algorithms in different parts of the UK’s public sector. In this post, I offer some thoughts based on these initial four disclosures, in particular from the perspective of the usability of the ATS in facilitating an enhanced understanding of AI use cases, and accountability for those.

The first four disclosed AI use cases

The ATS pilot has so far published information in two batches (on 1 June and 6 July 2022), comprising the following four AI use cases:

  1. Within Cabinet Office, the GOV.UK Data Labs team piloted the ATS for their Related Links tool: a recommendation engine built to aid navigation of GOV.UK (the primary UK central government website) by providing relevant onward journeys from a content page, with the aim of helping users find useful information and content.

  2. In the Department for Health and Social Care and NHS Digital, the QCovid team piloted the ATS with a COVID-19 clinical tool used to predict how at risk individuals might be from COVID-19. The tool was developed for use by clinicians in support of conversations with patients about personal risk, and it uses algorithms to combine a number of factors such as age, sex, ethnicity, height and weight (to calculate BMI), and specific health conditions and treatments in order to estimate the combined risk of catching coronavirus and being hospitalised or catching coronavirus and dying. Importantly, “The original version of the QCovid algorithms were also used as part of the Population Risk Assessment to add patients to the Shielded Patient List in February 2021. These patients were advised to shield at that time were provided support for doing so, and were prioritised for COVID-19 vaccination.”

  3. The Information Commissioner’s Office has piloted the ATS with its Registration Inbox AI, which uses a machine learning algorithm to categorise emails sent to the Information Commissioner’s Office’s registration inbox and to send out an auto-reply where the algorithm “detects … a request about changing a business address. In cases where it detects this kind of request, the algorithm sends out an autoreply that directs the customer to a new online service and points out further information required to process a change request. Only emails with an 80% certainty of a change of address request will be sent an email containing the link to the change of address form.” A minimal illustrative sketch of this kind of confidence-thresholded triage is included after this list.

  4. The Food Standards Agency piloted the ATS with its Food Hygiene Rating Scheme (FHRS) – AI, which is an algorithmic tool to help local authorities prioritise inspections of food businesses by predicting which establishments might be at a higher risk of non-compliance with food hygiene regulations (on the basis of their predicted food hygiene rating). Importantly, the tool is of voluntary use and “it is not intended to replace the current approach to generate a FHRS score. The final score will always be the result of an inspection undertaken by [a local authority] officer.”
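
Purely to illustrate the kind of logic the ICO disclosure describes in use case 3 above (a text classifier combined with an 80% confidence gate), and emphatically not the ICO’s actual system, a minimal sketch using scikit-learn could look as follows; the training emails, labels and routing messages are invented.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: 1 = change-of-address request, 0 = anything else.
train_emails = [
    "Please update the registered address of our company",
    "We have moved offices, how do we change our address?",
    "I want to ask about the status of my registration fee",
    "Can you confirm receipt of my data protection fee payment?",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_emails, labels)

def triage(email: str, threshold: float = 0.8) -> str:
    # Auto-reply only when the classifier is at least 80% certain; otherwise
    # the email stays in the inbox for a human to handle.
    p_change = model.predict_proba([email])[0][1]
    return "auto-reply with change-of-address link" if p_change >= threshold else "route to human"

print(triage("How do I change our business address?"))

The design point worth noting is that the threshold, rather than the classifier itself, largely determines how often the system defers to a human, and that is where much of the governance interest lies.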

Harmless (?) use cases

At first glance, and on the basis of the implications of the outcome of the algorithmic recommendation, it would seem that the four use cases are relatively harmless. That is:

  1. If GOV.UK recommends links to content that is not relevant or helpful, the user may simply ignore them.

  2. The outcome of the QCovid tool simply informs the GPs’ (or other clinicians’) assessment of the risk of their patients, and the GPs’ expertise should mediate any incorrect (either over-inclusive, or under-inclusive) assessments by the AI.

  3. If the ICO sends an automatic email with information on how to change their business address to somebody that had submitted a different query, the receiver can simply ignore that email.

  4. Incorrect or imperfect prioritisation of food businesses for inspection could result in the early inspection of a low-risk restaurant, or the late(r) inspection of a higher-risk restaurant, but this is already a risk implicit in allowing restaurants to open pending inspection; AI does not add risk.

However, this approach could be too simplistic or optimistic. It can be helpful to think about what could really happen if the AI got it wrong ‘in a disaster scenario’ based on possible user reactions (a useful approach promoted by the Data Hazards project). It seems to me that, on ‘worst-case scenario’ thinking (and without seeking to be exhaustive):

  1. If GOV.UK recommends content that is not helpful but is confusing, the user can either engage in red tape they did not need to complete (wasting both their time and public resources) or, worse, feel overwhelmed, confused or misled and abandon the administrative interaction they were initially seeking to complete. This can lead to exclusion from public services, and be particularly problematic if these situations can have a differential impact on different user groups.

  2. There could be over-reliance on the QCovid algorithm by (too busy) GPs. This could lead to advising ‘as a matter of routine’ the taking of excessive precautions with significant potential impacts on the day to day lives of those affected—as was arguably the case for some of the citizens included in shielding categories in the earlier incarnation of the algorithm. Conversely, GPs that identified problems in the early use of the algorithm could simply ignore it, thus potentially losing the benefits of the algorithm in other cases where it could have been helpful—potentially leading to under-precaution by individuals that could have otherwise been better safeguarded.

  3. Similarly to 1, the provision of irrelevant and potentially confusing information can lead to waste of resource (e.g. users seeking to change their business registration address because they wrongly think it is a requirement to process their query or, at a lower end of the scale, users having to read and consider information about an administrative process they have no interest in). Beyond that, the classification algorithm could generate loss of queries if there was no human check to verify that the AI classification was correct. If this check takes place anyway, the advantages of automating the sending of the initial email seem rather marginal.

  4. Similar to 2, the incorrect prediction of risk can lead to misuse of resources in the carrying out of inspections by local authorities, potentially pushing down the list of restaurants pending inspection some that are high-risk and that could thus see their inspection repeatedly delayed. This could have important public health implications, at least for those citizens using the yet-to-be-inspected restaurants for longer than they otherwise would have. Conversely, inaccurate prioritisations that did not seem to catch the more ‘risky’ restaurants could also lead to local authorities abandoning the tool’s use. There is also a risk of profiling of certain types of businesses (and their owners), which could lead to victimisation if the tool was improperly used, or used in relation to restaurants that have been active for a longer period (eg to trigger fresh (re)inspections).

No AI application is thus entirely harmless. Of course, this is just a matter of theoretical speculation, and it could equally be asked whether reduced engagement with the AI would generate second-tier negative effects, eg if ‘learning’ algorithms could not be revised and improved on the basis of ‘real-life’ feedback on whether or not their predictions were accurate.

I think that this sort of speculation offers a useful yardstick to assess the extent to which the ATS can be helpful and usable. I would argue that the ATS will be helpful to the extent that (a) it provides information capable of clarifying whether the relevant risks have been taken into account and properly mitigated or, failing that, (b) it provides information that can be used to challenge the insufficiency of any underlying risk assessments or mitigation strategies. Ultimately, AI transparency is not an end in itself, but simply a means of increasing accountability—at least in the context of public sector AI adoption. And it is clear that any degree of transparency generated by the ATS will be an improvement on the current situation. But is the ATS really usable?

Finding out more on the basis of the ATS disclosures

To try to answer that general question on whether the ATS is usable and serves to facilitate increased accountability, I have read the four disclosures in full. Here is my summary/extracts of the relevant bits for each of them.

GOV.UK Related Links

Since May 2019, the tool has been using an algorithm called node2vec (a machine learning algorithm that learns network node embeddings) to train a model on the last three weeks of user movement data (web analytics data). The benefits are described as “the tool … predicts related links for a page. These related links are helpful to users. They help users find the content they are looking for. They also help a user find tangentially related content to the page they are on; it’s a bit like when you are looking for a book in the library, you might find books that are relevant to you on adjacent shelves.”

The way the tool works is described in some more detail: “The tool updates links every three weeks and thus tracks changes in user behaviour.” “Every three weeks, the machine learning algorithm is trained using the last three weeks of analytics data and trains a model that outputs related links that are published, overwriting the existing links with new ones.” “The average click through rate for related links is about 5% of visits to a content page. For context, GOV.UK supports an average of 6 million visits per day (Jan 2022). True volumes are likely higher owing to analytics consent tracking. We only track users who consent to analytics cookies …”.
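
The disclosure does not go into further technical detail, but the general shape of the approach can be illustrated with a toy sketch: build a graph of observed page-to-page movements from session data, learn an embedding for each page, and recommend each page’s nearest neighbours as related links. The sketch below is not GOV.UK’s code; it uses plain, unbiased random walks plus gensim’s Word2Vec rather than node2vec’s biased walks, and the sessions, page names and parameters are all invented.

import random
from collections import defaultdict
from gensim.models import Word2Vec

# Synthetic 'user movement data': ordered pages visited within a session.
sessions = [
    ["/register-to-vote", "/voter-id", "/postal-vote"],
    ["/register-to-vote", "/postal-vote"],
    ["/self-assessment", "/pay-tax-bill", "/tax-refund"],
    ["/self-assessment", "/tax-refund"],
]

# Build a directed graph of observed page-to-page transitions.
graph = defaultdict(list)
for session in sessions:
    for a, b in zip(session, session[1:]):
        graph[a].append(b)

def random_walks(graph, walks_per_node=20, walk_length=5, seed=0):
    # Simplified, unbiased walks (node2vec proper uses biased walks).
    rng = random.Random(seed)
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk, node = [start], start
            for _ in range(walk_length - 1):
                if not graph.get(node):
                    break
                node = rng.choice(graph[node])
                walk.append(node)
            walks.append(walk)
    return walks

# Train embeddings on the walks and query a page's nearest neighbours.
model = Word2Vec(sentences=random_walks(graph), vector_size=32, window=3,
                 min_count=1, sg=1, epochs=50, seed=0, workers=1)
print(model.wv.most_similar("/register-to-vote", topn=2))

Retraining on a rolling three-week window, as the disclosure describes, would then simply mean rebuilding the graph and refitting the model on fresh sessions.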

The decision process is fully automated, but there is “a way for publishers to add/amend or remove a link from the component. On average this happens two or three times a month.” “Humans have the capability to recommend changes to related links on a page. There is a process for links to be amended manually and these changes can persist. These human expert generated links are preferred to those generated by the model and will persist.” Moreover, “GOV.UK has a feedback link, “report a problem with this page”, on every page which allows users to flag incorrect links or links they disagree with.” The tool was subjected to a Data Protection Impact Assessment (DPIA), but no other impact assessments (IAs) are listed.

When it comes to risk identification and mitigation, the disclosure indicates: “A recommendation engine can produce links that could be deemed wrong, useless or insensitive by users (e.g. links that point users towards pages that discuss air accidents).” and that, as mitigation: “We added pages to a deny list that might not be useful for a user (such as the homepage) or might be deemed insensitive (e.g. air accident reports). We also enabled publishers or anyone with access to the tagging system to add/amend or remove links. GOV.UK users can also report problems through the feedback mechanisms on GOV.UK.”
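
A minimal sketch of how that mitigation layer could work, assuming (hypothetically) that model-generated suggestions are filtered against a deny list and that publisher-curated links always take precedence:

```python
# Hypothetical mitigation layer; names and structure are assumptions.
DENY_LIST = {"/", "/air-accident-reports"}  # e.g. homepage and sensitive pages

def links_to_publish(page, model_links, human_links, max_links=5):
    """Return the related links to publish for a given page."""
    if human_links.get(page):            # publisher-curated links persist
        return human_links[page][:max_links]
    suggested = model_links.get(page, [])
    return [link for link in suggested if link not in DENY_LIST][:max_links]

# Example: model suggestions for a page with no human-curated override.
print(links_to_publish(
    "/vat-rates",
    model_links={"/vat-rates": ["/", "/vat-registration", "/self-assessment"]},
    human_links={},
))
```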

Overall, then, the risk I had identified is only superficially addressed, in that the ATS disclosure does not show awareness of the potentially differing implications of incorrect or useless recommendations across the spectrum. The narrative equating the recommendations to browsing the shelves of a library is quite suggestive in that regard, as is the fact that the quality controls are rather limited.

Indeed, it seems that the quality control mechanisms require a high level of effort from every publisher, as they need to check every three weeks whether the (new) related links appearing on each of the pages they publish are relevant and unproblematic. This seems to have reversed the functional balance of convenience. Before the implementation of the tool, only approximately 2,000 out of 600,000 pieces of content on GOV.UK had related links, as they had to be created manually (and thus, hopefully, were relevant, if not necessarily unproblematic). Now, almost all pages have up to five related content suggestions, but only two or three out of 600,000 pages see their links manually amended per month. A question arises whether this extremely low rate of manual intervention reflects the high quality of the system, or is instead evidence of the same lack of quality-assurance resources that previously left over 99% of pages without this type of related information.
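
For transparency, the rough arithmetic behind those proportions, based purely on the figures quoted in the disclosure:

```python
# Rough arithmetic using the figures in the disclosure (illustrative only).
total_pages = 600_000
pages_with_manual_links_before = 2_000
manual_amendments_per_month = 3          # upper end of "two or three"

print(pages_with_manual_links_before / total_pages)  # ~0.0033, i.e. ~0.3% of pages
print(manual_amendments_per_month / total_pages)     # ~0.000005, i.e. ~0.0005% per month
```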

However, despite the queries as to the desirability of the AI implementation as described, the ATS disclosure is in itself useful because it allows the type of analysis above and, in case someone considers the situation unsatisfactory or would like to probe it further, there is a clear gateway to (try to) engage the entity responsible for this AI deployment.

QCovid algorithm

The algorithm was developed at the onset of the Covid-19 pandemic to drive government decisions on which citizens to advise to shield, support during shielding, and prioritise for vaccination rollout. Since the end of the shielding period, the tool has been modified. “The clinical tool for clinicians is intended to support individual conversations with patients about risk. Originally, the goal was to help patients understand the reasons for being asked to shield and, where relevant, help them do so. Since the end of shielding requirements, it is hoped that better-informed conversations about risk will have supported patients to make appropriate decisions about personal risk, either protecting them from adverse health outcomes or to some extent alleviating concerns about re-engaging with society.”

In essence, the tool creates a risk calculation based on scoring risk factors across a number of data fields pertaining to demographic, clinical and social patient information. “The factors incorporated in the model include age, ethnicity, level of deprivation, obesity, whether someone lived in residential care or was homeless, and a range of existing medical conditions, such as cardiovascular disease, diabetes, respiratory disease and cancer. For the latest clinical tool, separate versions of the QCOVID models were estimated for vaccinated and unvaccinated patients.”
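
To make the idea of ‘scoring risk factors across data fields’ concrete, here is a purely illustrative sketch of a weighted-factor risk calculation. It is emphatically not the published QCOVID model; every factor name, weight and the scoring function are invented for illustration.

```python
# Hypothetical weighted-factor risk score squashed through a logistic function.
import math

HYPOTHETICAL_WEIGHTS = {
    "age_over_70": 1.2,
    "type_2_diabetes": 0.6,
    "respiratory_disease": 0.5,
    "lives_in_residential_care": 0.9,
    "high_deprivation": 0.3,
}

def risk_score(patient: dict) -> float:
    """Combine the weighted risk factors present for a patient into a 0-1 score."""
    linear = sum(weight for factor, weight in HYPOTHETICAL_WEIGHTS.items()
                 if patient.get(factor))
    return 1 / (1 + math.exp(-(linear - 2.0)))  # -2.0 is a hypothetical baseline offset

print(risk_score({"age_over_70": True, "respiratory_disease": True}))  # ~0.43
```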

It is difficult to assess how intensely the tool is (currently) used, although the ATS indicates that “In the period between 1st January 2022 and 31st March 2022, there were 2,180 completed assessments” and that “Assessment numbers often move with relative infection rate (e.g. higher infection rate leads to more usage of the tool).” The ATS also stresses that “The use of the tool does not override any clinical decision making but is a supporting device in the decision making process.” “The tool promotes shared decision making with the patient and is an extra point of information to consider in the decision making process. The tool helps with risk/benefit analysis around decisions (e.g. recommendation to shield or take other precautionary measures).”

The impact assessments of this tool are those mandated for medical devices. The description is thus rather technical and not very detailed, although the selected examples it includes do capture the possibility of somebody being misidentified “as meeting the threshold for higher risk”, as well as of someone not having “an output generated from the COVID-19 Predictive Risk Model”. The ATS does stress that “As part of patient safety risk assessment, Hazardous scenarios are documented, yet haven’t occurred as suitable mitigation is introduced and implemented to alleviate the risk.” That mitigation largely seems to be that “The tool is designed for use by clinicians who are reminded to look through clinical guidance before using the tool.”

I think this case shows two things. First, that it is difficult to understand how different parts of the analysis fit together when a tool that has had two very different uses is the object of a single ATS disclosure. There seems to be a good argument for use case specific ATS disclosures, even if the underlying AI deployment is the same (or a closely related one), as the implications of different uses from a governance perspective also differ.

Second, that in the context of AI adoption for healthcare purposes, there is a dual barrier to accessing relevant (and understandable) information: the tech barrier and the medical barrier. While the ATS does something to reduce the former, the latter very much remains in place, and perhaps shifts the issue from the trustworthiness of the AI to the trustworthiness of the clinician, which is not necessarily entirely helpful (not only in this specific use case, but in many others one can imagine). In that regard, it seems that the usability of the ATS is partially limited, and more could be done to increase meaningful transparency through AI-specific IAs, perhaps as proposed by the Ada Lovelace Institute.

In this case, the ATS disclosure has also provided some valuable information, but arguably to a lesser extent than the previous case study.

ICO’s Registration Inbox AI

This is a tool that very much resembles other forms of email classification (e.g. spam filters), as “This algorithmic tool has been designed to inspect emails sent to the ICO’s registration inbox and send out autoreplies to requests made about changing addresses. The tool has not been designed to automatically change addresses on the requester’s behalf. The tool has not been designed to categorise other types of requests sent to the inbox.”

The disclosure indicates that “In a significant proportion of emails received, a simple redirection to an online service is all that is required. However, sifting these types of emails out would also require time if done by a human. The algorithm helps to sift out some of these types of emails that it can then automatically respond to. This enables greater capacity for [Data Protection] Fees Officers in the registration team, who can, consequently, spend more time on more complex requests.” “There is no manual intervention in the process - the links are provided to the customer in a fully automated manner.”

The tool has been in use since May 2021 and classifies approximately 23,000 emails a month.

When it comes to risk identification and mitigation, the ATS disclosure stresses that “The algorithmic tool does not make any decisions, but instead provides links in instances where it has calculated the customer has contacted the ICO about an address change, giving the customer the opportunity to self-serve.” Moreover, it indicates that there is “No need for review or appeal as no decision is being made. Incorrectly classified emails would receive the default response which is an acknowledgement.” It further stresses that “The classification scope is limited to a change of address and a generic response stating that we have received the customer’s request and that it will be processed within an estimated timeframe. Incorrectly classified emails would receive the default response which is an acknowledgement. This will not have an impact on personal data. Only emails with an 80% certainty of a change of address request will be sent an email containing the link to the change of address form.”
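
The routing logic described in the disclosure can be pictured with a short sketch: emails classified as change-of-address requests with at least 80% confidence receive the automated reply containing a link to the form, and everything else receives the default acknowledgement. The classifier interface and the message texts below are placeholders, not the ICO’s actual implementation.

```python
# Placeholder routing logic; a scikit-learn-style classifier is assumed.
CHANGE_OF_ADDRESS_THRESHOLD = 0.80

ACKNOWLEDGEMENT = ("We have received your request and it will be processed "
                   "within the estimated timeframe.")
CHANGE_OF_ADDRESS_REPLY = "To change your registered address, please use this form: <link>"

def auto_reply(email_text: str, classifier) -> str:
    """Return the automated response for an incoming registration-inbox email."""
    # Assumed: predict_proba returns [[P(other), P(change of address)]].
    p_change_of_address = classifier.predict_proba([email_text])[0][1]
    if p_change_of_address >= CHANGE_OF_ADDRESS_THRESHOLD:
        return CHANGE_OF_ADDRESS_REPLY
    return ACKNOWLEDGEMENT
```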

In my view, this disclosure does not entirely clarify the way the algorithm works (e.g. what happens to emails classified as having requested information on change of address? Are they ‘deleted’ from the backlog of emails requiring a (human) non-automated response?). However, it does provide sufficient information to further consolidate the questions arising from the general description. For example, it seems that the identification of risks is clearly partial in that there is not only a risk of someone asking for change of address information not automatically receiving it, but also a risk of those asking for other information receiving the wrong information. There is also no consideration of additional risks (as above), and the general description makes the claim of benefits doubtful if there has to be a manual check to verify adequate classification.

The ATS disclosure does not provide sufficient contact information for the owner of the AI (perhaps because they were contracted on limited after-sales service terms…), although there is generic contact information for the ICO that could be used by someone who considers the situation unsatisfactory or would like to probe it further.

Food Hygiene Rating Scheme – AI

This tool is also based on machine learning to make predictions. “A machine learning framework called LightGBM was used to develop the FHRS AI model. This model was trained on data from three sources: internal Food Standards Agency (FSA) FHRS data, publicly available Census data from the 2011 census and open data from HERE API. Using this data, the model is trained to predict the food hygiene rating of an establishment awaiting its first inspection, as well as predicting whether the establishment is compliant or not.” “Utilising the service, the Environmental Health Officers (EHOs) are provided with the AI predictions, which are supplemented with their knowledge about the businesses in the area, to prioritise inspections and update their inspection plan.”
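
For readers unfamiliar with LightGBM, the sketch below shows in outline what training such a prediction model could look like. The toy dataset, feature names and parameters are my assumptions; the FSA’s actual pipeline is not disclosed at this level of detail.

```python
# Toy sketch of training a LightGBM classifier to predict hygiene ratings for
# establishments awaiting inspection. All data and feature names are invented.
import lightgbm as lgb
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the merged FSA / Census 2011 / HERE API dataset.
df = pd.DataFrame({
    "business_type_code":        [1, 2, 1, 3, 2, 1, 3, 2] * 25,
    "census_deprivation_decile": [3, 7, 5, 1, 9, 2, 6, 4] * 25,
    "nearby_food_outlets":       [12, 4, 8, 20, 2, 15, 7, 9] * 25,
    "hygiene_rating":            [5, 3, 4, 1, 5, 4, 2, 3] * 25,  # 0-5 FHRS rating
})

X = df.drop(columns="hygiene_rating")
y = df["hygiene_rating"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Multi-class model for the rating; a second, binary model could be trained the same
# way on a 'compliant'/'non-compliant' label, as described in the disclosure.
rating_model = lgb.LGBMClassifier(n_estimators=50)
rating_model.fit(X_train, y_train)
print(rating_model.predict(X_test)[:10])
```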

Regarding the justification for the development, the disclosure stresses that “the number of businesses classified as ‘Awaiting Inspection’ on the Food Hygiene Rating Scheme website has increased steadily since the beginning of the pandemic. This has been the key driver behind the development of the FHRS AI use case.” “The objective is to help local authorities become more efficient in managing the hygiene inspection workload in the post-pandemic environment of constrained resources and rapidly evolving business models.”

Interestingly, the disclosure states that the tool “has not been released to actual end users as yet and hence the maintenance schedule is something that cannot be determined at this point in time (June 2022). The Alpha pilot started at the beginning of April 2022, wherein the end users (the participating Local Authorities) have access to the FHRS AI service for use in their day-to-day workings. This section will be updated depending on the outcomes of the Alpha Pilot ...” It remains to be seen whether there will be future updates to the disclosure, but a copy-paste error means the disclosure also contains the same paragraph dated February 2022. This stresses the need to date and version (eg v.1, v.2) successive iterations of the same disclosure, which does not seem to be a field in the current template, as well as to create a repository of earlier versions of each disclosure.

The section on oversight stresses that “the system has been designed to provide decision support to Local Authorities. FSA has advised Local Authorities to never use this system in place of the current inspection regime or use it in isolation without further supporting information”. It also stresses that “Since there will be no change to the current inspection process by introducing the model, the existing appeal and review mechanisms will remain in place. Although the model is used for prioritisation purposes, it should not impact how the establishment is assessed during the inspection and therefore any challenges to a food hygiene rating would be made using the existing FHRS appeal mechanism.”

The disclosure also provides detailed information on IAs: “The different impact assessments conducted during the development of the use case were 1. Responsible AI Risk Assessment; 2. Stakeholder Impact Assessment; [and] 3. Privacy Impact Assessment.” Concerning the responsible AI risk assessment, in addition to a personal data issue that should belong in the DPIA, the disclosure reports three identified risks very much in line with the ones I had hinted at above: “2. Potential bias from the model (e.g. consistently scoring establishments of a certain type much lower, less accurate predictions); 3. Potential bias from inspectors seeing predicted food hygiene ratings and whether the system has classified the establishment as compliant or not. This may have an impact on how the organisation is perceived before receiving a full inspection; 4. With the use of AI/ML there is a chance of decision automation bias or automation distrust bias occurring. Essentially, this refers to a user being over or under reliant on the system leading to a degradation of human-reasoning.”

The disclosure presents related mitigation strategies as follows: “2. Integration of explainability and fairness related tooling during exploration and model development. These tools will also be integrated and monitored post-alpha testing to detect and mitigate potential biases from the system once fully operational; 3. Continuously reflect, act and justify sessions with business and technical subject matter experts throughout the delivery of the project, along with the use of the three impact assessments outlined earlier to identify, assess and manage project risks; 4. Development of usage guidance for local authorities specifically outlining how the service is expected to be used. This document also clearly states how the service should not be used, for example, the model outcome must not be the only indicator used when prioritising businesses for inspection.”

In this instance, the ATS disclosure is in itself useful because it allows the type of analysis above and, in case someone considers the situation unsatisfactory or would like to probe it further, there is a clear gateway to (try to) engage the entity responsible for this AI deployment. It is also interesting to see that the disclosure specifies that the private provider was engaged “As well as [in] a development role [… to provide] Responsible AI consulting and delivery services, including the application of a parallel Responsible AI sprint to assess risk and impact, enable model explainability and assess fairness, using a variety of artefacts, processes and tools”. This is clearly reflected in the ATS disclosure and could be an example of good practice where organisations lack that in-house capability and/or outsource the development of the AI. Whether that role should fall to the developer, or should rather be kept separate to avoid organisational conflicts of interest, is a discussion for another day.

Final thoughts

There seems to be a mixed picture on the usability of the ATS disclosures, with some of them falling short of (full) usability or of offering a clear pathway to engage the specific entity in charge of developing the algorithmic tool, especially where that was an outsourced provider. In those cases, the public authority that has implemented the AI (even if not the owner of the project) will have to deal with any issues arising from the disclosure. There is also mixed practice concerning linking to resources other than previously available (open) data (eg open source code, data sources), with only one project (GOV.UK) including them in the disclosures discussed above.

It will be interesting to see how this assessment scales up (to use a term) once disclosures increase in volume. There is clearly a research opportunity arising as soon as more ATS disclosures are published. As a hypothesis, I would submit that disclosure quality is likely to decline with volume, as well as with the withdrawal of whatever support the pilot phase has provided to participating institutions. Let’s see how that empirical issue can be assessed.

The other reflection I have to offer based on these first four disclosures is that there are points of information in the disclosures that can be useful, at least from an academic (and journalistic?) perspective, to assess the extent to which the public sector has the capabilities it needs to harness digital technologies (more on that soon in this blog).

The four reviewed disclosures show that there was one in-house development (GOV.UK), while the others were either procured (QCovid, whose disclosure includes a redacted copy of the contract) or contracted out, perhaps even directly awarded (ICO email classifier, FSA FHRS AI). There are also some between-the-lines indications that some of the implementations may have been developed somewhat haphazardly, unless there was strong pre-existing reliable statistical data (eg on information requests concerning change of business address). This in itself triggers questions about the procurement or commissioning strategies of institutions seeking to harness the potential of AI.

From this perspective, the ATS disclosures can be a useful source of information on the extent to which the adoption of AI by the public sector depends as strongly on third-party capabilities as the literature generally hypothesises and/or is starting to demonstrate empirically.