Competition Law and Policy and the Technologies of the Fourth Industrial Revolution
12 December 2024
The roundtable event was hosted by UCL CLES, Sciences Po, and the Inclusive Competition Forum.
On Thursday 5 December 2024, the Centre for Law, Economics and Society at UCL, Sciences Po, and the Inclusive Competition Forum hosted a roundtable event on Competition Law and Policy and the Technologies of the Fourth Industrial Revolution. The roundtable was convened by Prof. Ioannis Lianos (UCL) and Prof. Dina Waked (Sciences Po).
The roundtable was organised during the OECD competition week in Paris, and brought together sixteen participants, including representatives from competition authorities, academics and members of civil society. The technologies of the fourth industrial revolution – which include generative AI, synthetic biology, robotic automation, and quantum computing – are developing rapidly. As explored in a recent publication by Prof. Ioannis Lianos, they raise pressing questions about how regulators can safeguard competition, especially given that the third industrial revolution gave rise to digital markets, which have often tipped towards oligopolistic and monopolistic market structures.
The event focused, in particular, on generative AI. It was split into two halves, with the first dedicated to AI collusion and the second to the intersection between AI and economic power. Each discussion was moderated by a chair, held under the Chatham House Rule, and structured in an egalitarian manner, such that anybody could join a queue to contribute to the discussion. This led to a free and frank discourse about the legal, technological, economic, institutional and conceptual challenges facing competition law in this area.
Of particular interest was the participation in this event of civil society organisations, which are often not included in such discussions between top-level competition authorities and regulators. All participants noted this and valued their contributions considerably.
The following summary aims to capture some of the main insights from the discussion, rather than to provide an exhaustive account of a very rich conversation.
Algorithmic Coordination
The first panel focused on the thorny issue of algorithmic coordination. For a long time, the received wisdom among competition economists was that AI agents would not be able to collude, on account of the complex behaviour – such as the enforcement of punishment – required to pursue such a strategy. Yet recent years have seen AI agents beat humans in all sorts of comparable strategic settings, including chess, Go and poker. Further, it has long been known that autonomous agents applying an almost trivial tit-for-tat strategy can effectively ‘collude’ with each other to overcome the prisoner's dilemma, and more recent experimental evidence has shown that reinforcement learning algorithms can develop elaborate strategies to achieve the same ends in more challenging environments. The question at the forefront of the debate was whether autonomous collusion between AI agents should be considered tacit or explicit collusion, and therefore how far liability should extend to firms whose AI agents learn to collude.
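The tit-for-tat dynamic mentioned above can be illustrated in a few lines of code. This is a minimal sketch assuming standard textbook prisoner's dilemma payoffs; the payoff values and round count are illustrative assumptions, not figures discussed at the roundtable:

```python
# Two tit-for-tat agents in an iterated prisoner's dilemma.
# Payoff values below are the standard textbook assumptions.

PAYOFFS = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def play(rounds=100):
    a_hist, b_hist = [], []  # each agent sees only the other's past moves
    a_score = b_score = 0
    for _ in range(rounds):
        a = tit_for_tat(b_hist)
        b = tit_for_tat(a_hist)
        a_score += PAYOFFS[(a, b)]
        b_score += PAYOFFS[(b, a)]
        a_hist.append(a)
        b_hist.append(b)
    return a_score, b_score
```

Two tit-for-tat agents settle into permanent mutual cooperation – the analogue of sustained supra-competitive pricing – without any explicit agreement, which is precisely why the tacit/explicit boundary becomes hard to draw.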
The predominant view in the room was that competition law had evolved at a time when collusion was de facto explicit, and business executives would meet in “smoke-filled rooms” to discuss sharing markets with a view to maximising profit. As the behaviour of businesses evolved, and explicit collusion started to happen via letter, telephone or other means, the competition laws evolved so that these fact patterns could be matched with their underlying intent: to suspend the process of rivalry between firms and extract surplus from society that would not be accessible under a regime of competition. In short, the mode of collusion had changed, but the substance had not. AI agents are, on this view, just another means of agreeing not to compete, and should be condemned as such.
Yet several participants pointed out that things are not so simple. Cases thus far, such as RealPage or the online sales of posters and frames, have involved deliberate collusion via software that was purposefully designed to collude. In RealPage, the FTC was able to use its expert technologists to pinpoint where in the codebase the collusive functionality had been hard-coded, and could therefore map the fact pattern of the case relatively well onto existing law. However, competition authorities will surely, at some point, be faced with a more complex scenario in which collusion was (plausibly) not deliberate, but rather a strategy learned by several autonomous agents. AI agents raise the possibility that collusive behaviour might not fit neatly into the familiar dichotomy of tacit versus explicit collusion, but may instead lie somewhere in between. Should AI agents learn to collude on their own – and having them optimise for profit maximisation would likely steer them towards such strategies – then tricky questions arise pertaining to intent and liability under competition law.
In such scenarios, it was argued that responsibility and liability should rest with the firms deploying the AI technology. Just as companies have a duty to train employees not to violate the competition laws, they must also exercise due diligence to ensure that their automated software does not break the law. As one participant put it, AI tools need to be conceptualised as employees, not spreadsheets. Antitrust-infringing software might then be confined to “robot jail” – banned from use so that it cannot reoffend. The authors note that this could constitute, perhaps, a distinct kind of “negligent” collusion alongside the conventional tacit/explicit distinction. Participants remarked that conventional competition law, such as Article 101 TFEU or Section 1 of the Sherman Act, may be somewhat rigid when it comes to such cases, particularly given the criminal or quasi-criminal standard of proof required to prosecute them. It was suggested that market investigations, or another New Competition Tool, could fill this gap.
The debate also touched on the institutional and procedural changes necessary for agencies to successfully understand and prosecute anti-competitive behaviour by autonomous agents. The consensus was that competition regulators must a) staff themselves with appropriate technical expertise, and b) develop their own software tools to help prosecute these cases. With regard to the former, agencies face hiring challenges, especially vis-à-vis large tech companies, and there is not yet a consensus on how exactly such expertise is best put to use. As for the latter, software could be used to detect collusion by continuously monitoring firms’ pricing decisions and flagging suspicious patterns to authorities. Encouragingly, both strategies are already coming to fruition, as several competition authorities worldwide now feature data units and deploy AI tools.
Economic Power and AI
The second panel of the roundtable focused on the issue of market concentration in the AI stack, as has been highlighted in several recent reports (such as those issued by the CMA, the French Competition Authority, the US FTC and the Portuguese Competition Authority) and a Joint Statement by leading competition authorities. Some participants pointed out that there are perhaps too many reports and memos on the competitive landscape of AI, and that what is needed is to give teeth to these insights and make them actionable. Although a minority of participants held the view that AI is not inherently different from other technology, and that its deployment is rather “pedestrian,” most participants seemed to share a concern about the concentration of private power in this space. There was significant concern about the various dimensions along which power can be acquired and maintained by dominant AI firms, especially control over different “inputs” to the AI stack, such as labour, compute capacity, data, intellectual property, and energy. Participants noted that research mapping patent activity in the AI space, as well as the authors of groundbreaking AI research papers and of AI patents owned by dominant companies, shows a significant monopolisation of knowledge in the industry by a minority of firms.
Participants also highlighted the vast differences in bargaining power between dominant AI firms and other entities. These include governments and other public bodies, which find themselves beholden to powerful AI companies that control access to, and the ability to deploy, technology which could play a key role in unlocking innovation and productivity gains. It was noted that this issue is particularly acute for small and developing countries, particularly in the Global South, but also holds for countries with well-developed economies. Realistically, only the very largest countries have the legal and technical capacity, as well as the geopolitical heft, to bargain fairly – in the public interest – with the multinational corporations that dominate the AI space.
It was also emphasised that private entities, despite having significant economic power in their own right, are not immune from unequal bargaining power vis-à-vis Big Tech and AI firms, owing to the oligopolistic nature of the AI market and the fact that AI is typically sold as part of a vertically integrated package. Given the difficulty of creating and deploying AI technology from scratch, Big Tech firms selling AI products have a propensity to lock other firms into their ecosystems, profiting not only from selling the AI capabilities themselves but also from, for instance, selling cloud computing. This dynamic risks creating a relation of dependence between large segments of the global economy and AI firms.
The positive take-away is that the multi-faceted approach that AI companies have taken to furthering their economic power affords competition authorities several dimensions along which to control its possible abuse. Participants called for ambitious and novel theories of harm under conventional competition law, as well as for the creation and use of other tools. For instance, it was noted that France has an abuse of economic dependence tool, which could usefully be applied in these markets. Several participants suggested that, in the case of acquisitions by Big Tech players, the burden of proof could be shifted to the acquirer, meaning that mergers would only be approved if their pro-competitive effects were demonstrated. Furthermore, others pointed out that the merger control assessment should also examine whether a merger is the only way to secure the promised efficiency and innovation gains. The plethora of partnerships that we witness in the AI space shows that inter-firm collaborations come in many flavours, and that the gains from joint action could be achieved by looser structures that fall short of mergers.
Lastly, it was pointed out that we need to engage seriously with the role of competition law in the broader regulatory landscape of AI. AI sector-specific instruments, such as the AI Act, are silent about the governance dynamics of the AI industry. Whereas most questions so far have revolved around the risks of developing and deploying AI systems, the twin issues of how the AI players themselves should be governed and what market dynamics we want to foster have been neglected. Competition law may need to fill this “governance gap”, which will eventually require a more holistic perspective. Some suggested shifting the focus of the discussion altogether – the question is not whether small AI players need cloud access (which they do), but who their provider(s) should be – and opening the discussion on cloud as public infrastructure and “public-led stacks.”
Conclusion
It is clear that the nexus of AI and competition law is increasingly important, and that many questions – legal, academic, technological and institutional – remain to be resolved. In many ways, the discussion raised more questions than it answered, but it also served to articulate – with the combined input of academics, policymakers and civil society – a problématique appropriate for the space, and a research agenda with which to approach it.
The summary was prepared by Todd Davies (UCL) and Teodora Groza (Sciences Po).