This post examines whether competition law can remain effective in prospective AI development scenarios by looking at six variables: the capability of AI systems, the speed of development, key inputs, technical architectures, the number of actors, and the nature and relationship of these actors. For each of these, we analyse how different scenarios could impact effective enforceability. In some of these scenarios, EU competition law would remain a strong lever of control; in others it could be significantly weakened. We argue that despite challenges to regulators' ability to detect and remedy breaches, in many future scenarios the effective enforceability of EU competition law remains strong. EU competition law can significantly influence AI development and help ensure that its future development is safe and beneficial. We hope that our scenario-based framework for analysing different possibilities for future development can assist with further work at this intersection.
Why EU Competition Law is a Key Lever
In a world being transformed by artificial intelligence (AI), EU competition law is likely to be a powerful tool that could either help or hinder other aims of AI governance. EU competition law has jurisdiction over foreign companies that are active in the EU, such as US Big Tech (indeed, these companies have been the focus of EU competition law enforcement in recent years).
But EU competition law is also profoundly challenged by AI progress. The rapid and complex development of AI is already challenging effective enforceability – as well as reshaping our economies, politics and societies. 'Effective enforceability' is the term we use to refer to how effective competition law is in achieving its objectives. Effective enforceability can depend on a wide variety of factors. For present purposes, we will focus on whether:
- the conduct in question falls within the jurisdictional scope of competition law (and is not e.g. protected by sovereign immunity rules);
- the law is written and applied by the courts in a way that is in line with the legislators' intentions (and there are no lacunae or loopholes);
- regulators have the independence, resources and expertise to effectively detect infringements and bring a successful case; and
- competition law can effectively remedy and sanction the breach in a way that addresses the harm.
Much of the field of AI shares the goal of Artificial General Intelligence (AGI): "highly autonomous systems that outperform humans at most economically valuable work". AI is a general-purpose technology like the steam engine or electricity, and if it continues its rapid progress, its impact on the economy and society could be as 'transformative' as the industrial revolution – hence the term 'transformative AI' (TAI).
Can EU competition law remain 'effectively enforceable' in future AI development scenarios, especially towards the development of TAI? There is little agreement amongst experts about AI development trajectories, so anticipatory governance needs to consider a range of scenarios. We map this along six development variables, as set out below.
Effective Enforceability of Competition Law across Six Variables
1. Capability. This variable refers to the state of technological capabilities: the tasks and 'work' that can be accomplished by an AI system or collection of systems.
A more capable AI system could generate substantial wealth for its developer. On the one hand, this could lead to more antitrust scrutiny, because the wealth increase could provoke a backlash and more regulatory attention on the actor. On the other hand, if the wealth is generated in a less perceptible way, it could facilitate extreme regulatory capture, reducing the willingness of regulators to bring a case.
2. Speed of development. AI could be developed in a rapid and general way ('rapid take-off') or through more incremental, sequential and prosaic development, or anywhere on the spectrum between these two extremes. Progress in chess-playing was slow, but in language models has been fast. Speed could also vary throughout the development process.
Figure 2: Speed of AI development
Regulatory enforcement, and within that competition law, will likely be weaker the faster the speed of development. New technologies may breach the law in novel ways that should be caught by existing rules but instead fall through the cracks or give rise to lacunae in the law. The regulator may struggle to bring a case quickly enough to address the harm. A fine several years down the line may not be enough to restore competition, because competitors may already have been forced to exit the market. Meanwhile, the perpetrator firm may already have made windfall gains over those years sufficient to make the conduct worthwhile.
3. Key inputs into AI development. Three key inputs drive advances in AI: algorithmic innovation, computational resources (hardware or 'compute'), and data. All three are important, yet we can conceive of one of these inputs being the most constrained and therefore a bottleneck.
Figure 3: Extent to which each major input is a constraint.
The key input driving AI advancements could be relevant as part of the assessment of market power. An assessment of market power is particularly pertinent in an abuse of dominance or merger control scenario. The effective enforceability of competition law may depend on which key input is the bottleneck. If compute is the bottleneck, and progress relies on large amounts of computing power, then competition law may be more relevant, as compute is likely to be easier to regulate than data or talent: it is more easily measured and quantified as part of any market power assessment. As a remedy, compute may also be more easily 'transferred' or distributed than talent (which involves flight risk) or data (which faces data protection issues). 'Structural' separation of compute may also be easier (e.g. a divestment in a merger scenario to create a competitor).
4. Model of AI system. The technical-level model of a highly capable AI system could vary on a spectrum between, at one end, a singular agential model (such as a goal-directed autonomous RL agent) and, at the other, a more distributed, disaggregated 'Comprehensive AI Services' (CAIS) model.
Private actors generally do not benefit from state immunity. Overall, though, it will be more difficult to apply competition law to a private actor that is, or acts like, a state.
Conclusion
The future of AI development is highly uncertain. We sought to reduce that uncertainty by using a scenario-based framework to examine how different variables affect the effective enforceability of competition law. We summarise these findings below. Despite challenges, we find that effective enforceability remains strong in many scenarios, and therefore that competition law will likely remain a key shaper of future AI development. The AI governance and competition law fields must work together to help ensure future AI development is safe and beneficial to consumers and society, and we hope these findings will be useful to future work at this intersection.