Europe and the Global Race to Regulate AI

The EU wants to set the global rulebook for AI. This blog post explains the complex “risk hierarchy” that pervades the proposed AI Act, currently in the final stages of trilogue negotiation, and contrasts it with the US focus on “national security risks”. We point out shortcomings of the EU approach, which requires comprehensive (ex ante) risk assessments at the level of technology development. Using economic analysis, we distinguish exogenous and endogenous sources of potential AI harm arising from input data. We are sceptical that legislators can anticipate the future of a general-purpose technology such as AI. From the perspective of encouraging ongoing innovation, we propose that (ex post) liability rules can provide the right incentives to improve data quality and AI safety.

Who Decides What Counts as Disinformation in the EU?

Who decides what counts as “disinformation” in the EU? Not public authorities, because disinformation is not directly sanctioned in the Digital Services Act (DSA) or other secondary legislation. Nor Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs), which avoid editorial decisions to maintain their legal status as intermediaries with limited liability. Instead, the delicate task of identifying disinformation is being undertaken by other private organisations whose place of administration and activity, purpose, funding and organisational structure appear problematic in terms of the legitimacy and even legality of the fight against disinformation. This blog post maps out the relevant (private) actors, namely the ad industry, fact-checking organisations and so-called source-raters.

A Step Forward in Fighting Online Antisemitism

Online antisemitism is on the rise. Especially since the recent terror attack by Hamas in Southern Israel, platforms like X are being (mis)used to spread antisemitism. Against this backdrop, this blog post analyses the legal framework for combatting online antisemitism in the EU and the regulatory approaches taken so far. It addresses the new Digital Services Act (DSA), highlighting some of the provisions that might become particularly important in the fight against antisemitism. The DSA improves protection against online hate speech in general and antisemitism in particular by introducing procedural and transparency obligations. However, it does not provide any substantive standards against which the illegality of such manifestations can be assessed. In order to effectively reduce online antisemitism in Europe, we need to think further, as outlined in the following blog post.

Automated Decision-Making and the Challenge of Implementing Existing Laws

Who loves the latest shiny thing? Children maybe? Depends on the kid. Cats and dogs perhaps? Again, probably depends. What about funders, publishers, and researchers? Now that is an easier question to answer. Whether in talks provided by the tax-exempt ‘cult of TED’, or in open letters calling for a moratorium, the attention digital technologies receive today is extensive, especially those that are labelled ‘artificial intelligence’. This noise comes with calls for a new ad hoc human right against being subject to automated decision-making (ADM). While there is merit in adopting new laws dedicated to so-called AI, the procedural mechanisms that can implement existing law require strengthening. The perceived need for new substantive rules to govern new technology is questionable at best, and distracting at worst. Here we would like to emphasise the importance of implementing existing law more effectively in order to better regulate ADM. Improving procedural capacities across the legal frameworks on data protection, non-discrimination, and human rights is imperative in this regard.

Be Careful What You Wish For

The European Court of Human Rights has issued some troubling statements on how it imagines content moderation. In May, the Court stated in Sanchez that “there can be little doubt that a minimum degree of subsequent moderation or automatic filtering would be desirable in order to identify clearly unlawful comments as quickly as possible”. Recently, it reiterated this position. This shows not only a surprising lack of knowledge of the controversial discussions surrounding the use of filter systems (in fact, there is quite a lot of doubt), but also an uncritical and alarming approach towards AI-based decision-making in complex human issues.

The Legal Art of Judging Art

In another round of the case “Metall auf Metall”, the German Federal Court of Justice is asking the Court of Justice of the European Union how to define the concept of pastiche. The CJEU’s response will not only be crucial for the rules of artistic imitation, but will also set the legal framework for the digital reference culture of millions, as expressed in memes and GIFs every day. This article takes the referral to the CJEU as an opportunity to recapitulate the proceedings with a sideways glance at the US Supreme Court’s Warhol case, whose discussion of transformative use addresses the questions the CJEU will have to answer when defining “pastiche”. How should we deal with the art of imitation?

An Interdisciplinary Toolbox for Researching the AI Act

The proposed AI Act (AIA) will fundamentally transform the production, distribution, and use of AI systems across the EU. Legal research has an important role to play in both clarifying and evaluating the AIA. To this end, legal researchers may employ a legal-doctrinal method, focusing on the AIA’s provisions and recitals to describe or evaluate its obligations. However, legal-doctrinal research is not a panacea that can fully operationalise or evaluate the AIA on its own. Rather, with the support of interdisciplinary research, we can better understand the AIA’s vague provisions, test its real-life application, and create practical design requirements for the developers of AI systems. This blog post gives a short glimpse into the methodological toolbox for researching the AI Act.

Europe’s Digital Constitution

In the United States, European reforms of the digital economy are often met with criticism. Repeatedly, eminent American voices have called for an end to Europe’s “techno-nationalism.” Yet this common charge of digital protectionism, while plausible, is overly simplistic. Instead, this blog post argues that European digital regulations reflect a host of values that are consistent with the broader European economic and political project. The EU’s digital agenda reflects its manifest commitment to fundamental rights, democracy, fairness, and redistribution, as well as its respect for the rule of law. These normative commitments, and the laws implementing those commitments, can be viewed in aggregate as Europe’s digital constitution.

Trivialising Privacy through Tribunals in India

On 11th August 2023, India’s Digital Personal Data Protection Act, 2023 (‘DPDP Act’) received Presidential assent. The Act’s passing is critical in light of increasing concerns about data security and surveillance in India, including allegations that the government has been illegally using spyware against activists. Moreover, the government and its agencies are major data fiduciaries, with access to various identification and biometric data that have in the past been breached on a large scale. Given this, it is vital that the DPDP Act is able to function effectively and independently against the government in cases of non-compliance. However, a novel provision bestowing appellate jurisdiction on a Tribunal that lacks both the necessary expertise and independence is likely to hinder this goal.

A Plea for Proportionality

In recent months, the burning of the Koran in Sweden has made headlines and caused severe anger in many parts of the Muslim world, as well as bewilderment across the EU as to why Sweden continues to permit the practice. The Swedish Government is currently looking into how the Public Order Act could be amended to ban the burning of the Koran. In this blog post, I explain why it might be wise to do so and how this might be done.