Articles for tags: Content Moderation, Platform Regulation

Auditing Platforms under the Digital Services Act

Taming the power of online platforms has become one of the central concerns of European Union policy in the digital age. The DSA increases the accountability of very large online platforms and very large online search engines by introducing an auditing system. Yet the audit process as defined by the DSA risks producing consequences counterproductive to European policy objectives. From a constitutional perspective, the outsourcing of competence and decision-making from public to private actors articulates a system of compliance and enforcement based on multiple centres of power.

ByteDance v. Commission

The Digital Markets Act (DMA) is a revolutionary tool to regulate EU digital markets: it complements competition law by imposing ex ante obligations on the largest digital undertakings. The General Court judgment in the ByteDance case was the first test of the limits of this expedited enforcement and resulted in a remarkable win for the Commission. The Court dismissed ByteDance's appeal against the European Commission's decision to designate ByteDance, with its social network TikTok, as a gatekeeper under the DMA.

Deepfakes, the Weaponisation of AI Against Women and Possible Solutions

In January 2024, social media platforms were flooded with intimate images of pop icon Taylor Swift, quickly reaching millions of users. The abusive content, however, was not real: the images were deepfakes, synthetic media generated by artificial intelligence (AI) to depict a person's likeness. But the threat goes beyond celebrities. Virtually anyone, with women being disproportionately targeted, can be a victim of non-consensual intimate deepfakes (NCID). Although most agree that companies must be held accountable for disseminating potentially extremely harmful content like NCIDs, effective legal responsibility mechanisms remain elusive. This article proposes concrete changes to content moderation rules as well as enhanced liability for the AI providers that enable such abusive content in the first place.

A Primer on the UK Online Safety Act

The Online Safety Act (OSA) has now become law, marking a significant milestone in platform regulation in the United Kingdom. The OSA introduces fresh obligations for technology firms to address illegal online content and activities, covering child sexual exploitation, fraud, and terrorism, and adds the UK to the array of jurisdictions that have recently introduced new online safety and platform accountability regulations. However, the OSA is notably short on specifics. In this post, we dissect key aspects of the OSA's structure and draw comparisons with similar legislation, including the EU Digital Services Act (DSA).

A Step Forward in Fighting Online Antisemitism

Online antisemitism is on the rise. Especially since the recent terror attack by Hamas in southern Israel, platforms like X have been (mis)used to spread antisemitism. Against this backdrop, this blog post analyses the legal framework for combating online antisemitism in the EU and the regulatory approaches taken so far. It addresses the new Digital Services Act (DSA), highlighting some of the provisions that might become particularly important in the fight against antisemitism. The DSA improves protection against online hate speech in general, and antisemitism in particular, by introducing procedural and transparency obligations. However, it does not provide any substantive standards against which the illegality of such manifestations can be assessed. To effectively reduce online antisemitism in Europe, we need to think further, as outlined in this blog post.

Data After Life

Contract law in Europe currently has little grasp of the balancing of interests among social media users, their heirs, platforms, and society at large, which means that platforms play a key role in determining how digital legacies are handled. A human rights perspective can offer starting points for reforms that do more justice to the protection of the digital identities of social media users.

Monetising Harmful Content on Social Media

The possibility to profit from the dissemination of harmful content that triggers views, engagement, and ultimately monetisation concerns not only the contractual relationship between social media platforms and influencers, but also affects how other users enjoy digital spaces. The monetisation of harmful content by influencers should be a trigger, first, to expand the role of consumer law as a form of content regulation fostering transparency and, second, to propose a new regulatory approach to mitigate the imbalance of power between influencers and users in social media spaces.

Rethinking the Regulation of Financial Influencers

The growth of social media has led to an unprecedented rise in financial influencers, so-called finfluencers, who share investment ideas and opinions with a global audience even if they are not qualified or licensed to provide financial advice. This can be particularly dangerous for retail investors with low levels of financial literacy. The regulation of financial influencers is a complex and multifaceted issue that demands a comprehensive approach; the current regulatory framework may not be adequate to the task.

Pay to Play

The rise of subscription-based business models in social media is part of a broader trend that can be observed in many industries. Against this background, European consumer law needs to be adapted to the new risks of the subscription economy. However, it is not enough to give consumers rights on paper, nor is it sufficient to inform consumers about their rights in the small print. Effective consumer protection in digital markets requires user interface design that enables consumers to exercise their rights with a simple click.

The Shape of Personalisation to Come

While targeted advertising is still a money-making machine for social media platforms, its motor has begun to sputter. With artificial intelligence, however, the potential is even greater for companies to discover and exploit biases and vulnerabilities of which consumers themselves may not be aware. The point of this dive into the economic engineering of personalised environments on digital platforms is to highlight the intentional creation of algorithmically curated choice sets for consumers. How can the law ensure the fairness of these choice sets?