Articles for tag: AI Regulation, EU, KI, KI-Verordnung

The EU AI Act’s Impact on Security Law

The process of integrating European security law is imperfect and unfinished – and given the constraints posed by the European Treaties, it is likely to remain that way for the foreseeable future. This inevitable imperfection, lamentable as it may be, creates opportunities for legal scholarship. Legal scholars are needed to explore the gaps and cracks in this new security architecture and, ultimately, to develop proposals for how to fix them. This debate series, a product of VB Security and Crime, takes the recently adopted AI Act as an opportunity to do just that: it brings together German and international legal scholars to explain, analyze, and critique the EU AI Act’s impact on security law from both an EU and a German national law perspective.

Who Let the Bots Out

As artificial intelligence revolutionizes modern warfare, systems like Israel’s Lavender and the Clearview AI tools used by Ukraine are transforming combat with precision and efficiency. This advancement has sparked an urgent debate on the responsible use and governance of AI in the military, with 57 countries signing the Political Declaration on AI’s military applications and urging adherence to international law. Central to this debate is the question of accountability: who is responsible when AI systems violate the law? This blog post argues that state responsibility for AI-related violations remains viable within existing legal frameworks.

Frisch gewagt ist nur halb gewonnen

No success without training and good training material. What has long applied to humans is no different for artificial intelligence (“AI”). AI requires datasets of high quantity and quality in order to generate human-like creative output. These datasets include copyrighted works (such as photos or texts), which companies also use without first obtaining the authors’ consent. A ruling by the Regional Court of Hamburg (LG Hamburg) now attempts to resolve this tension, but it only partially succeeds.

Of Artificial Intelligence and Fundamental Rights Charters

The Council of Europe has adopted the Framework Convention on Artificial Intelligence – the first of its kind. Notably, the Framework Convention includes provisions specifically tailored to enable the EU’s participation. At the same time, the EU has developed its own framework around AI. I argue that the EU should adopt the Framework Convention, taking an essential first step toward integrating the fundamental rights protections of the EU Charter. Ultimately, this should create a common constitutional language and build a bridge between the EU and the Council of Europe to strengthen fundamental rights in Europe.

Deepfakes, the Weaponisation of AI Against Women and Possible Solutions

In January 2024, social media platforms were flooded with intimate images of pop icon Taylor Swift, quickly reaching millions of users. The abusive content was not real, however; the images were deepfakes – synthetic media generated by artificial intelligence (AI) to depict a person’s likeness. But the threat goes beyond celebrities. Virtually anyone can be a victim of non-consensual intimate deepfakes (NCID), with women being disproportionately targeted. Although most agree that companies must be held accountable for disseminating potentially extremely harmful content like NCIDs, effective legal responsibility mechanisms remain elusive. This article proposes concrete changes to content moderation rules as well as enhanced liability for AI providers that enable such abusive content in the first place.

Gaza, Artificial Intelligence, and Kill Lists

The Israeli army has developed an artificial intelligence-based targeting system called “Lavender”. This approach promises faster and more accurate targeting; however, organizations such as Human Rights Watch (HRW) and the International Committee of the Red Cross (ICRC) have warned of accountability deficits for violations of International Humanitarian Law (IHL). In the following, we examine these concerns and show how responsibility for IHL violations remains attributable to a state that uses automated or semi-automated systems in warfare.