
Research Project FAKE-ID

FAKE-ID: Ethics & Law

As with the regulation of AI in general, the regulation of deepfakes was initially approached from an ethical rather than a legal perspective. However, it would be wrong to treat legal and ethical regulation as separate, as the two regulatory perspectives complement each other. The line between legal and ethical regulatory frameworks can become blurred, as shown by the UNESCO Recommendation on the Ethics of Artificial Intelligence (SHS/BIO/REC AIETHICS/2021), adopted by all 193 UNESCO member states in November 2021. Despite the title of the document, the principles it contains are not only ethical but also legal in nature, albeit without binding legal force.

A regulation governing AI is currently being developed at the European level; upon its final adoption and entry into force, it will be immediately binding in all EU Member States. The draft AI Regulation also contains provisions on deepfakes. In preparing its proposal, the European Commission drew on the AI Ethics Guidelines published by the High-Level Expert Group in April 2019.

With regard to deepfakes, Article 52(3) of the draft AI Regulation will be the key provision. It requires that all deepfakes circulating online be labelled as such, unless they are used by law enforcement authorities for legitimate purposes or their use constitutes the exercise of rights and freedoms while respecting the rights and freedoms of third parties.

Regardless of whether one takes a legal or an ethical perspective, common but general principles for AI regulation have emerged that also apply to deepfakes, in particular transparency, explainability and traceability throughout development and application. However, these principles remain relatively abstract and require further clarification in individual cases, particularly with regard to the risks involved. For example, regulating deepfakes does not present the same challenges as regulating AI-powered facial recognition tools, although the two technologies overlap in some cases.

The draft AI Regulation recognises that deepfakes pose risks even in the context of legitimate use cases. The risks associated with deepfakes include fraud, non-consensual pornography, disinformation and false evidence. The European Commission’s proposal takes a risk-based approach, dividing AI systems into four groups: prohibited, high-risk, low-risk, and minimal- or no-risk systems. Deepfakes fall into a sui generis medium-risk category and are subject to transparency obligations. However, when police forces create or use deepfakes for law enforcement purposes, they fall into the high-risk category.

Anna Louban
Hochschule für Wirtschaft und Recht – FB 5
Alt-Friedrichsfelde 60
10315 Berlin

Phone: +49 30 30877 2695
