Does the EU AI Act Regulate Deepfakes?

Watermarking and Deepfake Regulation under the EU AI Act: A Closer Look

The European Union's Artificial Intelligence Act (EU AI Act) introduces groundbreaking rules for AI systems, including those that generate or manipulate image, audio, or video content (such as deepfakes) or text published to inform the public on matters of public interest [1]. To increase transparency, the Act mandates watermarking, a unique signature attached to the output of an AI model, as a technical measure [2].

Article 50 of the EU AI Act (Article 52 in earlier drafts) requires providers of AI systems that generate synthetic content to mark their outputs in a machine-readable format so that they are detectable as artificially generated or manipulated [2]. This labelling aims to help platforms and users distinguish between real and AI-generated media, potentially reducing the unintentional spread of disinformation.
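To make the idea of a machine-readable marker concrete, the sketch below hides a short ASCII tag in the least-significant bits of an image's red channel and reads it back. This is a minimal illustration only, assuming Python with the Pillow library; the tag text and file paths are hypothetical, and LSB embedding is just one naive technique. Production provenance schemes (statistical model watermarks, signed C2PA-style metadata) are far more sophisticated, and the Act itself does not prescribe any particular method.

```python
# Minimal, illustrative LSB watermarking sketch (NOT what the AI Act mandates,
# and not a production technique). Assumes Pillow is installed and the image
# has at least len(TAG) * 8 pixels.
from PIL import Image

TAG = "AI-GENERATED"  # hypothetical machine-readable marker


def embed(src_path: str, dst_path: str) -> None:
    """Write TAG, bit by bit, into the red channel's least-significant bits."""
    img = Image.open(src_path).convert("RGB")
    px = img.load()
    width = img.size[0]
    bits = "".join(f"{ord(c):08b}" for c in TAG)
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | int(bit), g, b)  # overwrite the red LSB
    img.save(dst_path, "PNG")  # lossless format, so the bits survive


def detect(path: str) -> bool:
    """Read the first len(TAG) * 8 red LSBs and check they decode to TAG."""
    img = Image.open(path).convert("RGB")
    px = img.load()
    width = img.size[0]
    n = len(TAG) * 8
    bits = "".join(str(px[i % width, i // width][0] & 1) for i in range(n))
    decoded = "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, n, 8))
    return decoded == TAG
```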

However, watermarking has its limitations. Its technical robustness can be compromised by editing, recompression, or other manipulations, limiting its reliability over time and across platforms [2]. Adoption and enforcement also present challenges, since watermarking only works if AI providers, deployers, and platforms adopt compatible standards and detection tools [2]. And malicious actors can simply omit watermarks from harmful deepfakes, or regenerate content to strip them out [1][3].
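That fragility is easy to demonstrate with the sketch from the previous section (again purely illustrative; the file names are hypothetical): a single lossy re-save is enough to erase a naive LSB marker.

```python
# Reuses embed()/detect() from the sketch above. One lossy JPEG re-save
# scrambles the least-significant bits and erases the naive marker.
from PIL import Image

embed("original.png", "marked.png")
print(detect("marked.png"))           # True: the marker survives lossless PNG

Image.open("marked.png").save("recompressed.jpg", "JPEG", quality=85)
print(detect("recompressed.jpg"))     # almost certainly False after recompression
```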

Moreover, user awareness and trust are crucial factors. Watermarks alone do not educate users about deepfakes or guarantee they will understand or notice the markers, necessitating complementary educational efforts [3]. Watermarking addresses transparency and identification but does not prevent harm from deepfakes nor enforce legal ownership rights over faces and voices [3][4].

Several improvements could make watermarking more effective: industry-wide standardization, more robust detection methods, integration with legal frameworks, public education, and continued technical innovation could all bolster watermarking as a deepfake regulation tool [2][3][4].

In summary, watermarking serves as a key transparency mechanism in the EU AI Act's framework for managing deepfakes, but its effectiveness depends on broad adoption, technical robustness, and integration with legal and educational measures. The Act classifies deepfakes as "limited risk" AI subject to transparency obligations, relying on such measures to balance innovation with public trust in AI-generated content [1][2].

Tune in to our Q&A series, hosted every other Thursday on The Sumsuber and on social media (Instagram and LinkedIn), where our AI Policy and Compliance Specialist, Natalia Fritzen, discusses the EU AI Act and deepfake regulation and answers your questions. Don't forget to submit your own questions via Instagram and LinkedIn!

The EU AI Act was approved by the European Parliament on March 13, 2024, and enters into force 20 days after its publication in the Official Journal of the EU, with most of its obligations applying in stages over the following years. Despite concerns about the effectiveness of watermarks, the Act acknowledges the potentially disruptive effect of "synthetic content" (including deepfakes) on modern societies [1]. However, the circumstances under which the disclosure requirement for deployers can be loosened are not specified, generating uncertainty, and the Act offers few concrete measures for handling non-compliance with these provisions. It also does not prohibit deepfakes outright; instead, it sets transparency requirements for providers and deployers of technologies capable of creating synthetic content.
