European Commission targets AI-generated disinformation ahead of elections

To protect European elections from misinformation, the European Commission is requiring major technology platforms to detect and label AI-generated content, underscoring a robust approach to maintaining democratic integrity.

In a proactive measure to protect the integrity of the upcoming European elections, the European Commission has ordered tech giants such as TikTok, X (formerly Twitter) and Facebook to step up efforts to detect AI-generated content. This initiative is part of a broader strategy to combat misinformation and protect democratic processes from potential threats posed by generative AI and deepfakes.

Mitigation measures and public consultation

The Commission has prepared draft election security guidelines under the Digital Services Act (DSA). This guidance emphasizes the importance of clear and consistent labeling of AI-generated content that may substantially resemble or misrepresent real people, objects, places, entities, or events. These guidelines also highlight the need for platforms to provide users with tools to label AI-generated content, enhancing transparency and accountability across digital spaces.
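For a sense of what machine-readable labeling might look like in practice, the sketch below attaches an "AI-generated" provenance label to a piece of content as a JSON sidecar. The field names and the hash-based content identifier are illustrative assumptions, not a format prescribed by the draft guidelines.

```python
import hashlib
import json
from datetime import datetime, timezone


def label_ai_content(content: bytes, generator: str, disclosure: str) -> str:
    """Build a JSON sidecar declaring a piece of content as AI-generated."""
    label = {
        # The content is identified by its SHA-256 hash so the label survives renaming.
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,
        "generator": generator,    # e.g. the model or tool that produced the content
        "disclosure": disclosure,  # human-readable notice shown to users
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(label, indent=2)


if __name__ == "__main__":
    image_bytes = b"...binary image data..."
    print(label_ai_content(image_bytes, "example-image-model",
                           "This image was generated with AI."))
```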

A public consultation period is underway for stakeholders to provide feedback on these draft guidelines until March 7. It focuses on implementing “reasonable, proportional and effective” mitigation measures to prevent the creation and dissemination of AI-generated misinformation. Key recommendations include watermarking AI-generated content for easy recognition and applying content moderation systems to enable platforms to efficiently detect and manage such content.
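As an illustration of the kind of watermarking the recommendations point to, here is a minimal sketch of a toy statistical "green-list" detector for generated text, loosely inspired by published research on text watermarking. The hashing scheme, green-list fraction, and detection threshold are assumptions for demonstration, not anything specified by the Commission.

```python
import hashlib

GREEN_FRACTION = 0.5        # assumed share of tokens marked "green" per context
DETECTION_THRESHOLD = 0.7   # illustrative cutoff for flagging text as watermarked


def is_green(prev_token: str, token: str) -> bool:
    """Pseudo-randomly assign a token to the green list based on its predecessor."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255 < GREEN_FRACTION


def green_score(text: str) -> float:
    """Fraction of tokens on the green list; watermarked text scores unusually high."""
    tokens = text.split()
    if len(tokens) < 2:
        return 0.0
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)


def looks_watermarked(text: str) -> bool:
    return green_score(text) > DETECTION_THRESHOLD


if __name__ == "__main__":
    sample = "example output from a hypothetical watermarked model"
    print(f"green score: {green_score(sample):.2f}, flagged: {looks_watermarked(sample)}")
```

A production detector would of course use the model's real tokenizer and a proper statistical test rather than a fixed threshold; the sketch only conveys the idea that a hidden, verifiable bias makes machine-generated text easy to recognize.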

Emphasis on transparency and user empowerment

The proposed guidelines advocate for transparency and urge platforms to disclose the sources of information used to generate AI content. This approach aims to enable users to distinguish between authentic and misleading content. Big tech companies would also be encouraged to incorporate safeguards to prevent the creation of false content that could influence user behavior, especially in election situations.

EU legislative framework and industry response

The guidance takes inspiration from the EU’s recently approved AI Act and the non-binding AI Pact, and highlights the EU’s commitment to regulating the use of generative AI tools such as OpenAI’s ChatGPT. Meta, the parent company of Facebook and Instagram, has announced its intention to label AI-generated posts, in line with EU calls for greater transparency and user protection against fake news.

The role of the Digital Services Act

The DSA plays a key role in this initiative, applying to a wide range of digital businesses and imposing additional obligations on Very Large Online Platforms (VLOPs) to mitigate systemic risks in areas such as democratic processes. The provisions of the DSA aim to ensure that information provided using generative AI relies on reliable sources, particularly in election situations, and that platforms take proactive steps to limit the impact of AI-generated “hallucinations.”

Conclusion

As the European Commission prepares for the June elections, these guidelines represent an important step towards ensuring that the online ecosystem remains a space for fair, informed and democratic participation. By addressing the challenges posed by AI-generated content, the EU aims to strengthen electoral processes against disinformation and maintain the integrity and security of democratic institutions.

Image source: Shutterstock
