On 11 December 2025, CCIA Europe hosted its European AI Roundtable in Brussels, bringing together EU policymakers, industry representatives, academics, and civil society to discuss Article 50 of the EU Artificial Intelligence Act and the forthcoming Transparency Code of Practice. CCIA Europe is the European arm of the Computer & Communications Industry Association, an international, not-for-profit trade association representing a broad cross-section of communications and technology companies.
During the Roundtable, Joan Barata presented a new study on the AI Act’s transparency obligations. In it, he argues that the Transparency Code should take a risk-based approach: rather than applying indiscriminately to all AI-generated content, the Code should focus on AI systems that pose real risks of deception, impersonation, or manipulation. The analysis warns that overly broad or rigid transparency obligations may undermine user trust and quickly become outdated as technologies evolve.

Discussions centred on how transparency requirements, such as the labelling of AI-generated content and of interactive AI systems, can be made effective for users without creating excessive compliance burdens or information overload.
Participants at the Roundtable echoed these concerns, highlighting the technical and practical limitations of current watermarking and content detection mechanisms. The discussion underscored the need for flexible and proportionate rules that protect users while remaining workable for developers.
Joan Barata’s study is available at:
Barata, Joan, Transparency Obligations for all AI Systems: Article 50 of the AI Act (December 10, 2025). Available at SSRN: https://ssrn.com/abstract=5902402 or http://dx.doi.org/10.2139/ssrn.5902402