This Saturday, August 2, 2025, marks a key milestone for artificial intelligence in the European Union: the entry into force of the second phase of the AI Act, one year after the regulation was adopted.
This regulation, which aims to govern the development and use of AI, imposes new obligations on industry players while laying the foundations for responsible and transparent regulation.
The Yiaho team looks back at what you need to know about this regulatory step forward and its impact on Europe’s tech landscape.
AI ACT: Greater transparency for AI models
One of the pillars of this new chapter concerns the transparency requirements imposed on general-purpose AI models.
Companies will now have to provide detailed information about the data used to train their algorithms, including data protected by copyright. This measure aims to protect content creators by enabling them to identify whether their works were used and, if so, to request fair compensation. A significant step forward for rights holders, who will have concrete tools to assert their rights.
For the most powerful AI models, described as “carrying systemic risks,” the rules go further.
These technologies, capable of influencing entire systems at scale, will have to comply with strict obligations, such as rapid reporting of serious incidents and the implementation of robust mechanisms to manage risks. This approach aims to ensure that AI remains a safe, controlled tool, even in its most advanced applications.
Also read: AI ACT: 45 companies call for a postponement of the European AI regulation
Which AI players are signing up to the AI ACT code of good practice?
The AI Act also introduces a “code of good practice,” a set of recommendations designed to make it easier for companies to comply. Signatories to this code will benefit from a reduced administrative burden to demonstrate their compliance with European standards.
This guide, published on July 10, 2025, details expectations in terms of transparency (model descriptions, training methods, limitations), respect for copyright, and security (rigorous testing, cybersecurity).
Several major AI players have already announced their intention to sign up:
- our Yiaho platform;
- the start-up Mistral AI;
- the US giant OpenAI, known for its ChatGPT;
- Google, with Gemini.
Meta, however, has chosen not to align, criticizing a regulation it considers too restrictive and a source of legal uncertainty.
Heavy penalties on the horizon
The AI Act doesn’t just set rules: it provides oversight mechanisms and penalties to ensure compliance. From August 2, 2026, new AI models will have to fully comply with the regulation, while existing models will have until 2027 to adapt.
Member States, starting this Saturday, must designate their supervisory authorities. In France, the CNIL could take on this role, with the task of ensuring the rules are applied.
Companies that fail to meet these obligations face substantial fines: up to €35 million or 7% of their annual worldwide turnover, whichever is higher. A significant financial threat that underscores the EU’s commitment to enforcing this legislation.
See also: The AI Act: a legal framework that’s too restrictive for AI in Europe?
A balance between innovation and responsibility
With this second phase of the AI Act, the European Union is seeking to reconcile technological innovation with citizen protection. By imposing clear rules, it aims to create an environment where AI can develop while respecting fundamental rights and minimizing risks. While some tech giants, like Meta, see it as a brake on innovation, others see it as an opportunity to build more ethical and transparent AI.
August 2, 2025 therefore marks a turning point for the AI sector in Europe. It remains to be seen how companies will adapt to these new requirements and whether this regulatory framework will become a model for other regions of the world. One thing is certain: European AI is entering a new era, where transparency and accountability will be at the heart of the debate.