EU Mandates Transparency for AI Models, With Voluntary Rules to Take Effect in August

The European Union has unveiled a code of practice to enhance transparency in AI development, requiring companies to publicly track how and when AI models fail. The rules, which focus on copyright protections, transparency, and public safety, will initially be voluntary for major AI developers starting August 2 but will become mandatory under the EU's AI Act in August 2026. Companies adhering to the voluntary guidelines may face reduced administrative burdens, while those that reject them could incur higher compliance costs.
Key provisions include a ban on training AI models on pirated materials and a requirement that companies disclose details about their training data and model design choices. The EU also expects firms to respect paywalls and robots.txt restrictions, preventing unauthorized scraping of websites. Additionally, companies must report serious incidents, such as cybersecurity breaches or physical harm caused by AI, within five to ten days.
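For context on what honoring robots.txt means in practice, the sketch below uses Python's standard-library `urllib.robotparser` to check whether a crawler may fetch a page. The robots.txt content and the bot names (`ExampleAIBot`, `GenericBot`) are hypothetical, invented for illustration; the code is not part of the EU rules themselves.

```python
from urllib import robotparser

# Hypothetical robots.txt a site might publish to opt out of AI scraping.
ROBOTS_TXT = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The hypothetical AI-training crawler is blocked from the entire site.
print(rp.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
# Other agents may fetch public pages but not the disallowed path.
print(rp.can_fetch("GenericBot", "https://example.com/article"))    # True
print(rp.can_fetch("GenericBot", "https://example.com/private/x"))  # False
```

A crawler that respects these directives simply skips any URL for which `can_fetch` returns `False` for its user-agent string.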
The AI industry participated in drafting the rules, but some companies have urged the EU to delay enforcement, warning that the code could stifle innovation. Its implementation marks a significant step toward regulating AI while balancing innovation and accountability. Companies like Google and Meta are reviewing the guidelines, with Google stating its support for secure AI models and innovation-friendly environments.
Published: July 10, 2025