The EU AI Act is live!
The European Commission’s AI Act is the first comprehensive legal framework governing the use of Artificial Intelligence in the bloc, designed to foster trustworthy AI worldwide.
The Commission today set the ball rolling on the Act, bringing the framework into effect across the EU.
The law, agreed upon in 2023 and a first globally, addresses the risks of AI and sets clear requirements and obligations for developers and deployers in relation to specific uses of Artificial Intelligence. The regulation also aims to reduce the administrative and financial burdens it could otherwise place on businesses, especially Small and Medium Enterprises (SMEs).
To that end, the Commission had already announced the AI Innovation Package, a set of policy measures to support European SMEs and startups in developing trustworthy AI in line with EU rules. Together with the Coordinated Plan on AI, the AI Act forms part of a wider package of AI policy measures guiding the development and deployment of AI in the bloc.
The primary goal of the Act is to foster trust by ensuring Europeans feel safe when using AI. The risks are real: an AI system may, for example, take a decision or action that is difficult to assess or review, leaving the people affected at a disadvantage.
More on the EU AI Act
The AI Act is designed to ensure that AI systems used within the EU are trustworthy and safe. It introduces a risk-based approach to regulation, categorizing AI applications into four risk levels: low/no risk, limited risk, high risk, and unacceptable risk. Each category comes with its own set of requirements and obligations for developers and users (a purely illustrative sketch of this tiering follows the list below).
- Low/No-Risk AI: Most AI applications fall into this category and are not subject to stringent regulations.
- Limited-Risk AI: Includes technologies such as chatbots and tools that can produce deepfakes. These systems must meet transparency requirements so that users are not deceived.
- High-Risk AI: Covers applications such as biometric identification, facial recognition, and AI used in critical sectors like education and employment. These systems must be registered in an EU database and comply with strict risk and quality management standards.
- Unacceptable-Risk AI: Practices considered a clear threat to people's safety and rights, such as social scoring by public authorities, are banned outright.
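To make the tiered structure concrete, here is a minimal, purely illustrative Python sketch of how a deployer might triage example use cases into these tiers. The tiers mirror the list above, but the use-case mapping and function names are hypothetical; the Act's actual classification rules live in its legal text and annexes, not in any code.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers mirroring the AI Act's risk categories."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "EU database registration plus risk and quality management"
    LIMITED = "transparency duties (e.g. disclose chatbots, label deepfakes)"
    MINIMAL = "no additional obligations"

# Hypothetical mapping from example use cases to tiers. The Act itself
# assigns categories in its legal text and annexes, not via any API.
EXAMPLE_USE_CASES: dict[str, RiskTier] = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Return the illustrative tier and headline obligation for a use case."""
    tier = EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)
    return f"{use_case}: {tier.name} risk -> {tier.value}"

if __name__ == "__main__":
    for case in EXAMPLE_USE_CASES:
        print(obligations_for(case))
```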
Key Provisions and Compliance
The AI Act introduces several key provisions aimed at safeguarding fundamental rights and promoting innovation:
- Transparency and Accountability: Developers of high-risk AI systems must provide detailed documentation and ensure their systems are transparent and accountable.
- Risk Management: High-risk AI applications must undergo rigorous risk assessments and implement mitigation measures.
- General Purpose AI (GPAI): Developers of GPAI, such as large language models, will face specific transparency requirements and may need to undertake risk assessments.
Industry Reactions
The tech industry has been closely watching the development of the AI Act. Major companies, including Apple, Meta, and Amazon, have expressed their commitment to complying with the new regulations. OpenAI, the developer of the GPT models, has stated its intention to work closely with the EU AI Office to ensure compliance and to provide guidance to other AI developers.
Looking Ahead
The AI Act represents a forward-looking approach to AI regulation, balancing the need for innovation with the imperative to protect fundamental rights. As the world watches, the EU’s pioneering efforts could serve as a model for other regions seeking to regulate AI in a way that fosters trust and safety.
This is just the beginning of a new era in AI governance. The coming months and years will be crucial as the AI Act’s provisions are implemented and refined, shaping the future of artificial intelligence not only in Europe but around the world.