The EU Artificial Intelligence Regulation (“Regulation”), proposed by the European Commission (“Commission”) and adopted by the European Parliament and the Council, entered into force on August 1, 2024; its obligations for general-purpose AI (“GPAI”) models became applicable across the European Union on August 2, 2025.

Emerging during a period of rapid technological advancement, the Regulation stands out as a significant legal framework introduced by the European Union to govern the rapidly evolving field of artificial intelligence.

1. Main Objectives and Scope of the EU AI Regulation

• The Regulation aims to ensure the safe, transparent, and ethical use of AI systems, protecting individuals’ fundamental rights while promoting technological innovation. It also seeks to improve the functioning of the internal market and support the adoption of human-centered and trustworthy AI technologies.

• It prioritizes safeguarding health, safety, fundamental rights, the environment, democracy, and the rule of law against the potentially harmful effects of AI. The Regulation adopts a risk-based approach and applies to AI systems placed on the market or used in the EU, regardless of whether their providers are established inside or outside the Union, with the aim of creating an innovation environment aligned with ethical standards. It imposes binding obligations on developers, providers, importers, and users of AI systems, with the goal of establishing a global standard.

2. Risk Categories

The Regulation classifies AI systems into four main risk categories based on their potential impact on society. Each category is subject to different obligations and rules.

Unacceptable Risk AI Systems: This category covers AI practices that pose a clear threat to fundamental rights, safety, and democratic values. Examples include social scoring systems, technologies that manipulate individual behavior, real-time remote biometric identification in publicly accessible spaces (subject to narrow exceptions), and emotion recognition systems in workplaces and educational institutions. These practices are banned outright in the EU, and their use is subject to severe penalties.

High-Risk AI Systems: AI systems used in critical infrastructure (e.g., water, energy, transportation), healthcare, education, employment, justice, and border management are classified as high-risk. These systems are subject to strict obligations, including conformity assessments, transparency, data governance, and human oversight, to ensure they are trustworthy, transparent, and accountable.

Limited Risk AI Systems: Systems subject to specific transparency obligations fall into this category, for example chatbots, deepfake content, and biometric categorization systems. Users must be informed that they are interacting with an AI system or that content is AI-generated, and such content must be clearly labeled.

Low-Risk AI Systems: Systems with minimal risk to human rights or safety, such as spam filters or AI-powered video games, fall into this category. These systems are not subject to specific obligations beyond general legal compliance and can be used freely.

3. Oversight and Enforcement Structure

To oversee the implementation of the Regulation, the European AI Office (“AI Office”) and the European Artificial Intelligence Board (“AI Board”) have been established. The AI Office has direct authority over general-purpose AI models and systemic risks, while the AI Board, composed of member state representatives, coordinates the consistent application of the Regulation and works with national market surveillance authorities. In addition, the European Data Protection Supervisor (EDPS) oversees AI systems used by EU institutions, bodies, and agencies. These structures support the Regulation’s risk-based approach, and the governance and penalty provisions they administer have applied since August 2, 2025.

4. Conclusion

The Regulation is a pioneering framework aimed at ensuring the ethical and safe use of AI, protecting individuals’ rights while fostering technological innovation. Through its risk-based approach, it categorizes AI systems into unacceptable, high, limited, and low-risk groups, applying appropriate obligations to each. The Regulation provides a binding framework for AI developers, providers, and users both within and outside the EU, marking a significant step toward an ethical and safe transformation in the global AI ecosystem.