EU AI Act: what it is and how it protects users

18 December 2025

Technological advancements in recent years, especially in the field of artificial intelligence, demonstrate the wide potential of AI to revolutionize any sector, including education.

However, measures are necessary to ensure the ethical use of technology, protect fundamental rights such as privacy, and prevent discriminatory practices, as well as other associated risks.

In Europe, the Artificial Intelligence Act (EU AI Act) is the regulation that governs the use of AI. Digital exam monitoring systems that use artificial intelligence, such as SMOWL, must comply with it.

What is the EU Artificial Intelligence Act?

The European AI regulation, which entered into force in 2024 with full application in 2026, is the world’s first comprehensive artificial intelligence law. This initial legal framework in the field aims to mitigate the risks of AI to foster appropriate and reliable use in Europe.

Although, as the European Commission indicates, most artificial intelligence systems present a limited risk and can help solve many challenges, their risks must be addressed to avoid undesirable consequences, such as unfair decision-making.

Europe’s AI Act defines four risk levels:

  • Unacceptable
  • High
  • Limited
  • Minimal

Based on these four levels, it establishes different requirements and obligations.

European AI Act: background

As part of its digital strategy, the European Union proposed regulating the use of AI in 2021. In April of that year, the Commission proposed the first artificial intelligence regulation, classifying systems by risk levels; the greater the risk, the greater the requirements for the systems.

Entry into force of the EU AI Act

The EU Council approved the regulation in May 2024. However, the European AI law would not become fully applicable until 24 months later, with some provisions applying sooner than that deadline.

Furthermore, it was agreed that obligations for high-risk systems would be applied in two phases (most in 24 months, and those integrated into specific regulated products, in 36 months).

The 4 risk levels of the EU AI Act

As indicated above, Europe’s AI Act differentiates between four distinct risk levels. Let’s look at each level, one by one.

Unacceptable risk

Artificial intelligence systems that pose an unacceptable risk are prohibited in the EU. These include:

  • Systems that manipulate the behavior of people or specific vulnerable groups, such as voice-activated toys that encourage dangerous behavior in young children.
  • Social scoring systems that classify people based on their socioeconomic status, behavior, or personal characteristics.
  • Real-time and remote biometric identification systems, such as facial recognition in public spaces.
  • Emotion recognition systems in workplaces and educational institutions.
  • Untargeted scraping of facial images from the internet or security-camera footage to create or expand facial recognition databases.
  • Systems that evaluate or predict the risk of an individual committing a criminal offense.

High risk

Systems that negatively affect fundamental rights or safety are considered high-risk. They are divided into two different categories:

  • AI systems used in products subject to EU product safety law (cars, aviation, medical devices, toys, and lifts).
  • AI systems categorized in 8 specific areas:
    • Education and vocational training.
    • Biometric identification and categorization of natural persons.
    • Management and operation of critical infrastructure.
    • Employment, worker management, and access to self-employment.
    • Law enforcement.
    • Assistance in the interpretation and the application of the law.
    • Access to and use of essential private and public services and benefits.
    • Management of migration, asylum, and border control.

AI systems identified as high-risk must comply with a series of obligations before they can be placed on the market.

Limited risk

This category primarily includes generative artificial intelligence systems like ChatGPT, which must comply with the transparency requirements of the European Union’s AI Act, effective from August 2026.

The regulation contemplates specific disclosure obligations to ensure users are properly informed. For example, when interacting with AI systems like chatbots, people must be aware that they are using a machine.

But that is not all; generative AI providers must ensure that AI-generated content is identifiable. The European regulation also mandates the labeling of certain AI-generated content as such.

The European AI Act also establishes that models that may pose a systemic risk, such as the advanced GPT-4, must undergo comprehensive controls and inform the Commission of any incident considered serious.

Regarding copyright, AI system developers must design models to prevent them from generating illegal content and must publish summaries of the protected content used for training.

Minimal or no risk

The EU Artificial Intelligence Act does not impose specific requirements for systems in this category. Since they do not represent a significant danger to safety and health and do not jeopardize fundamental rights, they can be freely used, sold, and distributed within the European Union.

This classification includes systems such as video games, spam filters for emails, or movie or music recommendation systems.

EU AI Act: enforcement and penalties

Each Member State must designate one or more market surveillance authorities responsible for implementing, monitoring, and enforcing the regulation. If multiple authorities are designated, the Member State must establish a single point of contact.

These institutions have the authority to conduct remote supervision and access documentation, datasets, and source code from providers. They are also empowered to intervene when AI systems pose risks or fail to comply with the regulation.

While remaining independent and impartial in the performance of their duties, market surveillance authorities are required to inform the Commission and other Member States of any measures adopted, as well as the results of risk evaluations.

In coordination with the Commission, these authorities may propose investigations, request corrective measures, and impose penalties for non-compliance with the European Artificial Intelligence Act.

Penalties under the EU AI Act

Sanctions vary with the severity of the infringement, ranging from smaller fines to penalties of up to €35 million in the most serious cases, as detailed below:

  • Very serious infringements related to prohibited practices: up to €35 million or 7% of total annual worldwide turnover.
  • Serious infringements regarding non-compliance by providers, representatives, importers, distributors, etc.: up to €15 million or 3% of total annual worldwide turnover.
  • Minor infringements associated with providing incomplete, incorrect, or misleading information: up to €7.5 million or 1% of total annual worldwide turnover.

The maximum fines for violating the European AI Act are higher than those of the General Data Protection Regulation (GDPR), which provides for maximum penalties of €20 million or 4% of annual global turnover.
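The fine caps above can be sketched in a few lines of code. This is an illustrative calculation only, assuming the "whichever amount is higher" rule that the Act applies to most undertakings (for SMEs and start-ups, the lower of the two amounts applies instead); the tier names are hypothetical labels, not terms from the regulation.

```python
# Illustrative sketch of the EU AI Act maximum-fine tiers listed above.
# Each tier maps to (fixed cap in EUR, share of total annual worldwide turnover).
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),      # very serious infringements
    "provider_obligations": (15_000_000, 0.03),     # serious infringements
    "misleading_information": (7_500_000, 0.01),    # minor infringements
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Maximum fine for a tier, assuming the higher of the two caps applies."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# For a company with €1 billion in worldwide turnover, 7% of turnover (€70M)
# exceeds the €35M fixed cap, so the percentage-based amount governs:
print(max_fine("prohibited_practice", 1_000_000_000))  # 70000000.0
```

For smaller companies the fixed cap dominates: with €100 million in turnover, 7% is only €7 million, so the €35 million ceiling is what applies.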

EU AI Act compliance: does SMOWL adhere to the regulation?

Yes. SMOWL, as an EU-funded project and a European AI-powered proctoring solution, complies with the applicable regulation on Artificial Intelligence. 

Leyre Paniagua
Audiovisual Communication graduate (UPV), SEO copywriter, and content creator for the English-speaking markets.

Discover how SMOWL works

  1. Register in mySmowltech indicating your LMS.
  2. Check your email and follow the steps to integrate the tool.
  3. Enjoy your free trial of 25 licenses.

Request a free demo with one of our experts

In addition to showing you how SMOWL works, we will guide and advise you at all times so that you can choose the plan that best suits your company or institution.
