Artificial Intelligence Regulation: The European Commission's Guidelines on Prohibited Practices

Blog | 16 April 2025

Ana Martins, Managing Director – Compliance, Governance & Sustainability at Timestamp, explains the European Commission's guidelines on prohibited AI practices.

The Artificial Intelligence Regulation (Regulation (EU) 2024/1689, known as the "AI Act") entered into force on 1 August 2024.

On 2 February 2025, the first provisions of the Regulation became applicable, establishing strict rules for the development, deployment and use of artificial intelligence systems.

On 4 February 2025, the European Commission published its "Guidelines on Prohibited AI Practices". Although not legally binding, these guidelines provide detailed guidance and practical examples to standardise the interpretation of Article 5 of the Artificial Intelligence Regulation across the European Union.

These prohibitions apply to AI systems that violate core principles such as human dignity, privacy and non-discrimination.

1 - WHAT ARE THE AI PRACTICES PROHIBITED UNDER THE ARTIFICIAL INTELLIGENCE REGULATION?

These are practices that pose a risk to the fundamental rights and values of the EU and may cause severe physical, psychological, financial or economic harm.

2 - WHICH AI PRACTICES ARE PROHIBITED UNDER THE AI ACT?

a)    Cognitive and Behavioural Manipulation

Manipulation occurs when an AI system influences a person’s thinking or actions in a covert way, without them realising they are being steered towards a specific choice.

Examples of prohibited practices:

  • Algorithms that manipulate emotions to induce compulsive purchases;
  • AI that exploits addictions or psychological vulnerabilities to create dependency;
  • Systems that covertly influence voting or political opinions.

Examples of permitted exceptions:

  • Brain-machine interfaces and neuro-assistant technologies are permitted if ethically designed and privacy-respecting.
  • Lawful persuasion is also allowed, provided it is conducted transparently and respects the user’s freedom of choice.
  • Personalisation based on informed consent is allowed. For example, a personalised learning system may adjust content based on student performance, as long as the user maintains control over their choices.

b)    Exploitation of Vulnerabilities of Specific Groups through AI

This prohibition covers systems that exploit the vulnerabilities of groups such as children, the elderly, persons with disabilities or individuals in situations of social and economic disadvantage.

The prohibition exists because these groups may find it more difficult to recognise or resist manipulation, making them easy targets for abuse and exploitation by AI systems.

Examples of prohibited practices:

  • AI-enabled toys that manipulate children to obtain information or influence behaviour;
  • Credit systems that economically exploit the elderly or low-income individuals.

Examples of permitted exceptions:

  • AI applications that support children's learning, such as interactive platforms and virtual tutors, provided they do not manipulate behaviour or foster harmful dependency;
  • Health assistants and support robots for the elderly and people with disabilities;
  • Systems supporting social and economic integration.

c)    Social Scoring Systems

This provision prohibits systems that assess or classify individuals based on their social behaviour, personal characteristics or history, resulting in unequal treatment or discrimination.

The European Union’s concern is that AI must not be used to categorise citizens unfairly, restricting their rights or opportunities based on arbitrary scoring, as seen in some authoritarian regimes.

Examples of prohibited practices:

  • AI systems that restrict civil liberties based on personal history (such as China’s social credit system);
  • Systems that deny essential services (healthcare, credit, education) based on behavioural scores;
  • “Social reputation” systems that limit access to jobs and housing.

Examples of permitted exceptions:

  • Companies may use AI to assess the financial fraud risk of customers, provided the data is relevant (e.g., transactional behaviour and metadata within the context of the service);
  • Systems analysing medical and behavioural data (e.g., schizophrenia diagnosis based on patient behaviour) are not prohibited, provided the assessment is relevant, necessary and does not result in unjustified treatment.

d)    Real-time Biometric Surveillance in Public Spaces

The use of facial recognition and other forms of biometric surveillance in public spaces without specific legal justification is prohibited.

Examples of prohibited practices:

  • Facial recognition for mass monitoring without consent;
  • Systems tracking citizens in real time without valid legal grounds.

Examples of permitted exceptions:

  • If judicially authorised for specific cases of public security threat (e.g. terrorism prevention);
  • Specific applications of systems for locating missing persons.

e)    Indiscriminate Facial Image Scraping

The use of AI to collect and extract large volumes of facial images from public sources, such as the internet and CCTV footage, without a specific legal basis, is prohibited.

Examples of prohibited practices:

  • Creating facial recognition databases by automatically extracting images from the internet using AI without individuals’ consent;
  • Using systems to collect facial images from public surveillance footage without legal authorisation, particularly for general population monitoring.

Examples of permitted exceptions:

  • Collection carried out in a targeted and justified manner, as part of a specific criminal investigation, with judicial authorisation;
  • Specific system applications for security purposes, provided they respect fundamental rights and comply with data protection legislation.

f)    Emotion Recognition in Sensitive Contexts (Work and Education)

The use of emotion recognition systems in workplace and educational settings is prohibited, as it constitutes a privacy violation and may lead to discrimination or excessive control.

Examples of prohibited practices:

  • Monitoring employees’ emotional states at work through AI systems to assess productivity or behaviour;
  • Using AI to analyse students’ facial expressions or tone of voice in the classroom to assess attention or engagement, without valid justification.

Examples of permitted exceptions:

  • Use justified on medical grounds, such as supporting patients with communication difficulties;
  • Use required for safety purposes, e.g., to prevent accidents, such as detecting extreme fatigue in drivers or workers in high-risk environments.

g)    Biometric Categorisation Based on Sensitive Data

The AI Act prohibits biometric categorisation systems that use biometric data to infer individuals’ sensitive attributes, such as race, political opinions, trade union membership, religious beliefs, sex life or sexual orientation.

Examples of prohibited practices:

  • An AI system claiming to deduce a person’s race based on their voice;
  • An AI system analysing tattoos or facial features to infer religious beliefs;
  • An AI system attempting to categorise a user’s sexual orientation based on photos shared online.

Examples of permitted exceptions:

  • The use of biometric data for medical diagnoses using AI systems may be permitted, e.g., image-based analysis for detecting skin diseases or genetic conditions related to skin or eye colour;
  • The use of biometric categorisation to ensure demographic representativeness in databases may be permitted, provided it complies with data protection regulations.

h)    Real-time Biometric Identification for Law Enforcement

The AI Act generally prohibits the use of remote real-time biometric identification (RBI) systems in public spaces for law enforcement purposes.

However, there are specific exceptions in which this technology may be used, provided it is authorised under national legislation and all regulatory conditions and safeguards are met.

Examples of prohibited practices:

  • Police install CCTV cameras equipped with real-time facial recognition in multiple city locations, including places of worship, minority neighbourhoods and commercial establishments;
  • Authorities use a real-time AI facial recognition system to identify participants in a political protest, based on images captured by city-installed cameras;
  • During a football match, an emotion recognition system analyses spectator behaviour to predict potential violent incidents. The system automatically activates RBI to identify supporters previously involved in disturbances.

Examples of permitted exceptions:

  • RBI systems used to locate victims of serious crimes and missing persons, such as in cases of human trafficking, child sexual exploitation or abduction, where there is a well-founded suspicion of imminent danger;
  • RBI systems used to prevent specific and imminent threats to individuals’ physical safety, including terrorist threats, provided they are duly justified.

3 - WHY ARE THESE PRACTICES PROHIBITED UNDER THE AI REGULATION?

The AI Act prohibits these practices because they are considered to violate fundamental rights such as:

  • Right to privacy and data protection;
  • Freedom of thought and free decision-making autonomy;
  • Non-discrimination and equal treatment;
  • Safety and protection from technological abuse.

4 - WHAT ARE THE PENALTIES FOR NON-COMPLIANCE WITH THE AI REGULATION'S RULES ON PROHIBITED PRACTICES?

Penalties for non-compliance with the rules on prohibited AI practices are the most severe set out in the AI Act, reflecting the seriousness of these violations.

Providers and deployers that develop or use prohibited AI systems may be fined up to EUR 35 million or 7% of the infringing company's total worldwide annual turnover for the preceding financial year, whichever is higher.
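
To illustrate the "whichever is higher" rule, here is a minimal sketch in Python; the thresholds come from the paragraph above, while the function name and turnover figure are hypothetical:

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of a fine for prohibited-practice infringements:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical company with EUR 1 billion annual turnover:
# 7% (EUR 70 million) exceeds EUR 35 million, so the percentage applies.
print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")  # EUR 70,000,000
```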

These stringent penalties are justified by the high risk these practices pose to fundamental rights, safety and the values of the European Union. The regulation aims to prevent significant harm resulting from the abusive use of AI.

The imposition of heavy sanctions is intended to deter companies and public bodies from engaging in these illegal practices and to ensure a consistent level of protection and compliance throughout the European Union.

5 - HOW TO ACT AND ENSURE COMPLIANCE WITH THE AI REGULATION?

Companies that develop, import, distribute, implement or use Artificial Intelligence within the European Union must ensure that their systems do not violate the prohibited practices set out in the AI Regulation (EU) 2024/1689.

Any system that falls within the described scenarios may not be marketed or used in the EU, and failure to comply with these rules may result in severe fines and legal sanctions.

In this context, it is essential that companies carry out a detailed assessment of their AI systems to ensure full legal compliance.

Companies must map, analyse and classify their systems by risk to determine whether they fall under the prohibited practices, and adopt the necessary measures accordingly.
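
As a purely illustrative sketch of a first-pass triage of such an inventory, the Python snippet below tags systems against the Article 5 categories summarised in this article; the tag names, function and example entries are hypothetical and do not constitute an official taxonomy or legal advice:

```python
# Illustrative tags for the Article 5 prohibited-practice categories
# covered in this article (hypothetical naming, not an official list).
PROHIBITED_TAGS = {
    "cognitive_behavioural_manipulation",   # a)
    "exploitation_of_vulnerabilities",      # b)
    "social_scoring",                       # c)
    "realtime_biometric_surveillance",      # d) and h)
    "facial_image_scraping",                # e)
    "emotion_recognition_work_education",   # f)
    "sensitive_biometric_categorisation",   # g)
}

def triage(system_tags: set[str]) -> str:
    """Flag a system as 'prohibited' if it matches any Article 5 category;
    every other system still needs a full risk classification (high,
    limited or minimal risk) under the AI Act."""
    return "prohibited" if system_tags & PROHIBITED_TAGS else "needs risk assessment"

# Hypothetical inventory entries:
print(triage({"emotion_recognition_work_education"}))  # prohibited
print(triage({"document_ocr"}))                        # needs risk assessment
```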

6 - HOW CAN TIMESTAMP HELP?

We offer a structured methodology to map, analyse and classify your organisation’s AI systems, identifying relevant critical points and associated risks in line with the regulatory framework.

Timestamp provides a 360° approach to Artificial Intelligence, covering Regulatory Compliance; Technological and Functional Consulting; and Technological Solutions, supporting your company throughout the entire AI project lifecycle, from diagnosis, design, development and system implementation, to monitoring.

We tailor Artificial Intelligence to your business strategy, requirements and needs, in an ethical and responsible way, ensuring best industry practices, leading technologies and regulatory alignment.

Find out more about how we can help your company: Privacy & Digital Security | Timestamp and Data & AI | Timestamp

 By: Ana Martins | Managing Director – Compliance, Governance & Sustainability
