Blog 16/4/2025
Ana Martins, Managing Director – Compliance, Governance & Sustainability at Timestamp, explains the guidelines on prohibited AI practices from the European Commission.
The Artificial Intelligence Regulation (Regulation (EU) 2024/1689, known as the "AI Act") entered into force in August 2024.
On 2 February 2025, the first general provisions of the Regulation became applicable, establishing strict rules for the development, deployment and use of artificial intelligence systems.
On 4 February 2025, the European Commission published the "Guidelines on Prohibited AI Practices". Although not legally binding, these guidelines provide detailed directions and practical examples to standardise the interpretation of Article 5 of the Artificial Intelligence Regulation across the European Union.
These prohibitions apply to AI systems that violate core principles such as human dignity, privacy and non-discrimination.
These correspond to practices that pose a risk to the fundamental rights and values of the EU and may cause severe physical, psychological, financial or economic harm.
Manipulation occurs when an AI system influences a person’s thinking or actions in a covert way, without them realising they are being steered towards a specific choice.
This prohibition covers systems that exploit the vulnerabilities of groups such as children, the elderly, persons with disabilities or individuals in situations of social and economic disadvantage.
The prohibition exists because these groups may find it more difficult to recognise or resist manipulations, making them easy targets for abuse and exploitation by AI systems.
This point prohibits systems that assess or classify individuals based on their social behaviour, personal characteristics or history, resulting in unequal treatment or discrimination.
The European Union’s concern is that AI must not be used to categorise citizens unfairly, restricting their rights or opportunities based on arbitrary scoring, as seen in some authoritarian regimes.
This prohibition covers the use of facial recognition and other forms of biometric surveillance in public spaces without specific legal justification.
The use of AI to collect and extract large volumes of facial images from public sources, such as the internet and CCTV footage, without a specific legal basis, is prohibited.
The use of emotion recognition systems in workplace and educational settings is prohibited, as it constitutes a privacy violation and may lead to discrimination or excessive control.
The AI Act prohibits biometric categorisation systems that use biometric data to infer individuals’ sensitive attributes, such as race, political opinions, trade union membership, religious beliefs, sex life or sexual orientation.
The AI Act generally prohibits the use of real-time remote biometric identification (RBI) systems in public spaces for law enforcement purposes.
However, there are specific exceptions in which this technology may be used, provided it is authorised under national legislation and all regulatory conditions and safeguards are met.
The European Commission has prohibited these systems because they are considered to violate fundamental rights such as human dignity, privacy and non-discrimination.
Penalties for non-compliance with the rules on prohibited AI practices are the most severe set out in the AI Act, reflecting the seriousness of these violations.
Providers and users that develop or use prohibited AI systems may be fined up to EUR 35 million or 7% of the infringing company’s total worldwide annual turnover, whichever is higher.
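The "whichever is higher" rule means the EUR 35 million figure acts as a floor: for large companies, 7% of turnover is the binding cap. A minimal sketch of that calculation (the function name is ours, for illustration only):

```python
# Illustrative only: the upper bound of fines for prohibited AI practices
# under the AI Act, applying the "whichever is higher" rule described above.
def max_prohibited_practice_fine(worldwide_annual_turnover_eur: float) -> float:
    """Return EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# For a company with EUR 1 billion turnover, 7% (EUR 70 million) exceeds
# the EUR 35 million floor:
print(max_prohibited_practice_fine(1_000_000_000))  # 70000000.0
```

For a company with turnover below EUR 500 million, 7% falls under EUR 35 million, so the fixed amount is the higher figure.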
These stringent penalties are justified by the high risk these practices pose to fundamental rights, safety and the values of the European Union. The regulation aims to prevent significant harm resulting from the abusive use of AI.
The imposition of heavy sanctions is intended to deter companies and public bodies from engaging in these illegal practices and to ensure a consistent level of protection and compliance throughout the European Union.
Companies that develop, import, distribute, implement or use Artificial Intelligence within the European Union must ensure that their systems do not violate the prohibited practices set out in the AI Regulation (EU) 2024/1689.
Any system that falls within the described scenarios may not be marketed or used in the EU, and failure to comply with these rules may result in severe fines and legal sanctions.
In this context, it is essential that companies carry out a detailed assessment of their AI systems to ensure full legal compliance.
Companies must map, analyse and classify their systems by risk level to determine whether they fall under the prohibited practices, and adopt the necessary measures accordingly.
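As a first triage step, an AI-system inventory can be screened against the prohibition categories discussed above. The sketch below is our own simplification, not Timestamp's methodology; the category labels paraphrase the Article 5 practices:

```python
# Hypothetical first-pass triage of an AI-system inventory against the
# Article 5 prohibition categories (labels are our own paraphrases).
PROHIBITED_CATEGORIES = {
    "subliminal_manipulation",
    "exploitation_of_vulnerabilities",
    "social_scoring",
    "untargeted_facial_image_scraping",
    "emotion_recognition_work_or_education",
    "biometric_categorisation_sensitive_attributes",
    "realtime_remote_biometric_identification",
}

def classify_system(name: str, categories: set[str]) -> str:
    """Flag a system as prohibited if it matches any Article 5 category."""
    matched = categories & PROHIBITED_CATEGORIES
    if matched:
        return f"{name}: PROHIBITED ({', '.join(sorted(matched))})"
    return f"{name}: requires further risk classification"

print(classify_system("HR sentiment monitor", {"emotion_recognition_work_or_education"}))
print(classify_system("Invoice OCR", {"document_processing"}))
```

Systems that clear this screen are not automatically compliant: they still need to be classified into the AI Act's remaining risk tiers and assessed against the corresponding obligations.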
We offer a structured methodology to map, analyse and classify your organisation’s AI systems, identifying relevant critical points and associated risks in line with the regulatory framework.
Timestamp provides a 360° approach to Artificial Intelligence, covering Regulatory Compliance; Technological and Functional Consulting; and Technological Solutions, supporting your company throughout the entire AI project lifecycle, from diagnosis, design, development and system implementation, to monitoring.
We tailor Artificial Intelligence to your business strategy, requirements and needs, in an ethical and responsible way, ensuring best industry practices, leading technologies and regulatory alignment.
Find out more about how we can help your company: Privacy & Digital Security | Timestamp and Data & AI | Timestamp
By: Ana Martins | Managing Director – Compliance, Governance & Sustainability