AI practices banned as of 2 February 2025 | In Principle



The EU’s Artificial Intelligence Act prohibits certain particularly harmful AI practices. These provisions will begin to apply from 2 February 2025. Non-compliance can attract an administrative fine of up to EUR 35 million or, in the case of a company, up to 7% of its total annual worldwide turnover from the previous fiscal year, whichever is higher.

Pursuant to Art. 5 of the AI Act (Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence), the following practices (among others) are prohibited:

  1. The placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken, in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm.

    An example given in a brochure from the Polish Ministry of Digital Affairs is an AI system in a shopping mall that uses subliminal techniques, such as emitting faint sounds and images in its advertising.
     
  2. The placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective or the effect of materially distorting the behaviour of that person or a person belonging to that group, in a manner that causes or is reasonably likely to cause that person or another person significant harm.

    The brochure gives the example of an AI "personal assistant" for the elderly that uses advanced manipulation techniques.
     
  3. The placing on the market, the putting into service or the use of AI systems for the evaluation or classification of natural persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or predicted personal or personality characteristics, with the social score leading to the specific detrimental effects set out in the AI Act.
     
  4. The placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
     
  5. The placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons.

    An example from the ministry brochure is using cameras recording facial expressions to draw conclusions about employees’ emotions, and creating emotional profiles of employees.
     
  6. The placing on the market, the putting into service for this specific purpose, or the use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.

As an example, the brochure cites an AI system used at the workplace which draws conclusions about an employee’s political beliefs based on facial expressions and categorises employees on that basis.

These prohibitions may prove particularly significant for institutions in the financial sector, as well as for employers planning to use AI systems in the workplace.

To mitigate the risk of liability for violating these prohibitions, entities deploying AI systems in their operations should verify whether they are using AI systems that may be deemed banned under the AI Act. If so, they should take appropriate steps to stop using these systems or modify how they operate accordingly.

Karolina Romanowska, adwokat, Łukasz Rutkowski, attorney-at-law, Data Protection practice, Wardynski & Partners