A closer look at the EU AI Act: protection against manipulation and abuse

What you need to know about the ban on AI that ‘substantially disrupts’ human behavior

Recently, much attention has gone to the EU AI Act's requirement to ensure sufficient AI literacy (if you want to learn more about that, check out our whitepaper). With all the focus on AI literacy, the ban on certain AI systems, which also took effect on February 2, has been somewhat overshadowed. Under this ban, practices such as social scoring and predictive policing are now prohibited, as is real-time facial recognition in public spaces, with only a few narrow exceptions.

March 6th, 2025   |   Blog   |   By: Friso Spinhoven



Two key AI bans that require extra attention

The EU AI Act prohibits two types of AI applications whose scope is somewhat unclear and which therefore require extra attention:

  1. Manipulative AI systems
  2. AI systems that exploit vulnerabilities related to age, disability, or socio-economic circumstances

Why this ban?

The purpose of these bans is to protect human autonomy—the ability to make independent decisions. This is essential for personal growth and development, allowing individuals to make choices and take responsibility for them. The Cambridge Analytica scandal, in which voters were manipulated on a large scale, likely played a role in the introduction of these restrictions.

Where does the EU AI Act draw the line?

It is sometimes difficult to distinguish between legitimate persuasion and prohibited manipulation. The EU AI Act clarifies that normal and lawful marketing practices, such as advertising, do not count as manipulation. It does, however, give some examples of techniques that may cross the line, including:

  • Stimuli targeting the subconscious
  • Brain-machine interfaces
  • Virtual reality applications

Unfortunately, the EU AI Act does not offer much concrete guidance on where exactly the boundary lies. It simply states that an AI system crosses the line if it interferes with a person’s ability to make an informed decision or substantially disrupts their behavior. Additionally, this must result in significant financial, physical, or psychological harm - a concept that remains open to interpretation.

Real-life examples

I personally suspect that social media algorithms, which determine what posts appear in your news feed, could potentially fall under this ban. This applies especially to vulnerable groups, such as young people, whose capacity for critical reflection is still developing. If you search online for "social media algorithms addiction", you'll find plenty of scientific publications on the negative effects of such AI applications. But we will have to wait and see how courts apply these rules in practice.

Other examples include:

  • Addictive gaming algorithms: Various games use AI algorithms to get players - especially young people - hooked. They employ techniques such as loot boxes and time-limited rewards to psychologically influence vulnerable users and maximize their spending.
  • AI in online gambling platforms: Gambling platforms use AI systems to analyze gambling behavior and offer personalized promotions to people who are susceptible to gambling addiction. This makes it difficult for users to control their gambling behavior, which can lead to severe financial and psychological harm.

On February 4, 2025, the European Commission published guidelines providing further explanation of the prohibited applications. While still quite abstract, these guidelines do include some useful examples that can help interpret the AI Act.

How to avoid violating the law?

So is there really nothing organizations can do to avoid unintentionally violating this ban? I believe there is. As with responsible AI use in general, it is important to involve the people who experience the effects of an AI deployment.

Ask them about their views on plans for a particular AI application and what advantages and disadvantages they see. Consider how to maximize the benefits and minimize the drawbacks. A so-called "moral deliberation" can help with this, but in any case, make sure you map out all relevant interests and make a justifiable trade-off between them.

A good benchmark is whether you are willing and able to publicly justify your decision. If so, then the chance that you are engaged in a prohibited AI practice is small.