A closer look at the EU AI Act: clarification or confusion over the definition of AI?
What is and isn’t considered AI according to the European Commission?
On February 6, the European Commission (EC) finally published its long-awaited clarification on the definition of an AI system as included in the AI Act. An important initiative, because a clear definition is essential for understanding what does and does not fall under the AI Act. In some respects, the EC guidelines do indeed provide more clarity. But on perhaps the most important point, the opposite seems to be the case. That is unfortunate, because this may actually lead to more debate about the scope of the AI Act.
March 6th, 2025 | Blog | By: Friso Spinhoven

Breakdown of the AI system definition
The AI Act defines an AI system as follows:
"A machine-based system designed to operate with different levels of autonomy and that may exhibit adaptability after deployment, and that, for explicit or implicit objectives, infers how to generate output from received input, such as predictions, content, recommendations, or decisions that may affect physical or virtual environments."
The EC begins by breaking down the definition in the AI Act into seven components, and then discusses them one by one. These seven components are:
- A machine-based system
- That operates with different levels of autonomy
- That may exhibit adaptability after deployment
- That serves implicit or explicit objectives
- That uses AI techniques
- To generate output
- That may affect the physical or virtual environment
1. Performing calculations with a machine-based system
According to the EC, this includes both the hardware and software on which an AI system runs. It refers to systems that can perform calculations, including quantum computers. But, and this is striking, the EC also considers biological and organic systems to fall under this, as long as they provide computational capacity. This is a far-reaching interpretation, because our brains are also, to a large extent, "computers." Suppose an interface is created between our brains and AI systems (something Elon Musk’s Neuralink is actively working on). Then, according to this definition, it could become difficult to determine where artificial intelligence ends and human intelligence begins.
2. Different levels of autonomy
The degree to which an AI system operates autonomously is closely tied to the degree of human involvement. The AI Act clearly does not apply to systems that are fully controlled by a human; that much is obvious. It becomes less clear when the EC states that human involvement can be both "direct" and "indirect." By indirect involvement, the EC means "automated system-based control mechanisms that enable humans to delegate or oversee the functioning of systems." That wording already leaves much to be desired in terms of clarity. Worse, it makes it even harder to determine when human involvement is "full" enough to place a system outside the Act.
3. Possible adaptations through learning ability
On this point, the EC is brief: adaptability refers to the learning ability of AI systems, allowing them to adjust their behavior (output) during use. Here, the term machine learning (ML) immediately comes to mind. But systems without adaptability can also be considered AI. And, as becomes apparent later, even systems that use ML are not always AI. Or are they? The EC seems to be wavering between two positions on this point.
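To make the adaptability criterion concrete, here is a minimal, hypothetical sketch (not taken from the EC guidelines): a fraud-style flagging rule that never changes after deployment, next to one that adjusts its threshold based on the data it observes while in use. Only the second exhibits the "adaptability after deployment" the definition refers to. All names and numbers are illustrative assumptions.

```python
class FixedRule:
    """Applies the same human-defined threshold forever; no adaptability."""

    def flag(self, amount: float) -> bool:
        return amount > 100.0  # this rule never changes after deployment


class AdaptiveRule:
    """Adjusts its threshold from the data it observes while in use."""

    def __init__(self) -> None:
        self.seen: list[float] = []

    def observe(self, amount: float) -> None:
        self.seen.append(amount)

    def flag(self, amount: float) -> bool:
        if not self.seen:
            return amount > 100.0  # fall back to the fixed rule before any data
        # threshold drifts toward twice the running mean of observed amounts
        threshold = 2 * sum(self.seen) / len(self.seen)
        return amount > threshold


fixed, adaptive = FixedRule(), AdaptiveRule()
for amount in (10.0, 12.0, 11.0, 50.0):
    adaptive.observe(amount)

# The fixed rule still compares against 100.0; the adaptive one has
# lowered its threshold to reflect the amounts seen during use.
print(fixed.flag(60.0))     # False: 60 < 100, rule unchanged
print(adaptive.flag(60.0))  # True: threshold is now 2 * 20.75 = 41.5
```

The behavioral difference is the point: both systems are "machine-based," but only the second changes its output-generating rule after deployment.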
4. Implicit or explicit objectives
Again, the EC spends few words on this. The guidelines clarify that AI systems are designed to achieve certain objectives, and that these objectives can be explicitly programmed. They can also be implicit, in the sense that they can be inferred from the behavior of the AI system or its underlying assumptions, and they can stem from the training data or from the system's interaction with its environment. What the distinction between explicit and implicit objectives means for the definition of an AI system, however, is never really made clear.
5. AI techniques to generate output
This component gets by far the most attention, because it is essential for distinguishing AI from other algorithms. According to the AI Act, the latter category includes systems that operate solely based on human-defined rules executed automatically, whereas AI is capable of inferring from input data how to generate output. This is perhaps where the strongest link to intelligence lies: AI can interpret past patterns and predict what will happen in its environment. Our brains are constantly engaged in exactly this. This is, by the way, just one of many possible ways to look at the concept of intelligence, and certainly not meant as a complete or definitive definition. The term "intelligence" is perhaps just as difficult to define as the term "artificial intelligence," but Wikipedia aptly summarizes it as: "The capacity to perceive, process, reason, draw conclusions, and generate thoughts."
But where exactly the boundary lies between AI and "regular" algorithms remains unclear according to the guidelines. The EC discusses various ML techniques extensively and seems to suggest that ML automatically means a system is AI. But then, the EC places certain ML systems outside the definition of AI because of their "limited capacity to analyze patterns and independently adjust their output." This creates new questions about the definition of AI and leaves borderline cases unresolved.
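The EC's dividing line between "human-defined rules executed automatically" and "inferring from input how to generate output" can be sketched in a few lines. The example below is purely illustrative and appears nowhere in the Act or the guidelines: the same price-estimation task is solved once with coefficients written down by a programmer, and once with coefficients inferred from example data via ordinary least squares. All function names and figures are invented for the illustration.

```python
def rule_based_price(size_m2: float) -> float:
    """Every step is specified by a human; the system only executes it."""
    return 50_000 + 3_000 * size_m2  # coefficients chosen by the programmer


def fit_price_model(sizes: list[float], prices: list[float]):
    """Infers slope and intercept from examples (ordinary least squares)."""
    n = len(sizes)
    mean_x = sum(sizes) / n
    mean_y = sum(prices) / n
    slope = (
        sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, prices))
        / sum((x - mean_x) ** 2 for x in sizes)
    )
    intercept = mean_y - slope * mean_x
    return lambda size: intercept + slope * size


# The learned model's coefficients come from the data, not the programmer.
learned_price = fit_price_model(
    sizes=[40.0, 60.0, 80.0, 100.0],
    prices=[170_000.0, 230_000.0, 290_000.0, 350_000.0],
)

print(rule_based_price(70.0))  # 260000.0
print(learned_price(70.0))     # 260000.0, but inferred from the examples
```

Both functions produce the same estimate here, which is precisely why the boundary is hard to police in practice: what matters under the guidelines is not the output but whether the mapping from input to output was specified by a human or inferred by the system.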
6. Output of an AI system
The guidelines are based on the four examples given in the AI Act for the output of an AI system: predictions, content, recommendations, and decisions. According to the EC, this output is generated using machine learning (ML), logic, and knowledge-based approaches. At the same time, the guidelines state in various examples from section 5 that this does not necessarily mean the system qualifies as AI. Beyond distinguishing between "identifying complex correlations" and "static, rule-based mechanisms with limited data", the guidelines do not go much further.
7. Impact on the physical or virtual environment
Finally, the EC states that an AI system actively influences its environment. However, this does not provide much clarity, as non-AI algorithms can also impact their environment - as was painfully demonstrated in the Dutch childcare benefits scandal. While the EC acknowledges this, it remains unclear how this criterion can practically help identify AI.
Conclusion
The EC rightly states that whether a system qualifies as AI should be assessed on a case-by-case basis, but unfortunately, the guidelines do not provide much further clarity. This is partly because AI, just like intelligence itself, is difficult to capture in a definition. But it is also due to the inconsistencies and ambiguities within the guidelines themselves.
As a result, borderline cases will continue to arise regularly, especially for systems with a high degree of human involvement and for non-learning systems. And the EC states that even ML systems do not always qualify as AI. This goes against the common interpretation and could have the undesirable effect of providers trying to evade the obligations of the AI Act by claiming that their model is too simple to be considered AI.