AI Act Prohibited Practices Guidelines
The European Commission has issued comprehensive guidelines on prohibited AI practices under the AI Act; the prohibitions apply from February 2, 2025. The guidelines interpret Article 5's prohibitions to ensure consistent application and enforcement across the Union.
Eight Prohibited AI Practices
Harmful Manipulation
AI systems deploying subliminal, manipulative or deceptive techniques causing significant harm
Exploitation of Vulnerabilities
Systems exploiting age, disability, or socio-economic vulnerabilities
Social Scoring
Evaluation systems leading to detrimental treatment in unrelated contexts
Crime Risk Prediction
Individual assessments based solely on profiling or personality traits
Additional Prohibitions
Biometric Restrictions
  • Untargeted facial image scraping from internet/CCTV
  • Emotion recognition in workplace and education (except medical/safety)
  • Biometric categorization for sensitive characteristics
  • Real-time remote biometric identification for law enforcement (limited exceptions)
Key Principles
All prohibitions aim to protect fundamental rights, including human dignity, privacy, non-discrimination, and democratic values. Violations carry fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher.
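For a rough sense of how that ceiling scales with company size, here is a minimal Python sketch (the function name and the example turnover figure are illustrative assumptions; actual penalties are set case by case by the competent authorities):

```python
def max_fine_ceiling_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound on fines for prohibited-practice violations:
    EUR 35 million or 7% of total worldwide annual turnover,
    whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Example: an undertaking with EUR 1 billion in annual turnover faces a
# ceiling of EUR 70 million, since 7% exceeds the flat amount.
print(max_fine_ceiling_eur(1_000_000_000))  # 70000000.0
```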
Harmful Manipulation & Deception
1. Subliminal Techniques: Operating beyond conscious awareness to influence behavior without detection
2. Purposeful Manipulation: Designed to exploit cognitive biases and psychological vulnerabilities
3. Deceptive Practices: Presenting false information to undermine autonomy and free choice
4. Material Distortion: Appreciably impairing the ability to make informed decisions
5. Significant Harm: Causing or likely to cause physical, psychological, or financial damage
Article 5(1)(a) and (b) prohibit AI systems that manipulate persons or exploit their vulnerabilities in a way that causes significant harm. All conditions must be met cumulatively for the prohibition to apply.
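To make the cumulative test concrete, the sketch below models it as a boolean check (the class and field names are illustrative assumptions; note that the three technique types in items 1 to 3 are alternatives under Article 5(1)(a), while material distortion and significant harm must each also be present):

```python
from dataclasses import dataclass

@dataclass
class ManipulationTest:
    """Illustrative mapping of the five elements listed above; not legal advice."""
    subliminal_techniques: bool    # item 1
    purposeful_manipulation: bool  # item 2
    deceptive_practices: bool      # item 3
    material_distortion: bool      # item 4: appreciably impairs informed decisions
    significant_harm: bool         # item 5: physical, psychological, or financial

    def prohibited(self) -> bool:
        # At least one of the three techniques, plus distortion, plus harm.
        technique = (self.subliminal_techniques
                     or self.purposeful_manipulation
                     or self.deceptive_practices)
        return technique and self.material_distortion and self.significant_harm

# Example: a deceptive system that distorts decisions but causes no
# significant harm does not meet the cumulative test.
print(ManipulationTest(False, False, True, True, False).prohibited())  # False
```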
Social Scoring Prohibition
What's Prohibited
AI systems evaluating or classifying persons based on their social behavior or personal characteristics over time, leading to detrimental treatment in unrelated social contexts or treatment disproportionate to the gravity of the behavior.
Unrelated Context Treatment
Detrimental treatment in social contexts unrelated to those in which the data was originally generated or collected
Disproportionate Consequences
Treatment that is unjustified or disproportionate to the social behavior or its gravity
Public & Private Sectors
Applies regardless of whether provided by public or private entities
Crime Prediction Systems
Prohibition Scope
Article 5(1)(d) prohibits AI systems that assess the risk of a natural person committing a criminal offense based solely on profiling or on personality traits and characteristics. Exception: systems that support a human assessment already based on objective, verifiable facts directly linked to criminal activity (a screening sketch follows at the end of this section).
Key Requirements
  • Cannot rely solely on profiling
  • Must include objective facts
  • Requires human assessment
  • Limited to natural persons
  • Maximum fine: €35 million or 7% of global turnover
  • Human oversight: required whenever the exception is relied upon
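Read together, the prohibition and its exception work as a rule with a carve-out. Below is a minimal Python sketch of that structure, with illustrative argument names (an assumption for this document, not a prescribed test; real determinations require legal review):

```python
def crime_prediction_prohibited(
    relies_solely_on_profiling_or_traits: bool,
    supports_human_assessment: bool,
    based_on_objective_verifiable_facts: bool,
) -> bool:
    """Article 5(1)(d) screening: prohibited if the risk assessment rests solely
    on profiling or personality traits, unless the system merely supports a
    human assessment already grounded in objective, verifiable facts."""
    exception_applies = supports_human_assessment and based_on_objective_verifiable_facts
    return relies_solely_on_profiling_or_traits and not exception_applies

# Example: profiling-only scoring with no human assessment falls under the prohibition.
print(crime_prediction_prohibited(True, False, False))  # True
```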
Biometric Data Protection
Untargeted Scraping Ban
Prohibited: Creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage
Emotion Recognition Limits
Banned in workplace and education except for medical or safety reasons
Sensitive Categorization
Cannot be used to infer race, political opinions, religious beliefs, or sexual orientation from biometric data
Real-Time Biometric Identification
General Prohibition with Limited Exceptions
Real-time remote biometric identification in publicly accessible spaces for law enforcement purposes is prohibited except in three strictly defined situations, each subject to prior authorization.
Victim Search
Targeted search for victims of abduction, trafficking, or sexual exploitation, and for missing persons
Threat Prevention
Preventing a specific and imminent threat to life or physical safety, or a foreseeable terrorist attack
Serious Crime
Locating or identifying suspects of offenses punishable by a maximum custodial sentence of at least four years
Safeguards & Enforcement
Required Safeguards
  • Prior judicial or independent administrative authorization
  • Fundamental rights impact assessment
  • Registration of authorized systems
  • Notification of each use
  • National law compliance
  • Annual reporting requirements
Market Surveillance
Member States must designate market surveillance authorities by August 2, 2025. These authorities enforce the prohibitions through investigations, complaint handling, and cross-border cooperation.
  • 27 Member States: coordinated enforcement
  • Entry into application: February 2, 2025
Compliance & Next Steps
Key Takeaways
1. Understand Prohibitions: Review all eight prohibited practices and assess your AI systems against each criterion (a screening sketch follows after this list)
2. Implement Safeguards: Build preventive measures into AI system design and deployment processes
3. Document Compliance: Maintain records demonstrating adherence to AI Act requirements
4. Monitor Updates: The guidelines will be regularly reviewed based on practical implementation experience
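As a hypothetical aid for step 1, the sketch below tracks a screening finding for each of the eight prohibited practices (the Finding categories, practice labels, and helper function are assumptions for illustration, not a prescribed format):

```python
from enum import Enum

class Finding(Enum):
    NOT_APPLICABLE = "system does not engage in this practice"
    NEEDS_REVIEW = "unclear; requires legal assessment"
    IN_SCOPE = "practice identified; remediation required"

PROHIBITED_PRACTICES = [
    "Harmful manipulation or deception (Art. 5(1)(a))",
    "Exploitation of vulnerabilities (Art. 5(1)(b))",
    "Social scoring (Art. 5(1)(c))",
    "Individual crime risk prediction (Art. 5(1)(d))",
    "Untargeted facial image scraping (Art. 5(1)(e))",
    "Emotion recognition in workplace or education (Art. 5(1)(f))",
    "Biometric categorization for sensitive traits (Art. 5(1)(g))",
    "Real-time remote biometric identification (Art. 5(1)(h))",
]

def open_items(assessment: dict[str, Finding]) -> list[str]:
    """Practices not yet cleared; anything unassessed defaults to NEEDS_REVIEW."""
    return [practice for practice in PROHIBITED_PRACTICES
            if assessment.get(practice, Finding.NEEDS_REVIEW) is not Finding.NOT_APPLICABLE]

# Example: everything cleared except one practice still under review.
assessment = {p: Finding.NOT_APPLICABLE for p in PROHIBITED_PRACTICES}
assessment["Social scoring (Art. 5(1)(c))"] = Finding.NEEDS_REVIEW
print(open_items(assessment))  # ['Social scoring (Art. 5(1)(c))']
```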
These guidelines are non-binding but provide the Commission's interpretation. Ultimate authority rests with the Court of Justice of the European Union. Providers and deployers must ensure continuous compliance throughout AI system lifecycles.