A Global First in Regulating Artificial Intelligence
In a landmark move, the European Union has adopted the world’s first comprehensive regulation on artificial intelligence, the EU AI Act, which officially entered into force on August 1, 2024. The regulation fosters innovation and protects fundamental rights by establishing a harmonized legal framework for developing, deploying, and overseeing AI systems across the EU. It also sets a global precedent for responsible AI governance, as its influence is expected to extend beyond European borders.

Dr Dimitrios Marinos, our lecturer at HSLU, has deep expertise in artificial intelligence, big data analytics, digital transformation, AI ethics, and data governance, among other fields.
A Risk-Based Framework for Responsible AI
At the heart of the EU AI Act lies a risk-based approach that categorizes AI systems into four levels: unacceptable risk, high risk, limited risk, and minimal risk. The level determines how strictly a system is regulated and monitored. AI systems that pose an “unacceptable risk”, such as those involving social scoring or manipulative biometric surveillance, face an outright ban. High-risk systems, such as those used in law enforcement, healthcare, and critical infrastructure, must meet strict obligations, including conformity assessments, robust data management, and human oversight.
By contrast, limited-risk systems like chatbots or content generators only have to inform users that they are interacting with AI. Minimal-risk applications, such as AI in video games or spam filters, fall largely outside the scope of regulation. Thanks to this layered structure, the Act enables flexible governance while safeguarding individual rights and the public interest.
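The four-tier structure described above can be pictured as a simple lookup from risk level to headline consequence. The sketch below is purely illustrative: the tier names and obligation strings are paraphrased from this article's summary, not taken from the legal text of the Act.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act's risk-based approach."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Illustrative mapping of each tier to its core regulatory consequence,
# paraphrased from the summary above (not an exhaustive legal checklist).
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited outright"],
    RiskTier.HIGH: [
        "conformity assessment",
        "robust data management",
        "human oversight",
    ],
    RiskTier.LIMITED: ["inform users they are interacting with AI"],
    RiskTier.MINIMAL: ["largely outside the scope of the Act"],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the headline obligations for a given risk tier."""
    return OBLIGATIONS[tier]
```

For example, `obligations_for(RiskTier.HIGH)` returns the three headline duties for high-risk systems, while an unacceptable-risk system maps to a ban.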
General-Purpose AI and Enforcement Timeline
The Act also directly addresses General-Purpose AI (GPAI) systems, including the large foundation models behind tools like ChatGPT. Providers of these systems must meet transparency requirements, publish summaries of their training data, and comply with EU copyright law. If a model presents systemic risks, its provider must also carry out safety testing and provide technical documentation.
Importantly, the legislation rolls out step by step. Here is a summary of the most critical deadlines:
Date | Provision |
Aug 1, 2024 | AI Act enters into force |
Feb 2, 2025 | Prohibitions on unacceptable-risk AI become binding |
Aug 2, 2025 | GPAI obligations and governance structures apply |
Aug 2, 2026 | High-risk system requirements become enforceable |
Aug 2, 2027 | Full compliance deadline for all regulated systems |
This staged implementation is intended to give developers, regulators, and businesses time to adjust to the new legal landscape.
Global Impact and Legal Reach
The AI Act has extraterritorial scope: companies outside the EU must comply if their AI systems affect users within the Union. This approach mirrors the GDPR and reinforces the EU’s leadership in global digital governance.
A new European AI Office will coordinate enforcement, working together with national regulators and an EU-wide Artificial Intelligence Board. Fines for violations are steep: banned uses can result in penalties of up to €35 million or 7% of global annual turnover. Less severe breaches may still cost companies between €7.5 million and €15 million.
With the AI Act, Europe is not just regulating artificial intelligence; it is defining a global benchmark for ethical, transparent, and human-centric AI.
We would like to thank Dr Dimitrios Marinos for his dedication and for sharing these valuable insights.
Data is the resource of the 21st century!
Register and join us for a free online Information-Event:
Monday, 11 August 2025 (Online, English)
Monday, 8 September 2025 (Online, German)
Monday, 6 October 2025 (Online, English)
Monday, 3 November 2025 (Online, German)
Programme Info: MSc in Applied Information and Data Science
More Field Reports & Experiences: Professional portraits & study insights
Frequently Asked Questions: FAQ