On March 13, 2024, the European Parliament approved the EU Artificial Intelligence Act (“AI Act”), a sweeping AI governance framework and the first comprehensive law in the world aimed at ensuring AI systems are safe and respectful of individual rights. Like Europe’s General Data Protection Regulation (“GDPR”) before it, the AI Act has broad extraterritorial reach and is accompanied by steep fines that are sure to get senior leadership and board attention. The AI Act is a risk-based framework, and the implementation periods of the new law vary depending on the category of risk an AI system triggers.
‘WHAT’ - What Systems Are Caught?
Not all software systems will fall under the new AI Act. The law is designed to capture software developed with machine-learning (whether supervised or not), logic-based, statistical, knowledge-based or hybrid approaches that can generate outputs based on a set of human-defined objectives.
The AI Act officially defines an AI system as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
‘WHY’ - Why is the EU AI Act in Place?
The AI Act takes a risk-based approach to assessing the impact of AI systems on individuals’ fundamental rights. The new regime aims to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence. It intends to protect individuals’ health, safety and fundamental rights against any harmful effects of AI.
The AI Act imposes compliance obligations on persons who develop, deploy, import or distribute AI systems in the EU market.
‘WHERE’ - Where is the EU AI Act Enforced?
Like other EU digital laws, such as the GDPR, Digital Services Act, Digital Markets Act and the recently passed Data Act, the AI Act has extraterritorial application, capturing providers (i.e. product manufacturers and developers), deployers (i.e. commercial users), importers and distributors of AI systems in jurisdictions within and outside the EU.
The AI Act impacts all AI systems, with particular focus on high-risk and general-purpose AI, developed or deployed anywhere in the world, that are put onto the EU market or whose output is intended to be used in the EU. Regardless of their location, companies that wish to operate within the EU or offer their AI systems on the EU market must comply with the obligations set out in the AI Act.
Take a US-headquartered multinational bank that uses an AI system licensed from a US provider to help it assess the performance of its global personnel. The system is hosted, controlled and used by the US bank, but because the performance metrics generated by the AI system are intended to apply to EU personnel, the system will be in scope of the AI Act.
There are exceptions to compliance with these new obligations; however, these are limited to military AI systems and AI systems used for the sole purpose of scientific research. Open-source AI systems also benefit from exemptions in many circumstances.
‘WHEN’ - When Do You Need to Comply?
The AI Act will become law after final formalities and will generally become enforceable 24 months after its entry into force, with certain exceptions, for example:
- prohibited AI systems in existence on this date will have 6 months to be removed from the market;
- AI systems already in existence and deployed before this date (and which are not prohibited) do not need to comply with the AI Act, unless and until “substantial modifications” are made;
- provisions regarding member states setting penalties apply after 12 months; and
- obligations relating to high-risk AI systems subject to EU harmonisation legislation apply after 36 months.
‘WHO’ - Who Regulates the EU AI Act?
Both pan-European and Member State regulators will enforce the new law. At a national level, each Member State will enforce the AI Act through at least one notifying authority and at least one market surveillance authority.
A European Artificial Intelligence Board, comprising the Member State regulators, will be responsible for the implementation and enforcement of the AI Act. The Board is tasked with providing guidance on the interpretation and application of the Act, monitoring AI systems within the EU for risks and recommending ways to mitigate potential harm.
Additionally, a European AI Office has been established within the European Commission to support the EU’s approach to AI and monitor compliance by general-purpose AI systems.
‘HOW MUCH’ - How Much are the Fines for Non-Compliance?
Fines are set at the higher of:
- €35 million or 7% of total worldwide annual turnover for non-compliance with the provisions regulating prohibited AI systems;
- €7.5 million or 1% of total worldwide annual turnover for supplying incorrect, incomplete or misleading information to authorities;
- €15 million or 3% of total worldwide annual turnover for non-compliance with other requirements or obligations.
For SMEs, the fine in each case will be capped at the lower of the percentage of turnover and the fixed sum.
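Purely as an illustration of the “higher of” (and, for SMEs, “lower of”) mechanics described above, the sketch below restates the fine caps in code. The function name and the example turnover figures are invented for this illustration and are not drawn from the Act itself.

```python
def fine_cap(turnover_eur: float, fixed_eur: float,
             pct: float, is_sme: bool = False) -> float:
    """Illustrative cap: the higher of a fixed sum or a percentage of
    total worldwide annual turnover; for SMEs, the lower of the two."""
    pct_amount = turnover_eur * pct
    return min(fixed_eur, pct_amount) if is_sme else max(fixed_eur, pct_amount)

# Prohibited-AI tier (higher of EUR 35 million or 7% of turnover):
# for a hypothetical EUR 1bn-turnover group, 7% (EUR 70m) exceeds EUR 35m.
large_group_cap = fine_cap(1_000_000_000, 35_000_000, 0.07)

# Same tier for a hypothetical SME with EUR 10m turnover: the lower amount
# applies, i.e. 7% of turnover (EUR 0.7m) rather than the fixed EUR 35m.
sme_cap = fine_cap(10_000_000, 35_000_000, 0.07, is_sme=True)
```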
This informational piece, which may be considered advertising under the ethical rules of certain jurisdictions, is provided on the understanding that it does not constitute the rendering of legal advice or other professional advice by Goodwin or its lawyers. Prior results do not guarantee a similar outcome.
Contacts
- Gretchen Scott, Partner (/en/people/s/scott-gretchen)
- Omer Tene, Partner (/en/people/t/tene-omer)
- Rachel Thurbon, Associate (/en/people/t/thurbon-rachel)