AI Act Gets EU Approval: What Companies Can Do Now to Prepare

Image: Unsplash

March 13, 2024 - By TFL

The European Parliament has overwhelmingly approved a plan to regulate artificial intelligence on the heels of member states agreeing on harmonized rules in December. Following a vote on Wednesday (with 523 votes in favor, 46 against, and 49 abstentions), Parliament said in a statement that the AI Act “aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field” by establishing “obligations for AI based on its potential risks and level of impact.”

During the plenary debate on Tuesday, the Internal Market Committee co-rapporteur Brando Benifei said: “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very center of AI’s development.”

Obligations tied to the development and use of high-risk AI systems, along with transparency requirements for general-purpose AI systems, are among the key tenets of the newly approved AI Act … 

Banned applications: The new rules ban certain AI applications that threaten citizens’ rights, including biometric categorization systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behavior or exploits people’s vulnerabilities will also be forbidden.

Law enforcement exemptions: The use of remote biometric identification (“RBI”) systems by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations. “Real-time” RBI can only be deployed if strict safeguards are met, e.g. its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorization. Such uses may include, for example, a targeted search for a missing person or the prevention of a terrorist attack. Using such systems after the fact (“post-remote RBI”) is considered a high-risk use case, requiring judicial authorization linked to a criminal offence.

Obligations for high-risk systems: Clear obligations are also foreseen for other high-risk AI systems (due to their significant potential harm to health, safety, fundamental rights, environment, democracy, and the rule of law). Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice, and democratic processes (e.g. influencing elections). 

Such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.

Transparency requirements: General-purpose AI (“GPAI”) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training. The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing, and mitigating systemic risks, and reporting on incidents. Additionally, artificial or manipulated images, audio, or video content (“deepfakes”) need to be clearly labelled as such.

Measures to support innovation and SMEs: Regulatory sandboxes and real-world testing will have to be established at the national level, and made accessible to SMEs and start-ups, to develop and train innovative AI before its placement on the market.

Action Steps for Companies: In light of the potential fines for violations of the AI Act (up to €35 million or 7 percent of annual worldwide turnover) and the wide scope of applicability (the Act applies to providers and deployers of AI systems that are located both in the EU and outside the EU in the event that their systems affect EU citizens, entities, or the EU market), companies are expected to start preparing ahead of the Act officially becoming law by May or June of this year and the full set of regulations coming into effect by mid-2026.
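To make the penalty ceiling concrete: under the AI Act, the maximum fine for the most serious violations is the greater of €35 million or 7 percent of annual worldwide turnover. A minimal sketch of that calculation, using hypothetical turnover figures (not drawn from the article):

```python
def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Return the AI Act's penalty ceiling for the most serious violations:
    whichever is higher of a flat EUR 35 million or 7% of annual worldwide
    turnover. Turnover figures passed in here are hypothetical examples."""
    return max(35_000_000, 0.07 * annual_worldwide_turnover_eur)

# For a company with EUR 1 billion in turnover, 7% (EUR 70M) exceeds the flat cap.
print(max_fine_eur(1_000_000_000))  # 70000000.0
# For a company with EUR 100 million in turnover, the flat EUR 35M cap governs.
print(max_fine_eur(100_000_000))    # 35000000.0
```

The "whichever is higher" structure means the percentage-based figure only bites for companies whose worldwide turnover exceeds €500 million; below that threshold, the flat €35 million cap applies.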

In a newly published note, Paul Hastings LLP’s Kimia Favagehi says that companies can act now by: (1) assessing their AI systems for category of risk; (2) conducting adequate due diligence to ensure the quality of data sets, privacy and security safeguards, and overall ethical use; and (3) consulting with experts to ensure compliance with the various legal, business, and ethical considerations associated with AI.

At the same time, Simpson Grierson’s Michelle Dunlop, Karen Ngan, and Richard Watts state that businesses currently developing and/or deploying AI would be well-advised to start preparing for the AI Act by: (1) auditing the use, development, and supply of AI systems within the business and its supply chains; (2) mapping and documenting relevant processes (e.g., databases, training, cybersecurity); (3) considering the level of risk the business’s current and/or proposed AI systems will likely fall under for the purposes of the AI Act, and understanding the applicable requirements; (4) conducting privacy impact and algorithmic assessments before AI systems are implemented or developed, and putting in place procedures for ongoing risk identification; (5) reviewing existing contractual arrangements with third-party providers and suppliers; and (6) taking note of the staggered compliance deadlines for enforcement under the AI Act.