
Summary: EU AI Act webinar on expected international impacts (K&L Gates)

Mar 7, 2024
3 min read

On February 27, 2024, K&L Gates held a webinar, Regulating AI—Part IV: The EU AI Act – The Expected International Impacts, to discuss the complexities and implications of the EU AI Act.

Topics explored

  • The potential impact of AI on businesses, governance, and regulatory landscapes across EU member states
  • The EU AI Act's structure and enforcement challenges
  • The broader implications for AI development and deployment within and outside the European Union

Discussion overview

Panelists shared the view that the EU AI Act lacks a strong vision for fostering AI development within the EU. The legislation includes aspects intended to support innovation, such as regulatory sandboxes and real-world testing, as well as Article 55, which aims to help smaller market players build AI expertise. However, the Act primarily focuses on preventing risks rather than promoting innovation.

The discussion also covered challenges the EU AI Act faced due to differing opinions within the European Parliament. Some political groups believed the Act did not go far enough, particularly concerning prohibitions on facial recognition and social scoring.

The Parliament made significant concessions, particularly regarding national security exemptions and the regulation of foundation models. The Act now includes provisions that involve model developers and other players in the AI value chain. This inclusion aims to ensure that downstream providers and deployers have the necessary information to comply with the law, addressing potential issues such as biases in training datasets.

EU AI Act Must-Knows

Structure and key elements

The EU AI Act is structured around a risk-based approach, with different levels of regulation and obligations depending on the risk posed by an AI application. The Act defines four levels of risk: unacceptable, high, limited, and minimal.

Prohibited practices are clearly outlined, and high-risk applications face stringent rules and compliance burdens. The Act also introduces special rules for general-purpose AI (GPAI) models, with systemic-risk GPAI models subject to stricter scrutiny. The Act's implementation timeline is phased, with most rules applying two years after the text's entry into force, though with some exceptions. The AI Office will play a crucial role in defining classification rules and providing guidance for the Act's enforcement.

Extraterritorial reach

The EU AI Act has an extraterritorial reach similar to the GDPR, affecting various stakeholders, including AI system providers and deployers. The Act enforces a principle-based approach to AI and could require entities with products or services targeting the EU market to comply with EU regulations. This includes documentation and accountability measures to ensure compliance. The Act applies to providers outside the EU if their AI systems are used within the EU, emphasizing the global impact of the legislation.

Challenges in enforcement

The enforcement of the EU AI Act poses significant challenges due to the variety of authorities involved, from national cybersecurity agencies to data protection authorities. This could lead to inconsistencies and challenges similar to those experienced with GDPR enforcement. 

It was noted, however, that some EU enforcement entities are already overwhelmed with their current responsibilities, making enforcement of the EU AI Act a daunting task.

The EU AI Act's enforcement will depend on the engagement of stakeholders, including companies and academics, in providing input and participating in regulatory sandboxes and standardization efforts. The Act's vagueness in certain areas may offer opportunities for shaping its application through guidelines, standards, and codes of conduct.

Legal challenges and class action risks

The possibility of legal challenges to the EU AI Act was discussed, with panelists anticipating that enforcement will likely lead to class action lawsuits similar to those seen under the GDPR. The EU AI Act has implications for global stakeholders, including those outside the EU.

The Act requires providers to supply deployers with the necessary information for compliance, affecting the entire AI value chain. In addition, the Act emphasizes accountability and documentation, with all stakeholders using AI needing to demonstrate their compliance. 

The Act's penalties for non-compliance are substantial, with fines for prohibited practices reaching up to 35 million euros or 7% of worldwide annual turnover, whichever is higher. The panelists emphasized the need for companies to rethink their internal compliance structures to address the Act's requirements effectively. This involves assembling multidisciplinary teams that include cybersecurity, privacy, and technical expertise, rather than relying solely on data protection officers. Panelists suggested that even entities not directly involved with high-risk AI will need to document and demonstrate compliance with the EU AI Act.

A webinar replay is available on the K&L Gates website.


Keeping track of global GenAI compliance standards 

Periodically, Sema publishes a no-cost newsletter covering new developments in GenAI code compliance. The newsletter shares snapshots and excerpts from Sema's GenAI Code Compliance Database. Topics include recent regulations, lawsuits, stakeholder requirements, mandatory standards, and optional compliance standards. The scope is global.

You can sign up to receive the newsletter here.

About Sema Technologies, Inc. 

Sema is the leader in comprehensive codebase scans with over $1T of enterprise software organizations evaluated to inform our dataset. We are now accepting pre-orders for AI Code Monitor, which translates compliance standards into “traffic light warnings” for CTOs leading fast-paced and highly productive engineering teams. You can learn more about our solution by contacting us here.

Disclosure

Sema publications should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information purposes only. To request reprint permission for any of our publications, please use our “Contact Us” form. The availability of this publication is not intended to create, and receipt of it does not constitute, an attorney-client relationship. The views set forth herein are the personal views of the authors and do not necessarily reflect those of the Firm.
