EU’s AI Act – Striking a Balance between Innovation and Responsibility

by Divya Kumat, on May 27, 2024 3:41:04 PM

In an ever-evolving technology landscape, regulators face a formidable challenge: how to foster innovation while curbing unethical practices. Nowhere is this delicate equilibrium more evident than in the European Union, which has long stood at the forefront of responsible innovation. The European Union’s (“EU”) Artificial Intelligence (“AI”) Act is the first-ever comprehensive legal framework on AI; it addresses the risks of AI and positions Europe to play a leading role globally. Other jurisdictions are likely to follow similar lines, so it is imperative to understand the Act’s implications and to be sensitised to the importance of harmonising AI rules across the EU, fostering innovation while safeguarding fundamental rights and values. The Act provides EU-wide rules on data quality, transparency, human oversight and accountability. With demanding requirements, significant extraterritorial effect, and fines of up to €35 million or 7% of global annual turnover (whichever is higher), the AI Act will have a profound impact on a large number of companies doing business in the European Union.

Numerous copyright-infringement cases have been filed against AI companies; many remain sub judice, while some have already been decided, with fines and other sanctions imposed. Recently, Google was fined €250 million in France over an AI-related intellectual property breach.

As the world watches closely, the recent enactment of the Artificial Intelligence Act converges decisively with the Union’s relentless pursuit of progress. Proposed on April 21, 2021, the Act was approved by the European Parliament on March 13, 2024, and will be implemented in phases. It applies across all 27 EU member states. The following are the primary features of the Act:

  • Applicability: It applies to entities both inside and outside the EU, as long as the AI system is placed on the EU market or its use affects people located in the EU.
  • Risk-based classification:
    • Unacceptable risk: Practices such as cognitive behavioural manipulation of people, social scoring systems, and biometric identification and categorisation of people (with narrow exceptions for law enforcement purposes) are prohibited.
    • High risk: AI systems used in products covered by the EU’s product safety legislation (e.g. toys, aviation, cars, medical devices and lifts), as well as systems used in areas such as the management and operation of critical infrastructure, and education and vocational training.
    • Limited risk: Systems such as chatbots and deepfakes are subject to transparency obligations.
    • Minimal risk: Systems such as AI-enabled video games and spam filters are largely unregulated.
  • Transparency measures: These apply to every operator, whether the person develops, deploys, imports, or distributes an AI system. They ensure that persons exposed to an AI system are informed in a timely manner that they are interacting with one, that information about the AI-enabled functions is retained, and that artificially generated or manipulated content is marked as such.
  • General-Purpose AI (GPAI) Model: Also known as a foundation model, this is an AI model trained on large amounts of data and capable of performing a wide range of tasks, e.g. GPT-4 and DALL-E. Additional obligations are imposed on such models, such as maintaining technical documentation and a policy for complying with EU copyright law. A GPAI model that poses systemic risks is subject to further obligations.
  • AI Office: This authority has been set up within the European Commission to coordinate compliance, implementation, and enforcement of the Act.
  • Non-compliance: The consequences for non-compliance can be hefty, with penalties ranging from €35 million or 7 percent of global annual turnover (whichever is higher) for the most serious infringements down to €7.5 million or 1.5 percent of turnover, depending on the infringement and the size of the company.

The Artificial Intelligence Act paves the way for the responsible and ethical use of AI. It sets a precedent for other nations to follow, underlining the EU’s commitment to shaping the future of AI in a manner that aligns with societal values and norms. As innovation advances with every passing moment, the EU has stepped in to safeguard its citizens’ rights and interests.

Let us all aim to be innovative in a responsible manner.
