
The AI Act: a legal framework that’s too restrictive for AI in Europe?


Artificial intelligence is profoundly transforming our world: analysts foresee new work environments, jobs at risk, and an upheaval unprecedented in the 21st century.

Faced with this technological revolution, the European Union adopted an ambitious legal framework: the AI Act.

This regulation, which came into force in August 2024, aims to regulate the development and use of AI to ensure citizens are protected and their fundamental rights are respected. Let’s look at it together:

What is the AI Act?

The AI Act, or Artificial Intelligence Regulation, is a European legislative text. It is the first regulation in the world to establish a comprehensive legal framework for the development and use of artificial intelligence. We already covered this topic in this article. Today, however, the AI Act is often criticized for its rigidity and seen as a brake on innovation.

The main objective of the AI Act is to ensure that AI systems placed on the European market respect the European Union’s values. This regulation aims to frame the development and deployment of AI in a way that ensures citizens are protected and their fundamental rights are respected.

When did the AI Act come into force?

The AI Act came into force on August 1, 2024, a date that marks a turning point in AI regulation worldwide. Companies and organizations that develop or use AI systems must therefore comply with the new rules, whose obligations apply in stages: the bans on prohibited practices took effect first (February 2025), followed by the rules for general-purpose AI (August 2025) and most obligations for high-risk systems (August 2026).

Also read: Is AI plagiarism? Here’s what the law says

Who adopted the AI Act?

The AI Act was adopted by the European Union’s institutions, namely:

  • The European Parliament: It played a central role in negotiating and adopting the final text, ensuring that citizens’ interests were taken into account. Members of the European Parliament voted in favor of adopting the AI Act, highlighting the importance of regulating this emerging technology.
  • The Council of the European Union: It represents the Member States and helped find a consensus among the different countries. The Member States therefore agreed on the common rules that will apply across the entire European Union.
  • The European Commission: It initially proposed the draft regulation and led the negotiations. The Commission played a driving role in shaping this ambitious legislative text.

How does the AI Act classify AI systems?

The AI Act distinguishes four levels of risk associated with AI systems:

  • Unacceptable risk: AI systems that manipulate people, especially children, or exploit vulnerabilities are banned.
  • High risk: AI systems that can have significant consequences for people’s safety or fundamental rights are subject to strict obligations (e.g., health, justice, employment).
  • Limited risk: AI systems that present limited risk are subject to less strict obligations, but must still comply with general principles (e.g., transparency, non-discrimination).
  • Minimal risk: AI systems that present minimal risk are not subject to specific obligations.

Is there a “ChatGPT rule” in the AI Act?

There is no specific “ChatGPT rule” explicitly mentioned in the AI Act.

However, general-purpose AI models such as those behind ChatGPT are covered by the regulation's dedicated rules for general-purpose AI, while chatbots themselves fall under transparency obligations. In practice, this means requirements around transparency, technical documentation, and human oversight.

The AI Act notably requires that users be informed when they are interacting with an AI system, and it imposes documentation and copyright-related requirements on the data used to train general-purpose models. The text has nevertheless sparked online rumors, including the claim that ChatGPT would be banned in 2025, which is completely false.

What the AI Act bans:

The AI Act categorically bans AI systems that:

  • Manipulate people: For example, systems designed to exploit individuals’ vulnerabilities, especially those of children.
  • Socially score individuals: Systems that rate people’s trustworthiness or assign them a score based on their social behavior or personal characteristics are banned.
  • Use real-time biometric identification in public spaces: This includes facial recognition in the street, except in narrowly defined cases (such as certain law-enforcement situations) and under strict conditions.

See also: Which jobs are threatened by artificial intelligence?

The potential downsides of the AI Act

While the AI Act is a major step forward in AI regulation, it also raises some questions:

  • A brake on innovation:

The AI Act, although it aims to ensure the safe and ethical development of artificial intelligence in Europe, also risks slowing innovation in the sector. A regulatory framework seen as too strict could particularly penalize small businesses, limiting their ability to innovate and grow in the AI field.

  • International competitiveness:

The AI Act could put European companies at a disadvantage compared with their American and Chinese competitors, who operate under less strict regulations.

  • Implementation complexity:

The regulation is complex, and implementing it will require significant efforts from companies and public authorities.

The AI Act is an innovative piece of legislation designed to ensure the safe and ethical development of artificial intelligence in Europe. By bringing together the various European institutions, it establishes a coherent and ambitious approach to regulating AI.

This text seeks to strike a fair balance between protecting citizens and promoting innovation. But could European AI regulation become a technological brake for Europe in the 21st century? Share your opinion in the comments!

Glen