AI Act: Discover the European Regulation for Artificial Intelligence

Europe has taken a significant step in regulating artificial intelligence with the entry into force of the European AI Act on August 1, 2024. This globally pioneering legislation aims to govern the development and use of AI systems based on the risks they pose, thereby establishing an unprecedented regulatory framework.

This article details the main provisions of the law, its potential impact, and the challenges and criticisms it raises. The full text is available on the official website. Here is the key information:

Entry into Force and Risk Classification

The European AI Act is now in force and applies to all AI systems, whether existing or under development. This legislation is distinguished by its classification of AI systems into four risk levels:

  • no risk,
  • minimal risk,
  • high risk,
  • and prohibited AI systems.

This tiered approach targets the areas where risks are most significant while avoiding excessive regulation of less problematic technologies.
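To make the four-tier structure concrete, here is a minimal sketch in Python that models the risk levels and the rough regulatory burden each one carries, as described in this article. The example use cases and the `obligations` summary are illustrative assumptions, not text from the regulation itself:

```python
from enum import Enum


class RiskLevel(Enum):
    """The four risk levels described in this article (simplified)."""
    NO_RISK = "no risk"
    MINIMAL_RISK = "minimal risk"
    HIGH_RISK = "high risk"
    PROHIBITED = "prohibited"


# Hypothetical mapping of example use cases to risk levels,
# loosely based on the categories discussed in this article.
EXAMPLE_CLASSIFICATION = {
    "spam filter": RiskLevel.MINIMAL_RISK,
    "biometric identification": RiskLevel.HIGH_RISK,
    "critical infrastructure control": RiskLevel.HIGH_RISK,
    "manipulative decision-making system": RiskLevel.PROHIBITED,
}


def obligations(level: RiskLevel) -> str:
    """Very rough summary of the regulatory burden per level."""
    return {
        RiskLevel.NO_RISK: "no specific obligations",
        RiskLevel.MINIMAL_RISK: "light-touch obligations",
        RiskLevel.HIGH_RISK: "strict requirements: data quality, "
                             "human oversight, transparency",
        RiskLevel.PROHIBITED: "banned from February 2025",
    }[level]
```

A company could use a structure like this internally to triage its AI portfolio before a formal legal assessment; the actual classification criteria in the regulation are of course far more detailed.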

No Risk and Minimal Risk

AI systems classified as “no risk” and “minimal risk” represent the majority of technologies currently in use. For these categories, regulation is relatively light, allowing companies to continue innovating without being hampered by heavy regulatory constraints.

Approximately 85% of AI companies fall into the “minimal risk” category, meaning they face few regulatory obligations and can focus on developing and improving their products.

High Risk and Prohibited AI Systems

AI systems deemed high-risk, such as those collecting biometric data or used for critical infrastructure and employment decisions, will be subject to strict regulations.

These systems will need to prove that their training data is appropriate and that adequate human oversight is in place to prevent abuse and potentially catastrophic errors.

Prohibited AI systems, on the other hand, include practices deemed unacceptable, such as manipulating user decision-making or expanding facial recognition databases via scraping.

These prohibitions will come into effect in February 2025, giving companies time to cease these activities.

Strict Regulation and Compliance Deadline

Regulation for high-risk AI systems is particularly rigorous. Companies will need to provide evidence of the quality of the training data used by their AI systems, demonstrate continuous human oversight, and ensure the transparency and security of their technologies.

These requirements aim to minimize the risks of abuse, discrimination, and privacy violations.

Companies have between three and six months to comply with this new legislation. Non-compliance can result in fines of up to 7% of their total annual turnover.

This measure aims to ensure that all companies take the regulation seriously and invest the necessary resources to comply with the new requirements.
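The fine ceiling cited above is a simple percentage cap. The following snippet sketches the arithmetic, assuming only the 7%-of-turnover figure mentioned in this article (the turnover amount is a hypothetical example):

```python
def max_fine(annual_turnover_eur: float, cap_rate: float = 0.07) -> float:
    """Upper bound on a non-compliance fine: up to 7% of total
    annual turnover, per the figure cited in this article."""
    return annual_turnover_eur * cap_rate


# Hypothetical example: a company with EUR 500 million annual turnover
# would face a maximum fine of EUR 35 million.
fine = max_fine(500_000_000)
```

This shows why the penalty scales with company size: the larger the turnover, the larger the maximum exposure.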

You will find detailed information on risk levels and specificities on this page.

Oversight and Enforcement of the Law

The implementation of this law will be overseen by an AI Office within the European Commission, whose staff will be expanded to ensure effective monitoring.

An AI Board, composed of delegates from the 27 EU member states, will also be created to harmonize the application of the law across the European Union. This governance structure aims to ensure consistent and effective application of the rules, while allowing for some flexibility to address national specificities.

Investments in AI

To support this new regulation, the European Commission plans to boost investments in AI. One billion euros will be invested in 2024, with a goal of reaching up to 20 billion euros by 2030.

These investments aim to encourage innovation while ensuring that new technologies comply with the safety and ethical standards established by the new law. This should also enable Europe to remain competitive on the global stage in AI development.

Criticisms and Necessary Revisions

Despite the significant advances brought by this legislation, it is not without criticism. Some experts believe that clarifications and adjustments are necessary, particularly regarding the risk levels of certain technologies.

For example, the distinction between different risk levels can sometimes seem unclear, and clarifications are requested to better guide companies and regulators.

Human Rights and Biometrics

One of the major criticisms concerns the protection of human rights. Some experts believe that the current legislation does not go far enough to protect individuals from the potential abuses of AI technologies.

The collection and use of biometric data, in particular, raise concerns about privacy and mass surveillance. Revisions may be necessary to strengthen protections in these areas and ensure that AI technologies are used ethically and responsibly.

Police and National Security

Another area of concern is the use of AI by police forces and national security agencies. Although the law provides strict regulations for high-risk AI systems, some experts believe that additional measures are needed to prevent abuse and ensure that these technologies are not used disproportionately or discriminatorily.

Transparency and accountability in the use of AI by public authorities will be essential to maintain public trust.

Conclusion

The European AI Act represents a major step forward in regulating artificial intelligence technologies. By classifying AI systems based on the risks they present and imposing strict regulations for high-risk technologies, Europe is establishing a pioneering regulatory framework that could serve as a model for other regions of the world.

However, this legislation is not without challenges. Clarifications and adjustments will likely be necessary to address concerns raised by experts and ensure that the law effectively protects human rights and prevents abuse. By carefully monitoring the implementation of this law and continuing to invest in responsible innovation, Europe can hope to leverage the benefits of AI while minimizing the associated risks.

The coming months and years will be crucial to observe how this legislation will be applied and adjusted in response to technological developments and feedback from stakeholders. If successful, the European AI Act could mark the beginning of a new era of technological regulation, where innovation and security go hand in hand for the benefit of society as a whole.

Glen