The European Union is advancing its digital strategy by proposing regulatory measures for artificial intelligence (AI) to foster better conditions for the development and use of this groundbreaking technology. Recognizing AI’s potential to transform sectors such as healthcare, transport, manufacturing, and energy with more efficient and sustainable solutions, the European Commission introduced the first EU regulatory framework for AI in April 2021. This pioneering legislation aims to classify AI systems according to the risk they pose to users, with the level of risk dictating the degree of regulation required.
Key aspirations of the EU Parliament for AI legislation include ensuring that AI systems used within the EU are safe, transparent, accountable, non-discriminatory, and environmentally friendly, with oversight by people rather than by automation to prevent harmful outcomes. Additionally, the Parliament seeks a uniform, technology-neutral definition of AI that can be applied to future AI systems.
The proposed AI Act sets out rules based on AI systems’ risk levels, ranging from minimal to unacceptable, with outright prohibitions on AI applications considered a threat to people’s safety. High-risk AI systems, those affecting safety or fundamental rights, will undergo rigorous assessment before being placed on the market and throughout their lifecycle. The Act also addresses generative AI, such as ChatGPT, imposing transparency and safety requirements, and defines a limited-risk category whose systems must meet minimal transparency criteria so that users are aware they are interacting with AI.
On December 9, 2023, the EU Parliament reached a provisional agreement with the Council on the AI Act, marking a significant step toward formalizing the world’s first comprehensive AI regulations. This agreement, pending formal adoption by both Parliament and the Council, sets a global precedent for AI governance.
Source – https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence