European lawmakers have recently approved what is being hailed as the most comprehensive legislation on artificial intelligence in the world. The new law sets out strict rules for developers of AI systems and introduces new restrictions on how the technology can be used.
The legislation, known as the AI Act, includes bans on certain uses of AI, new transparency rules, and requirements for risk assessments on high-risk AI systems. It is expected to take effect gradually over the next few years and will apply to AI products in the EU market, regardless of where they were developed. Violators of the law could face fines of up to 7% of a company’s worldwide revenue.
The AI Act is the first regulation globally to focus on the safe and human-centric development of AI. Large AI companies are unlikely to want to risk losing access to the EU market, which has a population of approximately 448 million people. The impact of the law is expected to be global, as several other jurisdictions worldwide are also considering new rules for AI.
Despite some pushback from industry groups and European governments, the legislation includes prohibitions on certain AI uses along with requirements for providers of general-purpose AI models and the most powerful AI systems. Critics have argued that the law should target risky uses of the technology rather than impose blanket rules on all general-purpose AI models.
The AI Act also includes measures for labeling deepfakes, conducting risk assessments on high-risk AI systems, and ensuring the use of high-quality data. Lawmakers designed the legislation to be flexible enough to adapt to a rapidly evolving technology landscape, allowing the European Commission to update it in response to market and technological developments.
Overall, the EU’s AI Act represents a significant step towards regulating AI technology and ensuring its safe and ethical development.