The European Union on Wednesday gave the final nod to the world’s first major set of ground rules to govern artificial intelligence.
The so-called “AI Act” sorts AI systems into four risk categories, ranging from “unacceptable” – such models would be banned outright – to high, limited, and minimal risk.
The regulation focuses on risky uses of the technology by the public and private sectors. There will be tougher obligations for AI service providers, stricter transparency rules for advanced natural language models like OpenAI’s ChatGPT, and an outright ban on tools deemed too dangerous to deploy.
EU Approves Landmark AI Act to Regulate Artificial Intelligence Technology
Senior EU officials say the rules are meant to protect citizens from the risks posed by a technology that has been developing at breakneck speed, while also fostering innovation in the sector from within the bloc.
EU chief Ursula von der Leyen said in an X post that the regulation ushers in a “pioneering framework for innovative AI, with clear guardrails”, adding that it will set a blueprint for “trustworthy AI” throughout the world while benefiting Europe’s talent pool.
A provisional political agreement on the AI Act was reached in early December and endorsed in the parliament’s Wednesday session, where it received support from 523 lawmakers, with 46 voting against and 49 abstaining.
The 27 EU member states are expected to endorse the AI Act by April; after final checks and approval from the European Council, it will be published in the bloc’s Official Journal in May.
Concerns Mount Regarding the Use of AI to Influence Global Elections
Some EU member nations, including Germany and France, previously advocated for AI self-regulation over the EU-led curbs.
They were concerned that overly strict regulation of the advanced technology could hamper Europe’s efforts to compete with Chinese and American companies in the sector. Both countries are home to some of the continent’s most promising AI startups.
Just last week, the bloc implemented the landmark Digital Markets Act to crack down on anti-competitive behavior by major tech companies, forcing them to open up their services to third parties in markets where they hold a dominant position.
The EU has designated major US tech titans like Alphabet, Apple, Amazon, Meta, Microsoft, and China’s ByteDance – parent of TikTok – as so-called “gatekeepers”, putting them on notice.
There is also rising concern over the potential abuse of artificial intelligence systems despite heavyweights like Google, Amazon, Microsoft, and Nvidia driving the industry forward.
There was excitement in the air when OpenAI wowed the world with the human-like capabilities of its generative AI, ChatGPT. The model, launched in late 2022, can produce poems, write code, and even pass bar and medical exams.
Other models like DALL-E and Midjourney can generate realistic images within seconds from users’ text prompts.
However, with the excitement came a swift realization of its dangers, such as the possibility of AI-generated audio and video deepfakes triggering disinformation campaigns in the lead-up to the biggest global election year.
Companies like Google and OpenAI have limited the ability of their respective large language models – Gemini and ChatGPT – to answer election-related queries, as part of their self-regulation efforts to curb disinformation.
Legal Experts Say the AI Act is a Major Milestone for Regulating AI
Legal experts described the EU’s AI Act as a major milestone for international regulation of artificial intelligence, noting that it could be a precursor for other countries to follow suit.
Mark Ferguson, a public policy expert at law firm Pinsent Masons, said the act was “just the beginning” and that businesses leveraging AI will now need to work closely with the government to understand how the laws will be implemented.
Steven Farmer, a partner and AI expert at international law firm Pillsbury, lauded the EU for making the first move and developing a comprehensive set of regulations.
He noted that the AI Act is similar in scope to the EU’s General Data Protection Regulation (GDPR), which was enacted to protect users’ data from misuse by service providers.
AI Act to Come into Effect From 2025
The AI Act, which takes a risk-based approach, will impose tougher requirements on riskier models and outright ban the AI tools deemed most threatening. The regulation will come into force in 2025, with companies required to comply with most provisions within two years.