How Europe is setting the guardrails for AI in the rest of the world
Authorities worldwide are scrambling to develop AI regulations, including those in the European Union, where draft legislation faced a pivotal moment on Thursday.
A European Parliament committee voted to strengthen the flagship legislative proposal as it heads toward passage, part of a years-long effort by Brussels to draw up guardrails for artificial intelligence. Those efforts have taken on greater urgency with the rapid rise of chatbots like ChatGPT, which show both the benefits the new technology can bring and the new risks it poses.
HOW ARE THE RULES APPLIED?
The AI Act, first proposed in 2021, will govern any product or service that uses an artificial intelligence system. The law will classify AI systems according to four levels of risk, from minimal to unacceptable. Riskier applications will face tougher requirements, including using accurate data and being more transparent. Think of it as a “risk management system for AI,” said Johann Laux, an expert at the Oxford Internet Institute.
WHAT RISKS ARE THERE?
One of the EU’s main goals is to guard against any AI threats to health and safety and to protect fundamental rights and values.
That means some uses of AI are banned outright, such as “social scoring” systems that judge people based on their behavior. Also prohibited is AI that exploits vulnerable people, including children, or that uses harmful subliminal manipulation, such as an interactive talking toy that encourages dangerous behavior.
Lawmakers strengthened the proposal by voting to ban predictive policing tools, which crunch data to forecast where crimes will happen and who will commit them. They also approved a broader ban on remote facial recognition, save for a few narrow law enforcement exceptions such as preventing a specific terrorist threat; the technology scans passers-by and uses AI to match their faces to a database.
The goal is “to avoid a controlled society based on AI,” Brando Benifei, an Italian lawmaker helping lead the European Parliament’s AI efforts, told reporters on Wednesday. These technologies, he said, have the potential to be used for bad as well as good, and the risks are too great.
When AI is used in high-risk categories like employment and education, which could affect the course of a person’s life, the systems must meet strict requirements such as being transparent with users and putting risk assessment and mitigation measures in place.
According to the EU’s executive arm, the majority of AI systems, such as those used in video games or spam filters, fall into the low- or no-risk category.
WHERE DOES CHATGPT STAND?
The original 108-page proposal mentioned chatbots only briefly, requiring that they be labeled so users know they’re interacting with a machine. Negotiators later added provisions requiring general-purpose AI systems like ChatGPT to meet some of the same standards as high-risk systems.
One key addition is a requirement to thoroughly document any copyrighted material used to teach AI systems how to generate text, images, video, or music that resembles human work. That would let content creators know whether their blog posts, digital books, scientific papers, or songs had been used to train the algorithms powering ChatGPT or other systems. They could then decide whether their work had been plagiarized and take legal action.
WHY ARE EU RULES SUCH A PRIORITY?
The European Union is not a major player in cutting-edge AI development; that role belongs to the United States and China. But Brussels often sets the pace with regulations that tend to become de facto global standards.
“Europeans are, globally speaking, fairly wealthy and there are a lot of them,” Laux said, so companies and organizations often decide that, with 450 million consumers in the bloc’s single market, it is easier to comply than to develop different products for different regions.
But it is not just a matter of enforcement. By establishing common rules for AI, Brussels is also trying to develop the market by instilling confidence among users, Laux said.
The idea is that if you can get people to trust applications and AI, they will also use it more, according to Laux. “And they will unleash the economic and social potential of AI when they use it more frequently.”
WHAT HAPPENS IF YOU VIOLATE THE RULES?
Violations can draw fines of up to 30 million euros ($33 million) or 6% of a company’s annual global revenue, which for tech giants like Google and Microsoft could run into the billions.
WHAT’S NEXT?
It could be years before the rules fully take effect. EU lawmakers are now due to vote on the draft legislation at a plenary session in mid-June. It then enters three-way negotiations involving the bloc’s 27 member states, the Parliament, and the executive Commission, where it could face further changes as they haggle over the details. Final approval is expected by the end of the year, or early 2024 at the latest, likely followed by a grace period of up to two years for companies and organizations to adapt.