Artificial intelligence experts tend to fall into one of two camps: the technology will either destroy our lives completely or improve them enormously. That is why this week's European Parliament debate on how AI systems should be regulated matters so much.
But can the technology actually be made safe?
It took the European Parliament two years to come up with a definition of an AI system – software that can “for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with”.
The Parliament is expected to vote on its Artificial Intelligence Act this week. The proposed legislation contains the first rules of their kind on AI: not voluntary codes, but legally binding requirements with which businesses must comply.
AI does not respect borders, so international collaboration will be essential. But numerous challenges remain, not least that different territories have different ideas:
The EU's proposals are the strictest, and include grading AI products according to their impact. The UK, by contrast, is folding AI rules into the remits of its existing regulators.
The US currently has only voluntary codes, while China wants companies to notify users whenever an AI algorithm is in use.
“If people trust it, then they’ll use it,” said Jean-Marc Leclerc from IBM’s Government and Regulatory Affairs team.
AI is capable of improving people's lives in incredible ways. It is already being used to tackle pandemics and climate change, to help paralysed people walk again, and to discover new antibiotics.
But what about predicting how likely a person is to commit a crime or screening candidates for job interviews?
The European Parliament wants companies to disclose the risks attached to each AI product, with penalties for those that break the rules. But can developers accurately predict, or effectively control, how the public might use their products?
So far, artificial intelligence has largely been self-policed.
That raises the question of who actually gets to write the rules. If businesses write them, will they put profits before people? And will they try to get as close as possible to the lawmakers tasked with setting them out?
Some experts believe it is important to listen not just to corporations: the people affected by these transformations should also get a say in the matter.
OpenAI's ChatGPT was released to the public just over six months ago. It can already generate essays, pass professional exams, and even plan people's holidays; large language models are evolving at a phenomenal rate. A growing number of AI experts are raising concerns about the technology's huge potential for harm, while the EU's Artificial Intelligence Act will not come into force until 2025 at the earliest, which some fear will be "way too late".