How could AI be made safe? Here are five key challenges ahead
Artificial intelligence experts tend to hold one of two popular opinions: the technology will either destroy our lives completely or improve them enormously. That is why this week’s European Parliament debate on how AI systems should be regulated is so important.
But how could the technology actually be made safe?
Five Key Challenges To Make AI Safe
Agreeing What AI Is
It took the European Parliament two years to come up with a definition of an AI system – software that can “for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with”.
It is expected to vote on its Artificial Intelligence Act this week. The proposed rules would be the first of their kind on AI: rather than relying on voluntary codes, they would legally require businesses to comply.
No Global Agreement Yet
AI has no respect for borders, so international collaboration will be needed. But numerous challenges remain, not least that different territories have different ideas:
The EU has the strictest proposals, which include grading AI products according to their impact. Meanwhile, the UK is folding AI regulation into the remit of its existing regulators.
The US currently has only voluntary codes, while China wants companies to notify users whenever an AI algorithm is being used.
Ensuring Public Trust
“If people trust it, then they’ll use it,” said Jean-Marc Leclerc from IBM’s Government and Regulatory Affairs team.
AI is capable of improving people’s lives in remarkable ways. It is already being used to help address pandemics and climate change, help paralysed people walk again, and discover antibiotics.
But what about predicting how likely a person is to commit a crime or screening candidates for job interviews?
The European Parliament wants companies to inform the public about the risks attached to each AI product, or face penalties for breaking the rules. But can developers accurately predict or effectively control how their products might be used?
Who Gets To Decide The Rules?
So far, artificial intelligence has largely been self-policed.
So when it comes to deciding who actually writes the rules, several questions arise. If businesses take the lead, will they put profits before people? And will they try to get as close as possible to the lawmakers tasked with drafting the regulations?
Some experts believe it is important to listen not just to corporations; the people affected by these transformations should also get a say.
Acting Swiftly
OpenAI’s ChatGPT was released to the public just over six months ago. It can already generate essays, pass professional exams, and even plan people’s holidays. In other words, large language models are evolving at a phenomenal rate, and a growing number of AI experts are raising concerns over the technology’s potential for harm. Meanwhile, the EU’s Artificial Intelligence Act will not come into force until 2025 at the earliest, which some warn could be “way too late”.