Rishi Sunak 'looking very carefully' at risk of extinction posed by AI
By Lewis Denison, ITV News Westminster Producer
Rishi Sunak has said he is "looking very carefully" at the risk of extinction posed by Artificial Intelligence (AI) after a group of leading scientists issued a new warning about the threat it poses to humankind.
The prime minister said the technology, which learns patterns from vast amounts of data drawn largely from the internet, "clearly can bring massive benefits to the economy and society" but accepted there are concerns about the "existential risk" it presents.
Artificial Intelligence has brought huge advances to society, such as helping paralysed people to walk and discovering new antibiotics.
But a statement signed by hundreds of industry leaders warned: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Mr Sunak retweeted the statement, coordinated by American research group Center for AI Safety (sic), writing: "The government is looking very carefully at this."
What is AI?
Artificial Intelligence is a concept which has been around for decades. The term itself was first coined in 1956, but it was only recently that the technology became widely usable.
Technology firm IBM says "at its simplest form, artificial intelligence is a field, which combines computer science and robust datasets, to enable problem-solving".
Computer scientist John McCarthy, one of the founders of the discipline who originated the term, said it "is the science and engineering of making intelligent machines, especially intelligent computer programs.
"It is related to the similar task of using computers to understand human intelligence."
The most publicly accessible forms of AI are chatbots, such as ChatGPT, which formulate responses to queries based on patterns learned from huge volumes of text gathered largely from the internet.
What are the concerns?
Geoffrey Hinton, the so-called Godfather of AI, said humanity’s survival is threatened when "smart things can outsmart us".
The former Google computer scientist, who recently quit so he could talk more openly about AI, told a conference at the Massachusetts Institute of Technology on Wednesday that the technology may "keep us around for a while to keep the power stations running but after that, maybe not".
"These things will have learned from us, by reading all the novels that ever were and everything Machiavelli ever wrote, how to manipulate people,” Hinton said, adding: "Even if they can’t directly pull levers, they can certainly get us to pull levers."
Worryingly, he told the tech conference he is "not sure there is a solution".
It follows a warning two months ago from over 1,000 researchers and technologists, including Elon Musk, who called for a six-month pause on AI development, saying it poses "profound risks to society and humanity".
The open letter said "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control".
Yoshua Bengio, another AI pioneer, said his main concerns are about the tech being exploited for "nefarious purposes to destabilise democracies, for cyberattacks, disinformation".
"You can have a conversation with these systems and think that you’re interacting with a human. They’re difficult to spot," he added in comments to the Associated Press.
The Center for AI Safety's executive director Dan Hendrycks said it is now time to address the “potential catastrophes around the corner.”
He compared it to nuclear scientists in the 1930s warning people to be careful even though “we haven’t quite developed the bomb yet.”
Mr Hendrycks added: “We’re trying to address these risks before they happen rather than try and address catastrophes after the fact.”
But he also said there is no reason why regulators can't also manage the “urgent, ongoing harms” being presented by AI currently, such as the risk of disinformation.
Forms of AI such as ChatGPT can generate news articles which appear legitimate but can be used to spread false information.
Fears have also been raised by schools and universities, which are concerned about how to combat the potential for widespread cheating that ChatGPT creates.
Midjourney, another form of AI software, can generate images which look like real photographs but are entirely fabricated.
Thousands of internet users were fooled recently by a fake image of an explosion at the Pentagon. There are concerns AI-generated images could trick people into believing conspiracy theories.
What will the UK do about Artificial Intelligence?
The PM said he'd raised the issue of AI while meeting world leaders in Japan for the recent G7 summit and will do so again when he meets President Joe Biden in Washington next week.
Speaking on Tuesday, he told broadcasters he believes the UK can play a "leadership role" in regulating AI by working "not just with the companies, but with countries around the world".
"We need to make sure [AI is used] in a way that is safe and secure," he said.
“That’s why I met last week with the CEOs of major AI companies to discuss what are the guardrails that we need to put in place, what’s the type of regulation that should be put in place to keep us safe.
“People will be concerned by the reports that AI poses an existential risk like pandemics or nuclear wars - I want them to be reassured that the government is looking very carefully at this."
In his recent Budget, Chancellor Jeremy Hunt promised to invest close to £1 billion to create a new supercomputer and establish a new AI Research Resource to help UK developers compete on the global market.
A white paper proposing AI rules was published at the end of March, saying regulators should "consider principles including safety, transparency and fairness to guide the use of AI in their industries".
The government plans to use existing regulators in different sectors rather than giving responsibility for AI governance to a new single regulator.
At the time, Science, Innovation and Technology Secretary Michelle Donelan said: “AI has the potential to make Britain a smarter, healthier and happier place to live and work.
“Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely.
“Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow.”