Should Artificial Intelligence be regulated after warnings humanity could face 'extinction'?

AI technologies have become more commonplace in recent months. Credit: PA

Experts have warned that Artificial Intelligence (AI) technologies could pose an existential threat to humanity unless action is taken now to mitigate the risks they pose.

The caution has captured public attention and prompted a response from Prime Minister Rishi Sunak.

But how much of an immediate risk does AI represent, and what could be done to heed the warning?

ITV News explains all you need to know about the current situation.

What is AI?

AI is the intelligence demonstrated by machines, as opposed to the natural intelligence displayed by both animals and humans.

Examples of AI include face recognition software and digital voice assistants, such as Apple's Siri and Amazon's Alexa.

How could AI lead to human extinction?

AI could be weaponised, for example to develop new chemical weapons and enhance aerial combat, the San Francisco-based Centre for AI Safety says on its website.

The centre released a statement warning of the risk of extinction from AI, which was signed by industry leaders.

Other risks are also listed on its website, including AI potentially becoming dangerous if it is not aligned with human values.

Humans could become dependent on machines if important tasks are increasingly delegated to them, according to the centre.

Meanwhile, future AI could become deceptive, not out of malice but because deception could help agents achieve their goals - it may be more efficient to gain human approval through deception than to earn it legitimately.




Who has warned of the possible AI dangers facing humanity?

Two of the three "godfathers" of AI have signed the statement: Geoffrey Hinton, emeritus professor of computer science at the University of Toronto, and Yoshua Bengio, professor of computer science at the Universite de Montreal and Mila.

Dr Hinton resigned from his job at Google earlier this month, saying that, in the wrong hands, AI could be used to harm people and spell the end of humanity.

Sam Altman and Ilya Sutskever, the chief executive and co-founder respectively of ChatGPT developer OpenAI, are among other signatories.

The list also includes dozens of academics, senior bosses at Google and DeepMind, the co-founder of Skype and the founders of AI company Anthropic.

Earlier this year, more than 1,000 researchers and technologists, including Elon Musk, had signed a much longer letter calling for a six-month pause on AI development.

What can be done to regulate AI?

The Centre for AI Safety says risks posed by AI can be reduced through research, field-building and advocacy.

The AI research includes: identifying and removing dangerous behaviours; studying deceptive and unethical behaviour; training AI to behave morally; and improving its security and reliability.

Geoffrey Hinton quit his role at Google after warning of the dangers that AI could pose in the future. Credit: AP

Funding, research infrastructure and educational resources are also used by the centre to expand on its AI safety research.

In addition, it raises public awareness of AI risks and safety, provides technical expertise to inform policymaking and advises industry leaders on structures and practices to prioritise AI safety.

What is the UK doing?

Mr Sunak retweeted the Centre for AI Safety's statement on Wednesday and said the government is "looking very carefully" at it.

He said he raised it at the G7 summit and will discuss the topic again when he visits the US.

"The government is looking very carefully at this. Last week I stressed to AI companies the importance of putting guardrails in place so development is safe and secure," he tweeted.

"But we need to work together. That's why I raised it at the @G7 and will do so again when I visit the US."

Last week the PM spoke about the importance of ensuring the right "guard rails" are in place to protect against potential dangers, ranging from disinformation and national security to "existential threats".

What are the benefits of AI?

AI can perform life-saving tasks, such as analysing X-rays, scans, ultrasounds and other medical images.

The technology can additionally help doctors to identify and diagnose diseases, including cancer and heart conditions, more accurately and quickly.

One example of an AI benefit is new brain technology that helped a man, who was paralysed in a bicycle accident more than a decade ago, to stand and walk again.

Neuroscientists at the Ecole Polytechnique Federale de Lausanne (EPFL), in Switzerland, have created what they call a "wireless digital bridge", which is able to restore the connection lost between the brain and the spinal cord.

This digital bridge is a brain–spine interface which has allowed Gert-Jan Oskam to regain control over the movement of his legs, enabling him to stand, walk and even climb stairs.
