Artificial intelligence could lead to the 'extinction' of humanity, experts warn

OpenAI boss Sam Altman. Credit: AP

Artificial Intelligence (AI) experts fear the technology could put the future of humanity at risk.

Some of the biggest names in the development of AI have said world leaders should be "mitigating the risk of extinction".

In a short statement, business and academic leaders said the risks from AI should be treated with the same urgency as "pandemics or nuclear war".

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” they said.

The statement was organised by the Centre for AI Safety, a San Francisco-based non-profit which aims “to reduce societal-scale risks from AI”.

It said the use of AI in warfare could be “extremely harmful” as it could be used to develop new chemical weapons and enhance aerial combat.

The list of signatories included dozens of academics, senior executives at companies such as Google DeepMind, the co-founder of Skype, and the founders of AI company Anthropic.

AI has entered the global consciousness after several firms released new tools, such as ChatGPT, that allow users to generate text, images and even computer code simply by asking for what they want.

Who signed the letter and what warnings have they given before?

Geoffrey Hinton. Credit: AP

The letter was signed by leading AI figures including Geoffrey Hinton, who is sometimes nicknamed the “Godfather of AI”.

Earlier this year he quit Google, warning of how dangerous the technology could become.

For more than a decade he helped develop the software that paved the way for AI systems such as ChatGPT.

He previously told the New York Times that he regretted his work, warning that “bad actors” would use new AI technologies to harm others and that the technology could spell the end of humanity.

OpenAI CEO Sam Altman speaking before US Congress. Credit: AP

Sam Altman and Ilya Sutskever, respectively the chief executive and a co-founder of ChatGPT developer OpenAI, also signed the letter.

Just weeks ago, Mr Altman told US politicians that government intervention "will be critical to mitigate the risks of increasingly powerful" AI systems.

Speaking before Congress, Mr Altman said: “As this technology advances, we understand that people are anxious about how it could change the way we live. We are too.”

He proposed the formation of a US or global agency that would license the most powerful AI systems and have the authority to “take that license away and ensure compliance with safety standards.”

