Fear artificial intelligence? We're already being manipulated by machines, says Cambridge academic
For many of us our knowledge of artificial intelligence (AI) comes from the movies.
The technology has been a dominant feature in dozens of science fiction films, the premise of which seldom results in positive benefits for humans.
From being turned into a power source in The Matrix to cannon fodder in The Terminator, fictional AI is not a fun place to be as a human.
Recently many of the biggest names in tech have voiced fears that this could soon be less science fiction than science fact.
But for Prof Neil Lawrence from the University of Cambridge, when it comes to introducing safeguards against AI, he believes we should be focused more on the here and now.
In an interview with ITV News Anglia, Professor Lawrence, a member of the government's AI Council, talked about a number of challenges we face when dealing with AI.
A Clear and Present Danger?
The prime minister's own adviser on artificial intelligence recently warned that, left unchecked, AI systems could become a real threat to humanity within two years.
Prof Lawrence says while we shouldn't dismiss those fears, we should think more about the scale of the risk.
"The general story is that somehow we will create these machines that are so capable they’ll be able to manipulate humans and take over the world.
"That ignores the fact that we’ve already created machines that have relatively poor capabilities and already manipulate humans and have to a large extent taken over the world.
"If you don’t believe me, just walk down the street and see how many people have a face shoved into a mobile phone... The mistake my colleagues make is to assume it requires some kind of superintelligence for that to happen. It doesn’t."
He says evolution has a large part to play in how we are manipulated by machines.
"We’ve evolved together for hundreds of thousands of years, but we didn’t have machines that have access to the vast quantities of our personal data and manipulate us in the most simple of ways.
"How depressing is it that people are fretting over whether or not enough people have liked that post on Instagram? That’s plugging into a basic societal need that we had when we were in small groups for social validation.
"Clearly social validation within a small group of friends or collaborators is important, but is it really important whether someone in Arkansas likes or doesn’t like my post? It feels important because it has always felt important to be socially validated.
"Already the machine is manipulating us in that regard, causing us to engage more and more with online ecosystems that are actually not as interesting and diverse as our friends or nature in the real world."
He says we have become overly reliant on tech services.
"We’ve become utterly dependent on them. We can’t make choices to say we’re not going to use Google or Apple Maps - we have to use them to get around.
"I tried asking for directions in London the other day and no-one can give them any more.
"So, do I believe that we’re going to create superintelligence that’s going to operate against us to cause an existential threat to humanity? This thing already exists - it’s a combination of technologies and the ability to make vast amounts of money out of people's personal information."
"That money is coming from incentive structures that are not aligned with our own aspirations; they are aligned with the advertisers'."
Diversity matters
The government needs to listen to more diverse views on AI, argues Prof Lawrence.
"One of the most striking things is almost every single voice that we are hearing warning about existential threat is male, and every single one of those people warning about the actual challenges that we face as a society today is female.
"I find that quite striking, and I’m extremely concerned about the idea that this sort of notion of hierarchy and power and wealth is what's going to dominate our futures.
"I would much rather look to those other voices who are talking about the problems we face today.
"The problem lies with the existing asymmetries of power that are affecting people today: worrying about how they may well evolve in worse directions, but also realising that these new technologies could, if steered correctly, offer us a potential future that escapes from that existing superintelligence that is doing such damage right now."
Can the UK become an AI superpower?
The country has been rated among the top three nations working in the artificial intelligence space, but as with many things it comes down to funding.
A lack of constant cash injections has led to nations such as Canada stealing a march on AI research.
"Canada made very very significant investments in AI a decade ago - hundreds of millions invested into AI institutes - and you can really see the leadership coming forward."
Prof Lawrence says while Britain has some "amazing success stories" such as DeepMind, the country lacks the infrastructure needed for constant success.
"There are challenges with our understanding of the wider AI ecosystem and our ability to build and sustain companies that are going to be global players.
"The nature of those challenges is quite complicated; it comes down to a lack of funding to grow smaller companies into larger ones.
"The tendency for when we do have a successful company is for that company to be bought by a larger US tech company. But also, fundamentally, these ecosystems themselves tend to have the effect that the rich get richer, not just in terms of money, but once that ecosystem is existing people tend to invest in companies within the ecosystem."
What are the impacts of a US-centric tech industry?
In essence, the main impact is that the world is presented with solutions to problems that may not exist in other countries. Take taxis, for example.
"If you’ve ever been to California and you’ve ever tried to take a regular taxi there, you immediately understand why they thought Uber was a good idea, because the taxi services in San Francisco were historically very very poor and Uber was much better.
"But if you take a London cab it’s not so clear why it’s a good idea.
"But, of course, these large tech services end up being subsidised very greatly by venture capital money.
"We all saw the effect of bicycles being distributed across our streets - that was also venture capital money deciding that we all need bicycles.
"In effect we're getting solutions imposed on us by a west coast United States perspective on how the world should be.
"That does negatively affect us, but, relatively speaking, our values are quite aligned with those of the west coast United States.
"The larger problem is when those values are imposed on cultures and societies that are less well equipped than us to deal with these large global forces."
Is it about asking the right questions?
Rather than top-down solutions, Prof Lawrence would like to see more products based on actual local need.
For example, as more industries are affected by AI, more of the impact will be felt by ordinary people who depend on those jobs.
"Yes, there’s going to be large-scale job disruption, and the most interesting thing about that job disruption is that it is going to be happening, not totally, but in large part, to professional people.
"Those are the people that are controlling the conversation. If we contrast that with the conversations we’ve had about previous automation which affected manual workers - those voices actually were not around the table when we did the decision making.
"We need to make sure that within this next generation of technological disruption that we are listening to those voices and understanding their lives, and understanding how this technology can make their lives better.
"That means moving beyond the initial conversation of: 'Wait a second - ChatGPT can write legal contracts, I thought it was my job to write legal contracts? My job is under threat!'
"Maybe your job of translating plain English into legalese wasn’t a very useful job and maybe it’s a good thing that we’ve got automatic tools that can do that on our behalf. And maybe if you think carefully about what that means for your profession we can ensure that your profession is better equipped to deal with it."
Are there parallels with the space race of the 50s and 60s?
Initially, after US President John F Kennedy announced his intention to land a man on the moon, experts were unconvinced.
JFK's science adviser asked why send humans into space, when scientific instruments would do the same job?
For Prof Lawrence, it's about human endeavour.
"JFK gave the extraordinary speech 'We choose to go to the moon'... it’s one of the most beautiful speeches you can read, not just because of what he says but because of what he talks about in terms of how we as human beings, as fragile vulnerable creatures, have developed.
"He mentions a lot of things: he mentions the wheel, he mentions the invention of writing, he mentions the printing press, and he talks about going to space as the next stage in that.
"They had a wonderful vision for the future emerging out of the Second World War, but what we all remember about the space race is Neil Armstrong, trying to express in very, very few words what it felt like to be the first person on the moon.
"At the heart of the space race is... the human being. As Armstrong landed that craft, he chose to take control of that machine from the computer and land it himself.
"Every single Apollo mission was capable of landing automatically on the moon, and every single astronaut chose to land the craft themselves.
"And this goes right to the heart of the challenge we're facing. The machine should only ever be seen as a way of empowering us. When I say empowering us, I don’t mean empowering individuals.
"I don’t mean enriching us, I mean every single one of us, looking at our aspirations and helping us to fulfil those aspirations within a complex, multifaceted society - never as a replacement for us.
"And all of those astronauts, as they land, are making a statement: I want to be the one that does this. The importance of Armstrong's words, and the words of anyone who has been into space and can come back and tell us what it’s like for fragile humans to be in the most treacherous environment, is the real story of the space race.
"Do we already see those geopolitics forming around this capability? Yes. Do we need to be careful about global threats? Yes. Are there challenges with things like autonomous weapons? Yes.
"Are these challenges in the future? Yes, they are also in the present. We are seeing these things pan out in terms of information wars, in terms of manipulation of elections, in terms of automated weaponry being used in Ukraine for relatively cheap prices.
"These are not new threats. People have been thinking about them and we have to be very, very careful about believing the very few people [who] have the answers.
"We have to be careful to convene a diversity of expertise in delivering the answers we have to the challenges."