If robots are taking over the world, what repercussions does this have for the human brain? LSE Professor of Law Andrew Murray is tackling this question in his research.
Science fiction and science fact are getting closer together if this recent news headline is any indication: “European politicians vote to rein in the robots”.
The vote, calling for tighter regulations on all forms of artificial intelligence, reflects widespread fears that super intelligent machines could harm humanity if they’re not programmed in the right way.
Bank of England Governor Mark Carney has already gone public with his warnings that robots could steal 15 million jobs from British workers in the years ahead. Those concerns have now prompted European MEPs to call for individuals who lose their jobs to robots to be paid a special income.
But fears of redundancies aside, what are the longer-term implications for humans? Is there a danger our brains will atrophy if we delegate an increasing number of tasks to artificial intelligence?
Andrew Murray, an LSE Professor of Law with expertise in new media and technology, says there are signs this is already happening.
“For some time we have been outsourcing decision making to relatively dumb devices (Apple’s ‘Siri’ and Amazon’s ‘Alexa’ for example). The more and more we ask these devices to do things, the lazier our brains are going to get,” Professor Murray says.
“The implications for humans are enormous and there is a very real danger we will regress slightly. This is a major concern that we need to start thinking about. We also need to think about what happens when the robots become smarter than us and when they are aware of their superiority.”
The 2005 non-fiction book The Singularity is Near – which traces the point at which machine intelligence overtakes human intelligence – has never been more relevant, Professor Murray adds.
“As we replace more human organs with mechanical devices, such as heart valves and brain chips, there will come a time when we will need to address what it means to be a cyborg. Once upon a time this was a topic that occupied the minds of science fiction writers. However, it’s now openly discussed as a serious issue among academics, robotics experts, scientists, lawyers and philosophers.”
But reining back from that, we are already paying the price for our increasing reliance on technology, Professor Murray claims.
An over-reliance on automated systems contributed to the 2013 crash of Asiana Flight 214 in San Francisco, according to officials who investigated the incident.
Pilots were too slow to react when they made a poor landing approach, hitting a seawall and sending the plane’s fuselage skidding down the tarmac. They had not trained sufficiently to deal with the emergency, authorities claimed, relying on automated systems that they did not fully understand.
“The fact was that most of their flying hours had been logged under auto pilot rather than being in full control,” Professor Murray adds.
The counter argument is that most studies show that pilot and driver error are responsible for more than 90 per cent of air and road traffic accidents.
“In general, if you remove humans from the equation, you get a much safer environment. That applies to hospitals as well, where human error in relation to medications and botched operations accounts for a number of deaths.”
The collective evidence shows that the world would be safer in the hands of robots, but humans would lose some critical cognitive skills, Professor Murray says.
At an elementary level, machines can already multi-task more effectively than humans, he claims, and are superior when it comes to algorithmic problem solving.
“Machines lag behind humans when it comes to emotional intelligence, intuition, empathy and creativity but there is no reason to believe they also cannot learn these things over time.”
Experts are divided over how long it will take for artificial intelligence to surpass human brains, with estimates spanning a range of some 70 years.
Ray Kurzweil, author of The Singularity is Near, was only one year out in predicting that a computer would beat a world champion at chess by 1998 (Deep Blue won in 1997). He estimates that computer and human intelligence will be indistinguishable by 2029.
The million-dollar question is whether humans will ever be rendered obsolete. Professor Murray says that could happen, but only if humans allow it.
“Machines cannot biologically replicate so humans have a distinct advantage in that sense,” he adds.
“Whether we can co-exist happily with them, though, depends upon us. If we do not treat our creations with respect, the dystopia of the Terminator films — although highly unlikely — becomes more realistic.
“A commonly made argument that machines should have no rights is similar to one made by slave plantation owners in the 18th and early 19th centuries. This led ultimately to the US Civil War. Treat them well and recognise their rights and we can co-exist happily.”
Source: LSE, London School of Economics and Political Science