Scientists Admit Losing Control of Artificial Intelligence

Written by Shivali Best

From driving cars to beating chess masters at their own game, computers are already performing incredible feats.

And artificial intelligence is quickly advancing, allowing computers to learn from experience without the need for human input. But scientists are concerned that computers are already overtaking us in their abilities, raising the prospect that we could lose control of them altogether.

Last year, a driverless car created by Nvidia took to the streets of New Jersey without any human intervention. The car learned to make its own decisions by watching humans drive.

But despite creating the car, Nvidia admitted that it wasn’t sure how the car was able to learn in this way, according to MIT Technology Review. The car’s underlying technology was ‘deep learning’ – a powerful tool loosely modelled on the neural wiring of the human brain.

Deep learning is used in a range of technologies, from tagging your friends on social media to allowing Siri to answer questions. A recent report by PwC found that 38 per cent of US jobs – roughly four in 10 – are at risk of being replaced by robots and artificial intelligence by the early 2030s.

The analysis revealed that 61 per cent of financial services jobs are at risk of a robot takeover. The figure for US jobs overall compares with 30 per cent in the UK, 35 per cent in Germany and 21 per cent in Japan. Deep learning is also being used by the military, which hopes to use it to steer ships, destroy targets and control deadly drones.

There is also hope that deep learning could be used in medicine to diagnose rare diseases. But if its creators lose control of the system, we’re in big trouble, experts claim. Speaking to MIT Technology Review, Professor Tommi Jaakkola, who works on applications of deep learning, said: ‘If you had a very small neural network [deep learning algorithm], you might be able to understand it.’

‘But once it becomes very large, and it has thousands of units per layer and maybe hundreds of layers, then it becomes quite un-understandable.’
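To give a sense of the scale Professor Jaakkola describes, here is a minimal sketch (the layer sizes are illustrative assumptions, not figures from the article) counting the learned parameters in a stack of equally sized, fully connected layers:

```python
# Rough parameter count for a fully connected neural network:
# each layer of n units connects to the next with n*n weights,
# and every unit carries one bias term.

def parameter_count(units_per_layer: int, num_layers: int) -> int:
    """Count weights and biases in a stack of equally sized dense layers."""
    weights_per_connection = units_per_layer * units_per_layer
    biases = units_per_layer
    return (num_layers - 1) * weights_per_connection + num_layers * biases

# A "very small" network can be inspected by hand...
small = parameter_count(units_per_layer=10, num_layers=3)    # 230 parameters

# ...but "thousands of units per layer and maybe hundreds of layers" cannot.
large = parameter_count(units_per_layer=1000, num_layers=100)  # ~99 million

print(small, large)
```

With 10 units across three layers there are only a few hundred numbers to reason about; at 1,000 units across 100 layers the count approaches 100 million, which is why researchers say such models become ‘quite un-understandable’.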

This is concerning, considering deep learning could soon be used to control deadly military weapons and cars. In a recent study, a computer was tasked with predicting disease by analysing patient records.


Dr Joel Dudley, who led the project at New York’s Mount Sinai Hospital, said: ‘We can build these models, but we don’t know how they work.’ In the hopes of staying in control of these powerful systems, many of the world’s largest technology firms created an ‘AI ethics board’ in 2016.

Researchers with Alphabet, Amazon, Facebook, IBM, and Microsoft teamed up to create the new group, known as the Partnership on Artificial Intelligence to Benefit People and Society, to develop a standard of ethics for the development of AI. Its founding tenets include:

1. We will seek to ensure that AI technologies benefit and empower as many people as possible.

2. We will educate and listen to the public and actively engage stakeholders to seek their feedback on our focus, inform them of our work, and address their questions.

3. We are committed to open research and dialog on the ethical, social, economic, and legal implications of AI.

4. We believe that AI research and development efforts need to be actively engaged with and accountable to a broad range of stakeholders.

5. We will engage with and have representation from stakeholders in the business community to help ensure that domain-specific concerns and opportunities are understood and addressed.