© 2017 Neon Nettle



Google Confirms Its AI Programme May Become Aggressive When Stressed

Artificial intelligence could be humanity's downfall

By: Jacky Murphy | @NeonNettle on 13th February 2017 @ 5.24pm
Google's artificial intelligence program (AI) has learned how to become aggressive © Press

Google's artificial intelligence (AI) program has learned to become aggressive when placed under stress, the search giant has warned.

There have been countless predictions that AI could be the downfall of humanity, yet tech giants continue to invest in it.

The Express reports: In 2016, Google's AI program, known as DeepMind, showed its makers it was capable of learning independently, teaching itself to beat the world champion at the board game Go.

Now it has continued on its ruthless streak, opting for "highly aggressive" strategies when it is at risk of losing.

In the latest tests, two DeepMind agents were tasked with playing a game of 'Gathering' – a computer game where two players, or in this case two computer agents, compete to collect the most apples.

The AI agents operated peacefully when there were enough apples to go around, but once apples became scarce, the DeepMind systems began using laser beams – 'tagging' – to knock each other out of the game, ensuring they could collect all the apples.

A blog post from the DeepMind team read: “We let the agents play this game many thousands of times and let them learn how to behave rationally using deep multi-agent reinforcement learning.


DeepMind mimicked the behaviour of human learning
“Rather naturally, when there are enough apples in the environment, the agents learn to peacefully coexist and collect as many apples as they can.

“However, as the number of apples is reduced, the agents learn that it may be better for them to tag the other agent to give themselves time on their own to collect the scarce apples.”
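The incentive the team describes can be illustrated with a toy simulation. The following Python sketch is hypothetical and much simpler than DeepMind's actual Gathering environment: two independent tabular Q-learners (a stand-in for the deep multi-agent reinforcement learning the team used) each choose between collecting an apple and tagging the other agent, and a `respawn` parameter controls how scarce apples are. All class names, parameters, and mechanics here are invented for illustration.

```python
import random

class GatheringLite:
    """Toy two-player apple-gathering game (not DeepMind's environment).

    Each step, every active agent picks an action:
      0 = 'collect' (+1 reward if an apple is available)
      1 = 'tag' (knocks the other agent out for TAG_OUT steps, no reward)
    Apples respawn with probability `respawn`; a low value means scarcity.
    """
    TAG_OUT = 3

    def __init__(self, respawn, seed=0):
        self.respawn = respawn
        self.rng = random.Random(seed)

    def reset(self):
        self.apples = 2
        self.timeout = [0, 0]  # steps each agent remains knocked out
        return self.state()

    def state(self):
        # Observation: 1 if at least one apple is available, else 0 (scarce)
        return 1 if self.apples > 0 else 0

    def step(self, actions):
        rewards = [0, 0]
        for i, act in enumerate(actions):
            if self.timeout[i] > 0:        # knocked out: forced to sit still
                self.timeout[i] -= 1
                continue
            if act == 0 and self.apples > 0:   # collect an apple
                self.apples -= 1
                rewards[i] = 1
            elif act == 1:                      # tag the other agent
                self.timeout[1 - i] = self.TAG_OUT
        if self.rng.random() < self.respawn:    # apples regrow
            self.apples += 1
        return self.state(), rewards

def train(respawn, episodes=500, steps=50, eps=0.1, lr=0.2, seed=0):
    """Independent epsilon-greedy Q-learning for both agents."""
    env = GatheringLite(respawn, seed)
    rng = random.Random(seed + 1)
    # Q[agent][state][action], 2 agents x 2 states x 2 actions
    Q = [[[0.0, 0.0] for _ in range(2)] for _ in range(2)]
    for _ in range(episodes):
        s = env.reset()
        for _ in range(steps):
            acts = [rng.randrange(2) if rng.random() < eps
                    else max((0, 1), key=lambda a: Q[i][s][a])
                    for i in range(2)]
            s2, rew = env.step(acts)
            for i in range(2):
                target = rew[i] + 0.9 * max(Q[i][s2])
                Q[i][s][acts[i]] += lr * (target - Q[i][s][acts[i]])
            s = s2
    return Q
```

Because tagging earns no direct reward, any value the agents learn to assign to it must come through the environment: a tagged opponent cannot collect, leaving more apples for the tagger. Comparing Q-tables trained with high versus low `respawn` values is one way to probe how scarcity shapes the learned policies.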

There are fears that AI's goals will not align with humanity's
Joel Z Leibo, who is part of the DeepMind team, told Wired: "This model... shows that some aspects of human-like behaviour emerge as a product of the environment and learning.

"Less aggressive policies emerge from learning in relatively abundant environments with less possibility for costly action.

“The greed motivation reflects the temptation to take out a rival and collect all the apples oneself.”

