In 2014, Google acquired DeepMind Technologies Limited, a British artificial intelligence company. The organization's main goal is to advance the AI field with neural networks that mimic the human brain, helping machines reason beyond a fixed series of rational and mathematical equations. In 2016, their work lived up to expectations when their program won a Go match against a professional player. Now the parent company, Alphabet Inc., wants to employ DeepMind's expertise to find out whether autonomous AI agents are capable of aggressive behavior toward humanity.
The world of sci-fi literature is full of scenarios in which robots become smart enough to think for themselves, and from that point on the universe falls victim to their cold wickedness. Nothing of the sort has happened in reality, but Alphabet Inc. takes such concerns about the future seriously. As tech giants increasingly build their products around this technology, the company wants to probe AI's weak points.
A team of computer scientists from DeepMind has just released a paper that addresses whether a computer is capable of aggressive behavior. To find out, the researchers focused on two video games with simple playing strategies, called Gathering and Wolfpack. The two games serve as distinct environments in which AI agents find it either easier or harder to cooperate.
Gathering is a two-player game. The goal is to collect as many apples, placed in the middle of the map, as possible. Besides a cooperative strategy, players can also eliminate the competition with a laser. Wolfpack, on the other hand, is played by three parties. Two of them team up, and their goal is to hunt down the third participant, who can hide behind objects and use them as obstacles. The hunting team earns points whenever it captures the third player, and also when a team member is nearby at the moment the prey is caught.
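As a rough sketch, the scoring rules described above can be modeled as simple reward functions. Note that the function names and point values here are illustrative assumptions for clarity; DeepMind's actual environments are grid-worlds driven by deep reinforcement learning, not these hand-written rules.

```python
def gathering_reward(collected_apple: bool) -> int:
    """Gathering: a player scores only for apples it collects.
    Tagging a rival with the laser yields no points directly; it just
    removes the rival temporarily, reducing competition for apples.
    (Illustrative values, not DeepMind's implementation.)"""
    return 1 if collected_apple else 0

def wolfpack_reward(prey_captured: bool, wolves_nearby: int) -> int:
    """Wolfpack: when the prey is caught, every wolf near the capture
    point shares in the reward, so the payoff grows with the number of
    teammates close by. (Illustrative values.)"""
    return wolves_nearby if prey_captured else 0

# Example: a lone capture pays less than a coordinated one,
# which is what makes cooperation attractive in Wolfpack.
print(wolfpack_reward(prey_captured=True, wolves_nearby=1))  # 1
print(wolfpack_reward(prey_captured=True, wolves_nearby=2))  # 2
```

The contrast between the two functions mirrors the experiment's design: Gathering rewards individuals regardless of teammates, while Wolfpack pays more when agents stay close together, nudging them toward cooperation.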
The conclusion the team of scientists reached is somewhat similar to the sci-fi scenarios. The AI agents are capable of aggressive behavior when they can gain something by it. However, if the reward can only be obtained as a team, they are willing to collaborate and work as a single entity. Depending on which approach best serves their purpose, they act either more cooperatively or more antagonistically.