This one takes a moment to sink in. At first it looks like any other game of hide-and-seek, but once you realise the ‘bots’ were let loose with no knowledge of their environment and learned to play and strategise entirely on their own, it’s truly extraordinary. Advances like this are like peering into a world of infinite possibility, with AI not only learning from scratch but learning from itself. The gap between computers and humans is ever narrowing.

It’s happening, people. Artificial intelligence is using tools. Researchers at OpenAI, the artificial intelligence lab, have shown what happened when they trained AI agents capable of learning from their mistakes while playing hide-and-seek.

The AI played hide-and-seek in two teams in an environment that featured boxes, ramps, and walls that could all be moved or locked in place. Once one of the bots locked an item in place, it could not be moved by a bot from the other team.
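That lock rule can be sketched in a few lines of Python. This is a hypothetical toy model for illustration only, not OpenAI's actual environment code; the class and method names are invented here:

```python
# Toy model of the lock mechanic: an object locked by one team
# cannot be moved by anyone, and only the locking team can unlock it.
class WorldObject:
    def __init__(self, name):
        self.name = name
        self.locked_by = None  # team name, or None when unlocked

    def lock(self, team):
        # Any team may lock an object that is currently unlocked.
        if self.locked_by is None:
            self.locked_by = team

    def unlock(self, team):
        # Only the team that locked the object may unlock it.
        if self.locked_by == team:
            self.locked_by = None

    def can_move(self):
        # A locked object stays put until its owners unlock it.
        return self.locked_by is None

ramp = WorldObject("ramp")
ramp.lock("hiders")          # hiders lock the ramp before hiding
print(ramp.can_move())       # False: seekers can't budge it
ramp.unlock("seekers")       # no effect: wrong team
print(ramp.can_move())       # False
ramp.unlock("hiders")        # the locking team can release it
print(ramp.can_move())       # True
```

In the game this simple asymmetry is what drives the arms race described below: whichever team locks an object first controls it for the rest of the round.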

So early on, the bots learned to create little forts from these boxes, ramps, and walls to hide from the other team, often working together to build them. Eventually, the seeking team learned it could use ramps to get over the fort walls. Next, the hiding team started locking the ramps before hiding. Then the seeking team learned to use the locked ramps to get on top of boxes and ‘surf’ them to the fort, jumping in. As you might expect, the hiding team then also learned to lock the boxes.

This wild development is explained in a new paper released this month by OpenAI, which was co-founded by Elon Musk and others in December 2015.

Continue reading this article on Inverse.

Watch a video of the game on Digg.

Posted by: Sophie Sabin
