We all know we act differently when somebody is watching, and even more so when somebody calls us out. But what about the effect on our behaviour when the observer is non-human, yet still a peer? Perhaps it is the perceived impartiality of robots that makes us take another look at our own actions. This study from Yale University investigates.

Three people and a robot form a team playing a game. The robot makes a mistake, costing the team a round. Like any good teammate, it acknowledges the error.

“Sorry, guys, I made the mistake this round,” it says. “I know it may be hard to believe, but robots make mistakes too.”

This scenario occurred multiple times during a Yale-led study of robots’ effects on human-to-human interactions.

The study, published on March 9 in the Proceedings of the National Academy of Sciences, showed that humans on teams that included a robot expressing vulnerability communicated more with each other and later reported a more positive group experience than people teamed with silent robots or with robots that made only neutral statements, such as reciting the game’s score.

“We know that robots can influence the behaviour of humans they interact with directly, but how robots affect the way humans engage with each other is less well understood,” said Margaret L. Traeger, a Ph.D. candidate in sociology at the Yale Institute for Network Science (YINS) and the study’s lead author. “Our study shows that robots can affect human-to-human interactions.”

Social robots are becoming increasingly prevalent in human society, she said, and people are encountering them in stores, hospitals and other everyday places. That makes it important to understand how they shape human behaviour.

“In this case,” Traeger said, “we show that robots can help people communicate more effectively as a team.”

Continue reading this article on Yale News.

Posted by: Sophie Sabin
