We read a lot about machines “taking over”, but what lies behind this fear? Could it be a lack of trust? We believe in the power of empathy to foster bonds, and it seems a university team in the US is taking this thinking digital. For us, it is an important way in which creative thinking and our humanity are being championed for future benefit.
A team at Purdue University’s School of Mechanical Engineering in Indiana, US, is developing a series of ‘classification models’ to test how much humans trust machines.
During trials, the team used two types of “trust sensor” model, measuring brainwave patterns and electrical changes in the skin. These established physiological indicators allowed them to gauge levels of trust.
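To give a flavour of what a “trust sensor” classifier might look like, here is a purely illustrative sketch. The feature names (skin conductance, brainwave engagement), the weights, and the logistic form are all invented for demonstration; the Purdue team’s actual models and parameters are not described in this article.

```python
import math

# Toy "trust sensor" classifier -- illustrative only.
# Assumption (not from the research): lower skin conductance (calmer)
# and steadier brainwave engagement correspond to higher trust.
# All weights below are arbitrary, hand-set values.

def trust_score(gsr: float, eeg: float) -> float:
    """Map two normalised physiological readings (0..1) to a
    trust probability using a hand-set logistic model."""
    z = 2.0 * eeg - 3.0 * gsr + 0.5
    return 1.0 / (1.0 + math.exp(-z))

def classify(gsr: float, eeg: float, threshold: float = 0.5) -> str:
    """Label a reading as 'trusting' or 'distrusting'."""
    return "trusting" if trust_score(gsr, eeg) >= threshold else "distrusting"

print(classify(0.2, 0.8))  # calm, engaged reading -> "trusting"
print(classify(0.9, 0.2))  # agitated reading      -> "distrusting"
```

In a real system the weights would be learned from labelled sensor data rather than set by hand, and the raw signals would need filtering and normalisation first.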
The researchers eventually hope to design intelligent machines capable of changing their behaviour to enhance human trust.
As the researchers point out, humans are “increasingly required to interact with intelligent systems”. Any misunderstanding could stall this process, and with it our ability to navigate the world.
The benefits of understanding how artificial intelligence (AI) works go both ways. If we trust machines to do their jobs, we know when to override and check, as well as when not to.
A successful outcome of the project would open the door to scaling up our use of technology. Currently our interactions are one to one, but the research team sees a future featuring “human-agent collectives” made up of multiple machines.