As intelligent machines begin muscling into daily life, a big open question is how much people will trust them to take over critical tasks like driving, elder or child care, and even military operations.
Why it matters: Calibrating a human's trust to a machine's actual capability is crucial, as we've reported: things go wrong when a person places too much or too little trust in a machine. Now researchers are searching for ways to monitor trust in real time, so a robot's behavior can be adjusted on the spot to match it.
The trouble is that trust is inexact. You can't measure it like a heart rate. Instead, most researchers examine people's behaviors for evidence of confidence.
- But an ongoing project at Purdue University found more accurate indicators by peeking under the hood at people's brain activity and skin response.
- In an experiment whose results were published in November, the Purdue team tracked how participants' brain activity and skin response changed when they were confronted with a virtual self-driving car whose sensors were faulty (a rough sketch of that kind of real-time readout follows).
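For a sense of how that real-time readout might be wired up, here is a minimal sketch in Python. The feature choices, window sizes, classifier and labels are illustrative assumptions, not the Purdue team's published model; the point is only the pattern of turning windows of brain-activity and skin-response data into a running probability that the user trusts the machine.

```python
# Hypothetical sketch: estimating a user's moment-to-moment trust from
# physiological signals, in the spirit of the Purdue approach (brain activity
# plus skin response). All features, labels and numbers here are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def window_features(eeg_window, gsr_window):
    """Reduce one time window of raw signals to a small feature vector."""
    return np.array([
        eeg_window.mean(),          # average EEG amplitude in the window
        eeg_window.std(),           # EEG variability
        gsr_window.mean(),          # skin-conductance level (a proxy for arousal)
        np.diff(gsr_window).max(),  # sharpest skin-response rise in the window
    ])

# Stand-in training data: 200 windows of simulated EEG/GSR, each labeled
# 1 = "participant reported trusting the car" or 0 = "did not trust it".
X = np.stack([
    window_features(rng.normal(size=256), rng.normal(loc=5.0, scale=1.0, size=256))
    for _ in range(200)
])
y = rng.integers(0, 2, size=200)  # placeholder for self-reported trust labels

clf = LogisticRegression().fit(X, y)

# At runtime, each new window of sensor data gets scored; the machine can
# react whenever the estimated trust probability drops too low.
new_window = window_features(rng.normal(size=256), rng.normal(loc=5.0, scale=1.0, size=256))
trust_prob = clf.predict_proba(new_window.reshape(1, -1))[0, 1]
print(f"Estimated probability the driver currently trusts the car: {trust_prob:.2f}")
```

In a real system, the training labels would come from what participants actually reported during the experiment, and the vehicle would react whenever that probability dips below a threshold.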
Understanding a person's attitude toward a bot — a car, factory robot or virtual assistant — is key to improving cooperation between human and machine. It allows a machine to "self-correct" if it's out of sync with the person using it, Neera Jain, a Purdue engineering professor involved with the research, tells Axios.
Some examples of course-correcting robots (with a rough code sketch of the idea after the list):
- An autonomous vehicle that would give a particularly skeptical driver more time to take control before reaching an obstacle that it can't navigate on its own.
- An industrial robot that reveals its reasoning to boost the confidence of a worker who might otherwise hit a manual override and potentially act less safely.
- A military reconnaissance robot that gives an overly trusting soldier extra information about the uncertainty in its report, to head off harm from relying on it blindly.
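A minimal sketch of that adjustment step, with made-up thresholds and lead times rather than anything taken from the research: once the machine has a trust estimate, it maps the number onto concrete behavior, like how early a car warns a skeptical driver or whether a factory robot explains itself.

```python
# Hypothetical "self-correcting" policies driven by an estimated trust level.
# None of these numbers come from the Purdue work; they only show the mapping
# from a trust estimate to machine behavior.

def takeover_lead_time_s(estimated_trust: float) -> float:
    """More skeptical drivers get an earlier warning to take the wheel."""
    base_lead_time = 6.0      # seconds of warning a fully trusting driver gets
    extra_for_skeptics = 6.0  # additional seconds granted as estimated trust falls
    estimated_trust = min(max(estimated_trust, 0.0), 1.0)
    return base_lead_time + extra_for_skeptics * (1.0 - estimated_trust)

def should_explain_reasoning(estimated_trust: float, threshold: float = 0.4) -> bool:
    """A factory robot might surface its reasoning only when trust runs low."""
    return estimated_trust < threshold

if __name__ == "__main__":
    for trust in (0.9, 0.5, 0.2):
        print(f"trust={trust:.1f}: warn {takeover_lead_time_s(trust):.1f}s ahead, "
              f"explain reasoning: {should_explain_reasoning(trust)}")
```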