Why Does the Automation Say One Thing but Do Something Else? Effect of Feedback Consistency and Error Timing on Trust in Automated Driving
Abstract
Driving automation profoundly changes the role of the human operator behind the steering wheel. Trust is required for drivers to engage such automation, and this trust also appears to determine drivers' behavior during automated drives. On the one hand, first experiences with automation, whether positive or not, are essential for drivers to calibrate their level of trust. On the other hand, automation that provides feedback about its own capability to handle a specific driving situation may also help drivers calibrate their trust. The reported experiment was undertaken to examine how the combination of these two effects would affect the driver's trust calibration process. Four groups of drivers were randomly formed. Each experienced either an early (i.e., directly after the beginning of the drive) or a late (i.e., directly before the end of it) critical situation that was poorly handled by the automation. In addition, during an automated drive in a driving simulator, they experienced either consistent continuous feedback (i.e., feedback that always correctly informed them about the situation) or inconsistent feedback (i.e., feedback that sometimes indicated dangers when there were none). Results showed that the early, poorly handled critical situation had an enduring negative effect on drivers' trust development compared to drivers who did not experience it. Although correctly understood, the inconsistent feedback had no effect on trust during properly managed situations. These results suggest that the performance of the automation has the strongest influence on trust, and that the automation's feedback does not necessarily have the ability to influence drivers' trust calibration during automated driving.
Domains
Cognitive Sciences
Origin: Publication funded by an institution