With autonomous vehicles already rolling on public roads, researchers from the University of Nottingham in the UK have used a camouflaged driver to study how pedestrians react to visual cues from oncoming cars that appear to have nobody at the wheel.
Folks trying to cross the road as the group's Nissan Leaf test car approached could be forgiven for believing it was a fully autonomous vehicle: the human driver wore clothing designed to look like a car seat, including headgear resembling a headrest, while still being able to control the vehicle.
The idea behind the study was to gauge public trust in autonomous vehicles, and to investigate different external human-machine interfaces (eHMIs) for communicating the car's intentions or driving behavior to pedestrians.
The team tried out three types of visual display, using an addressable RGB LED matrix mounted at the front of the hood and an LED strip atop the windshield.
A researcher in the driver's seat wore a "seat suit" so that pedestrians would believe the Nissan Leaf test car to be an autonomous vehicle (Image: University of Nottingham)
The first design employed the LED strip to mimic "the pupillary response of an eye: lateral movement demonstrated scanning/awareness, and blinking provided an implicit cue of the vehicle's intention to give way." A second design made use of a face and eyes on the matrix display accompanied by "humanlike language" text prompts as the car approached a pedestrian (such as "I have seen you" or "I am giving way"), while a third produced a vehicle icon and used "vehicle-centric language" to try to get the message across.
The eHMIs were programmed using an Arduino Mega microcontroller board and triggered by a team member in the rear seat via push-button controls. The test vehicle was driven around the university's campus for several days, and front and rear dashcam footage recorded the interactions of the 520 pedestrians encountered during the study period. Other researchers positioned themselves at crossing points to ask folks to complete a short survey about the whole experience.
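The study's firmware and wiring haven't been published, but the setup described above suggests something along the lines of the following Arduino-style sketch, in which rear-seat push-buttons switch the windshield strip between a scanning "eye" sweep and a blinking give-way cue. The FastLED library, pin numbers and LED count here are illustrative assumptions, not details from the study.

```cpp
// Illustrative sketch only: the FastLED library, pin assignments and LED
// count are assumptions for the sake of example, not the study's actual code.
#include <FastLED.h>

#define STRIP_PIN   6     // assumed data pin for the windshield LED strip
#define STRIP_LEDS  60    // assumed strip length
#define BTN_SCAN    2     // rear-seat push-buttons (assumed wiring, active-low)
#define BTN_YIELD   3

CRGB strip[STRIP_LEDS];

enum Mode { IDLE, SCANNING, GIVING_WAY };
Mode mode = IDLE;

void setup() {
  FastLED.addLeds<WS2812B, STRIP_PIN, GRB>(strip, STRIP_LEDS);
  pinMode(BTN_SCAN, INPUT_PULLUP);
  pinMode(BTN_YIELD, INPUT_PULLUP);
}

void loop() {
  // The rear-seat researcher selects the cue with push-buttons.
  if (digitalRead(BTN_SCAN) == LOW)  mode = SCANNING;
  if (digitalRead(BTN_YIELD) == LOW) mode = GIVING_WAY;

  FastLED.clear();
  switch (mode) {
    case SCANNING: {
      // Sweep a short "pupil" back and forth to suggest the car is looking around.
      int pos = (millis() / 30) % (2 * STRIP_LEDS);
      if (pos >= STRIP_LEDS) pos = 2 * STRIP_LEDS - 1 - pos;  // bounce at the ends
      for (int i = -2; i <= 2; i++) {
        int p = pos + i;
        if (p >= 0 && p < STRIP_LEDS) strip[p] = CRGB::White;
      }
      break;
    }
    case GIVING_WAY:
      // Blink the whole strip as the implicit "I am giving way" cue.
      if ((millis() / 500) % 2 == 0) fill_solid(strip, STRIP_LEDS, CRGB::White);
      break;
    default:
      break;
  }
  FastLED.show();
}
```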
Combing through the data, the researchers found that the design using expressive eyes appeared to be the preferred method of communicating the vehicle's intent.
“With regards to the displays, the explicit eyes eHMI not only captured the most visual attention, but it also received good ratings for trust and clarity as well as the highest preference, whereas the implicit LED strip was rated as less clear and invited lower ratings of trust,” said Professor Gary Burnett, Head of the Human Factors Research Group and Professor of Transport Human Factors in the Faculty of Engineering.
“An interesting additional discovery was that pedestrians continued to use hand gestures, for example thanking the car, despite most survey respondents believing the car was genuinely driverless – showing that there is still an expectation of some kind of social element in these types of interaction,” he added.
The study was recently presented at the Ergonomics & Human Factors 2023 Conference. Down the track, the team is looking to investigate how other vulnerable road users naturally interact with autonomous vehicles, and has also recommended that research over longer time periods be undertaken "to understand how the public's response to a driverless car might change over time."
Source: University of Nottingham